[1] 0.5031
user system elapsed
2.936 0.004 3.027
In spite of the many benefits of functional programming, this approach
using apply() didn't help. Instead, things got worse. Since this could be
simply due to random sampling variation, I ran the code several times again,
with similar results.
So, let’s look at vectorizing this simulation.
sim3 <- function(nreps) {
   nb1 <- 10
   nb2 <- 6
   n1 <- 18
   n2 <- 13
   u <- matrix(runif(2*nreps),nrow=nreps,ncol=2)
   # set up the condition vector
   cndtn <- u[,1] <= nb1/n1 & u[,2] <= (nb2+1)/n2 |
      u[,1] > nb1/n1 & u[,2] <= nb2/n2
   return(mean(cndtn))
}
The main work is done in this statement:
cndtn <- u[,1] <= nb1/n1 & u[,2] <= (nb2+1)/n2 |
u[,1] > nb1/n1 & u[,2] <= nb2/n2
To get that, we reasoned out which conditions would lead to choosing
a blue marble on the second pick, coded them, and then assigned them to
cndtn.
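As a sanity check (a sketch of mine, not part of the book's code), the exact probability can be worked out with the law of total probability: the first pick is blue with probability nb1/n1, in which case the second box holds nb2+1 blue marbles out of its n2; otherwise it holds nb2 out of n2.

```r
# exact probability of a blue second pick, via the law of total probability
# (for verification only; the names mirror those in sim3)
nb1 <- 10; n1 <- 18  # box 1: 10 blue marbles out of 18
nb2 <- 6;  n2 <- 13  # box 2 after the transfer: 13 marbles in all
p.blue1 <- nb1/n1                    # first pick is blue
p.exact <- p.blue1 * (nb2+1)/n2 +    # a blue marble was added to box 2
   (1-p.blue1) * nb2/n2              # a non-blue marble was added
p.exact  # about 0.5043, consistent with the simulated values above
```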
Remember that <= and & are functions; in fact, they are vector functions,
so they should be fast. Sure enough, this brings quite an improvement:
> system.time(print(sim3(10000)))
[1] 0.4987
user system elapsed
0.060 0.016 0.076
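For comparison (a sketch of mine, not the book's code), a plain-loop version of the same simulation makes the logic behind cndtn explicit, handling one repetition at a time instead of all nreps at once:

```r
# loop version of the marble simulation (for comparison; sim3's
# vectorized cndtn evaluates all nreps repetitions in one shot)
sim.loop <- function(nreps) {
   nb1 <- 10; n1 <- 18
   nb2 <- 6;  n2 <- 13
   count <- 0
   for (i in 1:nreps) {
      if (runif(1) <= nb1/n1) {  # first pick blue: box 2 gains a blue
         if (runif(1) <= (nb2+1)/n2) count <- count + 1
      } else {                   # first pick non-blue
         if (runif(1) <= nb2/n2) count <- count + 1
      }
   }
   count/nreps
}
set.seed(1)
sim.loop(10000)  # about 0.50, agreeing with sim3
```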
In principle, the approach we took to speed up the code here could be
applied to many other Monte Carlo simulations. However, it's clear that the
analog of the statement that computes cndtn would quickly become quite
complex, even for seemingly simple applications.
Moreover, the approach would not work in "infinite-stage" situations,
meaning an unlimited number of time steps. Here, we are considering the
marble example as being two-stage, with two columns to the matrix u.
Performance Enhancement: Speed and Memory 311