This is another comparison between R and MATLAB (Python also in the mix this time). In previous rounds we discussed the differences in 3D visualization, differences in syntax, and input-output differences. Here we look at speed. Spoiler alert: MATLAB wins by a knockout.

A genuinely fair speed comparison across different software can be tricky. Almost all operations can be coded in more than one way; for example, you can write your own mean function as sum(x)/length(x), or you can use an existing built-in mean function. Then, when measuring computational speed, we need to be sure the measurement is completely separated from the actual computation. Your weight should not change when you step on the scales, so to speak.

Which operations shall we measure? I chose two fairly common tasks which are often performed jointly (or variants thereof): random sampling from a normal distribution, and computing the mean of that vector using a built-in mean function. We perform the exercise once with a vector of size 10,000 and once with a vector of size 100,000, using as simple an implementation as possible in all three languages.

Here are the results, multiplied by 1000 for readability, so 1 translates to 1/1000 of a second. The timings are not rounded in any meaningful way, and there is sufficient repetition that the results can be trusted.

MATLAB is impressively fast, with extremely low variability: about 6 times faster than R and about 3 times faster than Python. We also notice an asymmetry in the distribution of computational time; it may have something to do with the draws from the normal distribution (extreme draws perhaps slowing down the computation). The computational time scales linearly with the size of the vector being drawn. Keep in mind that the two operations, (1) drawing from a normal distribution and (2) computing the mean, are bundled together, so results may be somewhat different when it comes to only sampling, or only basic operations.
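To make the setup concrete, here is a minimal sketch of this kind of benchmark in Python. The exact scripts used for the comparison are not shown in the post, so the function name, the use of NumPy, and the repetition count below are my own assumptions; the point is only to illustrate timing the bundled operation (draw, then mean) with the clock kept separate from the work being measured.

```python
import time
import numpy as np

def bench(n, reps=1000):
    """Time the bundled operation: draw n normal variates, then take their mean.

    Hypothetical helper, not the author's actual code. Returns per-repetition
    timings in milliseconds, matching the post's "multiplied by 1000" convention.
    """
    rng = np.random.default_rng(42)
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()      # start the clock just before the work
        x = rng.standard_normal(n)    # (1) sample from a standard normal
        m = x.mean()                  # (2) built-in mean, not sum(x)/len(x)
        times.append(time.perf_counter() - t0)
    return [t * 1000 for t in times]  # so 1 reads as 1/1000 of a second

if __name__ == "__main__":
    for n in (10_000, 100_000):
        ms = bench(n)
        print(f"n={n}: median {np.median(ms):.3f} ms, max {np.max(ms):.3f} ms")
```

Collecting all repetitions (rather than just a total) is what lets one look at the variability and the asymmetry of the timing distribution, not only its average.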