Example: Mandelbrot Set Language Shootout
While trying to compare different languages and implementations of languages, I found myself needing a way to compare how different tools performed on the same logic.
Not finding anything I liked, I wrote code to do a straightforward calculation of the Mandelbrot set. There's nothing special about this test, other than the fact that it's computation-heavy, so it makes a nice semi-realistic speed test. Of course, as with all benchmarks, this should be taken for what it is: a single algorithm on a single test machine, tested with some care, but not perfectly. Fundamentally, it is a flawed benchmark, like every other benchmark.
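To make the shape of the test concrete, here is a minimal Python sketch of the kind of escape-time inner loop such a benchmark grinds. The iteration cap and escape radius are illustrative assumptions, not necessarily the parameters of the timed code.

```python
def mandel(real, imag, max_iter=255):
    """Return the iteration at which c = real + imag*i escapes,
    or max_iter if it appears to stay bounded."""
    z_r, z_i = 0.0, 0.0
    for i in range(max_iter):
        # z = z*z + c, written out on plain floats to stay
        # "normal programmer" simple -- no arrays, no cleverness
        z_r, z_i = z_r * z_r - z_i * z_i + real, 2.0 * z_r * z_i + imag
        if z_r * z_r + z_i * z_i > 4.0:  # |z| > 2 guarantees escape
            return i
    return max_iter
```

Mapping this function over a grid of points gives the full image; the timing comes almost entirely from the floating-point arithmetic in the loop body.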
Then I wrote it for another language. Then another. Then I found more Python implementations. It all got out of hand.
In all of the code, I have tried to be as straightforward as possible. I am not an expert in performance-tuning any of these languages, and I don't want to test code that has received that treatment. My goal is to test “normal programmer” code.
The actual timing results: Mandelbrot Set timing results and code.
Dynamic Binding is Expensive.
The original motivation for this code was to distinguish between dynamically-typed (and dynamically-bound) Python code and statically-typed Cython code. In this case, the Cython implementation differs only in its static type declarations.
The payoff is a 30x speed difference. Deciding what “+” means every time around a loop is expensive.
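As a sketch of what that difference looks like (this is illustrative, not the exact benchmark source), the Cython version keeps the Python logic but pins the types, so the compiler can emit direct floating-point operations instead of dynamic dispatch:

```cython
# Illustrative Cython sketch: the loop body is ordinary Python;
# only the static type declarations are new.
def mandel(double real, double imag, int max_iter=255):
    cdef double z_r = 0.0, z_i = 0.0, tmp
    cdef int i
    for i in range(max_iter):
        tmp = z_r * z_r - z_i * z_i + real  # "+" is now a known C double add
        z_i = 2.0 * z_r * z_i + imag
        z_r = tmp
        if z_r * z_r + z_i * z_i > 4.0:
            return i
    return max_iter
```

With the types declared, the compiler no longer has to look up what “+” means on each iteration; it is fixed at compile time.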
For individual languages, there are orders-of-magnitude difference between running times, depending on the choice of implementation and/or options when using them.
Maybe tools matter more than language
Particularly for dynamically-typed languages, a good JIT compiler makes a huge difference. Dynamic binding is expensive, but the JIT can compile a statically-bound version of a function when it decides it's necessary. There will be some startup cost to that, but apparently not much.
What's a fast language? Unclear.
When I started doing these tests, I still thought I knew that there were “fast” and “slow” languages. Now, I'm not sure.
Maybe I could now argue that there are languages in which it is easier to write fast code, but that's going to depend radically on the nature of the calculations at hand.
One Benchmark Isn't Enough
This one benchmark is a very small data point: it grinds an inner loop with floating-point calculations without even using an array. That's far from the only thing compilers might be good or bad at. Compare the PyPy speed tests and their wealth of workloads.
But after writing the same code in more than a dozen different languages, I'm done. If you want more benchmarks, write your own.