Slide 6 of 66

It was mentioned in class that these metrics are relative to a sequential, single-threaded C program. If that C program were itself optimized and parallelized, the gap would be even more extreme, since parallelization can be very difficult in languages such as PHP and Python.


Benchmarking a scripting language's performance (such as Python's) can be unreliable. A garbage-collector problem in Python 2.6 caused unexpected run times for Python code. I was advised in my research never to measure Python's performance this way, because it doesn't say much about the actual complexity of the algorithm.
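A minimal sketch of one reason naive timings mislead: Python's `timeit` module disables the cyclic garbage collector while timing by default, and re-enabling it via the `setup` argument lets you see how collector activity leaks into measurements. (The snippet and its numbers are illustrative, not from the post above.)

```python
import timeit

# Allocation-heavy snippet: building many short-lived nested lists
# creates work for the cyclic garbage collector.
stmt = "[[i] for i in range(1000)]"

# timeit disables the garbage collector by default while timing,
# so this measures only the code itself.
t_no_gc = timeit.timeit(stmt, number=1000)

# Re-enable GC inside the timed region to include collector pauses
# in the measurement (the documented trick from the timeit docs).
t_gc = timeit.timeit(stmt, setup="import gc; gc.enable()", number=1000)

print(f"GC off: {t_no_gc:.4f}s  GC on: {t_gc:.4f}s")
```

Which run is faster varies by machine and workload, which is exactly the point: the same code under the same interpreter can report different times depending on runtime machinery that has nothing to do with the algorithm.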


I was trying to look more into Java's performance (I was surprised it was so good given what people usually say about Java) but fell into a rabbit hole about benchmarking Java code. This was a pretty interesting read:


Very cool post @CaptainBlueBear.


An interesting thing to note: with the exception of JavaScript, the divide between fast and slow is exactly the divide between statically and dynamically typed languages. This is no coincidence: the type tagging and checking that dynamic typing requires incurs a large run-time overhead. My guess is that if the JavaScript hadn't been run on V8, its bar would look more like the other dynamically typed languages'.
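A rough Python illustration of this overhead (my own sketch, not from the chart): a hand-written loop pays bytecode dispatch plus dynamic type checks on every `+=`, while the built-in `sum()` runs the same additions inside the interpreter's C loop, skipping per-iteration dispatch.

```python
import timeit

def py_sum(n):
    # Each iteration the interpreter inspects the runtime type tags of
    # `total` and `i` before dispatching the addition.
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
t_loop = timeit.timeit(lambda: py_sum(n), number=100)
t_builtin = timeit.timeit(lambda: sum(range(n)), number=100)
print(f"Python loop: {t_loop:.3f}s  built-in sum: {t_builtin:.3f}s")
```

The built-in version is reliably several times faster. It still handles dynamically typed objects, so this mostly isolates interpreter-loop overhead; a statically typed, compiled language avoids both costs, which is consistent with the fast/slow split in the chart.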

(Looking at the source website, the same trend still holds: OCaml, Swift, and Haskell are all relatively fast, and Perl joins the slow group.)


The divide between fast and slow is also roughly the divide between compiled and interpreted languages (with the exception of JavaScript on V8, which I believe is dynamically typed but still compiled by V8). This could also have contributed to the results shown above.