It was mentioned in class that these metrics are relative to a sequential, single-threaded C program. If it were optimized and parallelized, the difference would be even more extreme, since parallelization can be very difficult in languages such as PHP and Python.
Benchmarking scripting languages' performance (such as Python's) can be unreliable. A garbage-collector problem in Python 2.6 caused unexpectedly long run times for Python code (http://stackoverflow.com/questions/3916553/python-garbage-collection-can-be-that-slow). I was advised in my research never to measure Python's performance this way, because it doesn't say much about the actual complexity of the algorithm.
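As a quick illustration of how the collector can skew a measurement (a minimal sketch, not tied to the 2.6 bug in the linked question; exact numbers will vary by machine and Python version), note that `timeit` actually disables the cyclic garbage collector during timing by default, precisely because of effects like this. You can re-enable it in the setup string to see the difference:

```python
import timeit

# An allocation-heavy statement that gives the cyclic collector work to do.
setup = "data = []"
stmt = "data.append({i: str(i) for i in range(20)})"

# timeit disables gc by default; pass gc.enable() in setup to time with it on.
with_gc = timeit.timeit(stmt, setup="import gc; gc.enable(); " + setup, number=20000)
without_gc = timeit.timeit(stmt, setup=setup, number=20000)

print("gc enabled: %.4fs, gc disabled: %.4fs" % (with_gc, without_gc))
```

The point is less the absolute numbers than that the same statement can report noticeably different times depending on interpreter bookkeeping, which is why single measurements of scripting-language code say little about an algorithm itself.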
I was trying to look more into Java's performance (I was surprised it was so good, given what people usually say about Java) but fell into a rabbit hole about benchmarking Java code. This was a pretty interesting read: http://www.ibm.com/developerworks/library/j-benchmark1/
Very cool post @CaptainBlueBear.
(Looking at the source website, the same trend still holds: OCaml, Swift, and Haskell are all relatively fast, and Perl joins the slow group.)