Previous | Next --- Slide 41 of 43

The Moto X 2nd gen (link to spec) has a quad-core 2.5 GHz Krait 400. Compare that to the MacBook Air (link to spec), which has a 1.6 GHz dual-core Intel Core i5 (Turbo Boost up to 2.7 GHz).

Based on the numbers alone (frequency and core count), the Moto X beats the MacBook Air, which it shouldn't. I realize these numbers alone aren't a good performance comparison, which leads to a question: how do you compare processors' performance? What are the criteria?


@bysreg FLOPS (floating-point operations per second) is a rather good metric for comparing compute power.
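To make the FLOPS idea concrete, here is a rough sketch of how you might estimate it yourself, assuming NumPy is available. An n×n matrix multiply performs about 2n³ floating-point operations, so dividing that count by the wall-clock time gives an approximate rate. Real benchmarks like LINPACK control for far more (cache effects, warm-up, precision), so treat this as illustrative only.

```python
# Rough GFLOPS estimate via matrix multiply: an n x n matmul performs
# about 2*n^3 floating-point ops (n multiplies + n-1 adds per entry).
# Illustrative sketch only -- not a rigorous benchmark.
import time
import numpy as np

def estimate_gflops(n=512, trials=5):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        a @ b                       # the timed work
        best = min(best, time.perf_counter() - start)
    flop = 2.0 * n ** 3             # approximate op count for one matmul
    return flop / best / 1e9        # billions of FLOPs per second

print(f"~{estimate_gflops():.1f} GFLOPS")
```

Taking the best of several trials (rather than the mean) reduces noise from OS scheduling and cold caches.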

Other differences between the chips are the microarchitecture choices. The Intel chip is certainly more deeply pipelined and has more specialized hardware to make certain things faster, since that was Intel's priority. For the Krait 400, the designers probably focused more on power efficiency than on raw computation speed.


@bysreg, you compare processors by benchmarking them on a workload similar to what you plan to run on them. The published marketing numbers are only useful when comparing very similar CPUs.
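The point above can be sketched in a few lines: time the workload you actually care about instead of reading spec sheets. The sort workload here is just a stand-in; substitute whatever your machine will really run.

```python
# Sketch: benchmark a representative workload rather than trusting
# spec-sheet numbers. The sort below is a placeholder workload.
import random
import time

def benchmark(workload, repeats=10):
    """Return the best-of-N wall-clock time for workload() in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

data = [random.random() for _ in range(100_000)]
t = benchmark(lambda: sorted(data))
print(f"sort of 100k floats: {t * 1e3:.2f} ms")
```

Running the same harness with the same workload on two machines gives you a far more meaningful comparison than GHz x core count.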


Adding on to the answer above: synthetic benchmarks are widely published; SunSpider (a JavaScript benchmark) and AnTuTu are pretty popular for mobile platforms. As you would expect, a lot of things make a big difference in how various chipsets perform on these benchmarks: caching schemes, on-board memory, abstraction at the firmware level, etc. Apple has kept a firm grip on integrating software and hardware to get the best performance, even though on paper their specs are rather modest.

This link has the Apple A9 (dual-core) performing better than octa-core Snapdragon and Exynos chips.

This is a rather high-level, abstract answer, but it's in the same spirit as the reason the max-frequency graph flatlined in one of the earlier slides: there's more to how programs get executed than just hardware specs.


Are benchmark results self-reported? Are there any mechanisms to control for Volkswagen-style optimizations (where the system learns to identify the test) while a benchmark is being executed? This seems quite common.
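One common defense against that kind of gaming, sketched below under illustrative assumptions: generate a fresh randomized input on every run and verify the output, so an implementation can't hard-code answers for a fixed test. Real suites (SPEC, for example) also rely on run rules and auditing, which no code snippet can capture.

```python
# Sketch of one anti-gaming defense: randomized inputs plus output
# verification, so a "cheating" implementation can't special-case a
# fixed test vector. Purely illustrative harness.
import random

def make_workload(seed=None):
    """Fresh random input each run; a fixed answer can't be memoized."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(10_000)]

def run_benchmark(fn):
    data = make_workload()
    result = fn(data)
    return result == sorted(data)   # check correctness, not just speed

print(run_benchmark(sorted))        # a real sort passes verification
```

Verifying the output matters as much as randomizing the input: a cheater that returns garbage quickly should fail, not win.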


That's an interesting issue.


Another interesting thing to consider with benchmarks is potential bias from the compiler. In particular, the Intel compiler is known to generate executables that disable optimizations when they detect they are running on an AMD processor.
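The dispatch logic behind that behavior can be sketched as follows. The vendor strings ("GenuineIntel", "AuthenticAMD") are the real identifiers returned by the x86 CPUID instruction, but the function and path names here are hypothetical, just to illustrate the pattern: the fast path is gated on the vendor string rather than on whether the CPU actually supports the instructions.

```python
# Hypothetical sketch of vendor-gated dispatch: the runtime checks the
# CPUID vendor string and only takes the vectorized path on Intel parts,
# falling back to generic code on other vendors -- even when those
# vendors support the same instruction set extensions.
def select_code_path(vendor_string, has_avx):
    if vendor_string == "GenuineIntel" and has_avx:
        return "avx_optimized"
    return "generic_scalar"   # AMD lands here despite supporting AVX

print(select_code_path("GenuineIntel", True))   # avx_optimized
print(select_code_path("AuthenticAMD", True))   # generic_scalar
```

A feature-based check (`if has_avx:` alone) would give both vendors the fast path; gating on the vendor string is exactly what makes the comparison biased.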