Slide 19 of 41

The very large computers (like Tianhe-2 and Titan) seem to perform well below their theoretical peak compared to smaller computers. In class, we talked about how interconnect quality might be one factor. What are the others?


If someone is planning to build a supercomputer, how do they decide which to use as the accelerator: Xeon Phi or an Nvidia GPU?

According to Nvidia's own experiments, Tesla performs much better than Xeon Phi on their benchmark applications. But I suspect there's some bias in those experiments, and there are probably plenty of applications that are better suited to Xeon Phi.

Also, according to the next slide, Nvidia GPUs seem more energy-efficient.

Then why does Xeon Phi still have a good market share? What is its advantage over GPUs?



One reason someone building a supercomputer might choose a Xeon Phi instead of an Nvidia GPU is that it runs x86 code. This means you don't have to rewrite existing code, or develop in a different language, in order to use it.

I thought this article provided some interesting perspective on the Xeon Phi.


@uncreative Thanks! The article is really helpful!


Recently, the U.S. government banned Intel from exporting Xeon and Xeon Phi chips to upgrade Tianhe-2.

It will be interesting to see if China will try to incorporate a homegrown CPU into the supercomputer one day.