Is there anything wrong with having the machines be connected by ethernet? Ethernet is quite fast and reliable. What would be a preferable alternative?
(Of course, it's perhaps annoying to deal with all the cables, but the programmer doesn't care.)
Ethernet does of course have a bandwidth restriction: 10GBASE-T tops out at 10 Gigabits/sec, or about 1.25 GB/sec (and Cat-6 cable only supports that rate over shorter runs, around 55 m; Cat-6a goes the full 100 m). For most applications that's plenty fast enough, but with as much processing power as the 105 TFLOPS here, it might not cut it. If your nodes need to exchange data often, the network can end up being the bottleneck. For example, it pales in comparison to the RAM bandwidth of 60 GB/sec quoted here.
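To put the gap in numbers, here's a quick back-of-the-envelope sketch. The 10 Gb/s and 60 GB/s figures come from the discussion above; the 1 GB payload is just an illustrative assumption:

```python
# Back-of-the-envelope: time to move 1 GB over 10 Gb/s Ethernet
# versus reading it from RAM at 60 GB/s (figures from above).
ethernet_bps = 10e9          # 10 Gigabits/sec
ram_bytes_per_s = 60e9       # 60 GB/sec
payload_bytes = 1e9          # 1 GB, purely illustrative

t_eth = payload_bytes * 8 / ethernet_bps   # seconds over Ethernet
t_ram = payload_bytes / ram_bytes_per_s    # seconds from RAM

print(f"Ethernet: {t_eth:.3f} s, RAM: {t_ram:.4f} s, "
      f"ratio: {t_eth / t_ram:.0f}x")
# → Ethernet: 0.800 s, RAM: 0.0167 s, ratio: 48x
```

So even at full line rate, the wire is roughly 48 times slower than local memory, which is why communication-heavy workloads feel it first.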
What would typically be used in this type of system to connect the machines if the bandwidth limitations of Ethernet became a significant problem?
Here's an article written by engineers at Cray about their supercomputing interconnect. It looks like they use a combination of electrical and optical links: electrical (copper) links connect nearby nodes into a group, and optical links connect the groups to each other. The per-blade NICs are custom ASICs running some sort of proprietary protocol (so no TCP here, I guess). Each link provides about 13 Gbps per direction, and each blade has 48 router links.
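Taking the article's figures at face value (13 Gbps per direction, 48 router links per blade), a quick sketch of the aggregate per-blade bandwidth:

```python
# Aggregate per-blade bandwidth from the Cray figures above:
# 48 router links at ~13 Gbps per direction each.
link_gbps = 13
links_per_blade = 48

aggregate_gbps = link_gbps * links_per_blade   # per direction
print(f"~{aggregate_gbps} Gbps per direction per blade "
      f"(~{aggregate_gbps / 8:.0f} GB/s)")
# → ~624 Gbps per direction per blade (~78 GB/s)
```

That's a couple of orders of magnitude beyond a single 10 Gb/s Ethernet port, and in the same ballpark as the RAM bandwidth mentioned earlier.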
Overall, it seems that every supercomputer has a unique interconnect, probably tailored to the kind of work the system is intended for. But these interconnects are definitely more powerful than a typical consumer Ethernet connection.
Minor correction: the Xeon Phi 5110P doesn't support AVX-512, and as far as I know, nothing currently released supports it either. The currently shipping Knights Corner line of Phis has its own 512-bit (16-wide single-precision) SIMD instructions, but they differ in both functionality and encoding from the AVX-512 spec.
@ojd: you are correct. Phi supports a different set of 512-bit SIMD instructions (much like AVX-512, but not exactly the same).