Slide 40 of 60
Vincent

Can anyone explain why the total peak single-precision performance is 105 TFLOPs? I'm confused.

Metalbird

@Vincent It adds up to 105 TFLOPs if you sum the per-node peaks: 0.844 TFLOPs (Xeon E5-2620 v3) + 4.2 TFLOPs (GPU) + 2 TFLOPs (Xeon Phi 5110P) = 7.044 TFLOPs per node, then multiply by the 15 nodes in the cluster: 7.044 × 15 ≈ 105.7 TFLOPs, which rounds to 105. I was initially unsure whether to count two 844-GFLOP Xeon E5-2620 chips per node, but the math works out with one, so it should be ok.
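The arithmetic above can be sanity-checked in a few lines of Python (the variable names are just labels for the figures quoted in this thread):

```python
# Per-node peak single-precision rates, in TFLOPs, as quoted above.
xeon_e5_2620_v3 = 0.844   # six-core Xeon E5-2620 v3 (counted once per node)
gpu = 4.2                 # GPU
xeon_phi_5110p = 2.0      # Xeon Phi 5110P
nodes = 15                # nodes in the cluster

per_node = xeon_e5_2620_v3 + gpu + xeon_phi_5110p
total = per_node * nodes
print(f"per node: {per_node:.3f} TFLOPs, cluster: {total:.1f} TFLOPs")
# prints "per node: 7.044 TFLOPs, cluster: 105.7 TFLOPs"
```

Counting a second 844-GFLOP CPU per node would instead give about 118 TFLOPs, which is how you can tell the slide's 105 TFLOPs assumes one.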