It seems that in the graph, higher performance is achieved on the right-hand side. This suggests we still need one huge core for executing the sequential portions of the code and can reserve only a small part of the chip for parallel code. Does that mean that in today's world, most workloads are still mostly sequential?
EggyLv999
We can see, especially in graph d, that the best speedups are actually achieved with a balance of small cores and large cores. Also, this graph says nothing about real-life workloads. It's simply a simulation of "if I plug in different sequential-vs-parallel ratios, what kind of performance do I get?" Real workloads can be anything, and there are definitely both mostly-sequential and mostly-parallel workloads out there.
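The "plug in different ratios" simulation can be sketched with a small analytical model. This is a Hill-and-Marty-style asymmetric-multicore speedup formula (an assumption on my part, since the original graph isn't reproduced here): a chip budget of `n` base-core units spends `r` of them on one big core with performance `sqrt(r)`, leaving `n - r` small cores that only help on the parallel fraction `f`.

```python
import math

def speedup(f, n, r):
    """Asymmetric-chip speedup under a Hill-Marty-style model (assumed):
    f -- parallel fraction of the workload
    n -- total chip budget in base-core equivalents
    r -- budget spent on the single big core, perf = sqrt(r)
    """
    big = math.sqrt(r)                 # big core's sequential performance
    serial_time = (1 - f) / big        # serial fraction runs on the big core alone
    parallel_time = f / (big + (n - r))  # parallel fraction uses big + small cores
    return 1 / (serial_time + parallel_time)

# With a 97.5%-parallel workload on a 256-unit budget, a mid-sized big
# core beats both extremes (all small cores, or one giant core):
for r in (1, 64, 256):
    print(r, round(speedup(0.975, 256, r), 1))
```

With these (hypothetical) numbers the `r = 64` design wins over both `r = 1` and `r = 256`, which matches the answer's point that a balance of core sizes gives the best speedup, even for a heavily parallel workload.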