Such a design chooses a larger number of simpler cores over one complex core. If programs can be executed in parallel, then we can potentially speed things up. However, each individual instruction stream may run slower than it would on one complex core, so programs that don't exploit parallelism suffer. The implication for programmers is that software had better take this into consideration and fully exploit the parallelism of the processor so compute resources are not wasted (e.g. spawning pthreads for independent work).
Since in this model adding a core reduces per-core speed by 25%, is this the main reason why we can't keep adding cores to increase speed?
You can't keep adding cores to increase speed if you don't have the parallelism needed to use them. Your parallel performance will only be as good as the amount of parallelism available.
True, @sriharis304. This is also called Amdahl's law:
The speedup we can get is limited by the serial portion of the program, and some amount of serial code is always present in real systems, which becomes the bottleneck for parallel execution. Thus, we can't keep adding cores to increase speed indefinitely.
@efficiens: Adding cores will increase speed only up to a point. Once the program can utilize only a certain number of cores and/or threads, anything beyond that limit is just excess resources. Thus, we need to write code that makes optimal use of multiple cores.