Are the scalability issues of shared caches due to bandwidth limitations or due to some other issue?
I think one reason for the scalability issue is cache size. With more processors sharing one cache, we have to increase the cache size, otherwise misses will happen frequently because the combined working set no longer fits.
On the other hand, increasing the cache size slows down access: a larger cache has a higher hit latency.
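To make the first point concrete, here is a minimal simulation sketch (my own toy example, assuming LRU replacement and a uniform random access stream): two processors sharing one cache behave like one access stream over a doubled working set, and the smaller the cache relative to that working set, the higher the miss rate.

```python
from collections import OrderedDict
import random

def miss_rate(cache_size, accesses):
    """Return the miss rate of an LRU cache of `cache_size` entries."""
    cache = OrderedDict()
    misses = 0
    for addr in accesses:
        if addr in cache:
            cache.move_to_end(addr)  # hit: refresh LRU position
        else:
            misses += 1
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return misses / len(accesses)

random.seed(0)
# Two "processors" with disjoint 64-entry working sets, interleaved:
# the shared cache effectively sees a 128-entry working set.
stream = [random.randrange(64) for _ in range(10_000)] + \
         [64 + random.randrange(64) for _ in range(10_000)]
random.shuffle(stream)

# A bigger shared cache cuts the miss rate (LRU is a stack algorithm,
# so a larger cache never misses more on the same stream).
print(miss_rate(32, stream) > miss_rate(64, stream))  # True
```

Of course this ignores latency entirely, which is the trade-off mentioned above: the larger cache that fixes the miss rate is also slower to hit in.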
In a shared cache, would it also be possible to have lock problems, i.e. waiting time for any one processor to read/write the cache?
@althalus, whether there are lock issues depends on the cache coherence implementation. I think using locks is one way to achieve cache coherence.
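As a software analogy of that lock-contention point (not the actual hardware mechanism, just a sketch I made up): if every access to the shared structure must go through one lock, the "processors" serialize on that lock, so waiting time grows with the number of sharers regardless of cache size or bandwidth.

```python
import threading

class SharedCache:
    """Toy shared cache guarded by a single lock (FIFO replacement).

    Every get/put acquires the same lock, so all threads serialize
    on it -- the lock itself is the scalability bottleneck.
    """
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.lock = threading.Lock()
        self.data = {}  # dict insertion order doubles as a FIFO queue

    def put(self, key, value):
        with self.lock:
            if key not in self.data and len(self.data) >= self.capacity:
                self.data.pop(next(iter(self.data)))  # evict oldest entry
            self.data[key] = value

    def get(self, key):
        with self.lock:
            return self.data.get(key)

cache = SharedCache(capacity=2)

def worker(tid):
    # Each thread hammers the same cache through the same lock.
    for i in range(100):
        cache.put((tid, i % 2), i)
        cache.get((tid, i % 2))

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(cache.data))  # capacity is respected: 2
```

Real coherence protocols (e.g. snooping or directory-based MESI) avoid a single global lock for exactly this reason, but the sketch shows why a naive lock-based scheme stops scaling.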