Slide 51 of 65
sbly

Can't you have arbitrarily many threads no matter how many contexts you have? Then you just need a context switch, right?

yixinluo

@sbly Correct, you can store arbitrarily many thread contexts in memory. The on-chip contexts exist to reduce context-switch latency; you can view them as explicit caches for thread contexts.
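To make the "explicit cache" analogy concrete, here's a toy cost model (my own sketch, not anything from the lecture; the slot count and cycle costs are made-up numbers): switching between threads whose contexts are resident in on-chip slots is cheap, while switching to a thread whose context lives only in memory requires an expensive spill and fill.

```python
# Toy model: K on-chip context slots act as an LRU cache over
# arbitrarily many thread contexts stored in memory.
ON_CHIP_SLOTS = 4      # hypothetical number of hardware contexts
CHEAP_SWITCH = 1       # cycles to switch between resident contexts (assumed)
MEMORY_SWITCH = 100    # cycles to evict one context and load another (assumed)

def switch_cost(schedule, slots=ON_CHIP_SLOTS):
    """Total switch cost of running thread ids in the given order."""
    resident = []      # ordered least- to most-recently used
    cost = 0
    for tid in schedule:
        if tid in resident:
            resident.remove(tid)       # hit: context already on chip
            cost += CHEAP_SWITCH
        else:
            if len(resident) == slots:
                resident.pop(0)        # miss: evict least recently used
            cost += MEMORY_SWITCH
        resident.append(tid)
    return cost

# Four threads fit in the slots: after the initial fills, every switch is cheap.
print(switch_cost([0, 1, 2, 3] * 3))
# Eight threads round-robin through four slots: every switch goes to memory.
print(switch_cost(list(range(8)) * 3))
```

With LRU eviction, round-robin over more threads than slots is the worst case: every switch misses, which is exactly the cache-pollution concern raised further down the thread.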

lixf

Also, I asked the TA the same question. He said that you can have many threads at the OS level, and the OS does one level of context switching to hide disk or network latency, etc. But at the CPU level, if you have 4 hardware contexts, you can run up to 4 threads without going to memory. I just thought these two layers of latency hiding were fascinating and worth sharing.
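The OS-level layer is easy to see from user code: you can create far more threads than there are hardware contexts, and the OS multiplexes them so their long-latency waits overlap. A minimal sketch (the 4x oversubscription factor and the sleep standing in for disk/network latency are my assumptions, not anything specific to the slide):

```python
import os
import threading
import time

def worker(results, i):
    # Simulate a long-latency operation (disk/network) with a sleep.
    time.sleep(0.1)
    results[i] = i * i

# Far more OS threads than hardware contexts on this machine.
n_threads = 4 * (os.cpu_count() or 1)
results = [None] * n_threads
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(n_threads)]

start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The waits overlap, so total time is far less than n_threads * 0.1 s serial.
print(f"{n_threads} threads finished in {elapsed:.2f} s")
```

The same trick happens again at the CPU level, except there the "scheduler" is hardware choosing among the resident contexts each cycle.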

asinha

I think what you're both saying is that you can have multiple threads per context, rather than the one thread per context these slides seem to imply. However, constantly switching among multiple threads pollutes the cache on each switch, forcing you to go to memory to refill the cache much more often than if there were only one thread per context. As we saw in the lecture's visual representation, going to memory takes much longer than accessing the cache, so to hide latency your best bet is either numerous threads per context or one thread per context, but nothing in between. Is it correct to equate an "on-chip context" with an "explicit cache for thread contexts," as people did above and I did in my comment?