pd43

I am a bit confused about how this 'easy' optimization works.

Suppose processor 1 makes a request to read A. If processor 2 wants to read A as well, it can see there is an outstanding request to read A. The slide shows it marking a bit in the 'P1 Request Table' entry to indicate that it too wants to read A. So are P1 and P2 both idling until there is a response on the Response Bus? I don't understand why there needs to be a 'share' bit. Can't P2 just wait and see what A is when it appears on the Response Bus?

kayvonf

@pd43: it essentially does.

However, P2 would want to make a note somewhere that it needs to take action when A arrives. Remember, P2 can move on and handle other memory requests while it is "waiting".
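To make the "note somewhere" concrete, here is a minimal sketch in C of how a request table entry with per-processor share bits might behave. This is not the actual hardware from the slide; the names (`RequestEntry`, `issue_read`, `response_arrived`, `fill_cache`, `NUM_PROCS`, `NUM_ENTRIES`) and the fixed sizes are assumptions chosen only to illustrate how a second read to the same line piggybacks on an outstanding request instead of issuing a new one.

```c
/* Hypothetical sketch of a split-transaction request table with share bits.
 * All names and sizes are made up for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_ENTRIES 8
#define NUM_PROCS   4

typedef struct {
    bool     valid;             /* entry tracks an outstanding request */
    uint64_t line_addr;         /* cache line being fetched            */
    bool     share[NUM_PROCS];  /* which processors want the response  */
} RequestEntry;

static RequestEntry request_table[NUM_ENTRIES];

/* Processor `proc` wants to read `addr`. If the line is already being
 * fetched, just set our share bit; no second bus request is issued.
 * Returns true if the caller should put a new request on the bus. */
bool issue_read(int proc, uint64_t addr) {
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if (request_table[i].valid && request_table[i].line_addr == addr) {
            request_table[i].share[proc] = true;  /* piggyback on the existing request */
            return false;
        }
    }
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if (!request_table[i].valid) {
            request_table[i].valid       = true;
            request_table[i].line_addr   = addr;
            request_table[i].share[proc] = true;
            return true;                          /* new request goes on the Request Bus */
        }
    }
    return true; /* table full: in real hardware the requester would stall (not shown) */
}

/* Data for `addr` appears on the Response Bus: every processor whose
 * share bit is set takes the fill, then the entry is retired. */
void response_arrived(uint64_t addr, const void *data) {
    (void)data; /* the fill itself is omitted here */
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if (request_table[i].valid && request_table[i].line_addr == addr) {
            for (int p = 0; p < NUM_PROCS; p++) {
                if (request_table[i].share[p]) {
                    /* fill_cache(p, addr, data);  -- hypothetical cache fill */
                }
            }
            request_table[i].valid = false;       /* retire the entry */
        }
    }
}
```

So the share bit is exactly the "note": it lets P2 go do other work and still be found when the single response for A comes back.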

lazyplus

@pd43: I think the 'share' bit is key to making this work correctly.

Based on the 'Tag check' state in slide 22, P2 might not be ready to handle the response immediately after it has issued the read request. If main memory sends the data when P1 is ready but P2 is not, P2 would miss that response and then wait forever.

So we have a synchronization problem again. It seems complicated to make sure the 'share' bit is seen by all components when they do the 'Tag check'.

monster

Here we should also set the line in P1 and P2 to the shared state rather than the exclusive state.
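Continuing the hypothetical sketch above (same assumed `RequestEntry` and `NUM_PROCS`), one way to pick the fill state would be to count how many share bits are set in the entry when the response arrives: more than one requester means everyone fills in Shared, while a single requester could still fill in Exclusive. This is only an illustration of the idea, not what the slide specifies.

```c
/* Hypothetical: choose the MESI fill state from the share bits of the
 * RequestEntry defined in the earlier sketch. */
typedef enum { STATE_INVALID, STATE_SHARED, STATE_EXCLUSIVE } LineState;

LineState fill_state(const RequestEntry *e) {
    int requesters = 0;
    for (int p = 0; p < NUM_PROCS; p++) {
        if (e->share[p]) {
            requesters++;
        }
    }
    /* Two or more readers of the same line must end up in Shared. */
    return (requesters > 1) ? STATE_SHARED : STATE_EXCLUSIVE;
}
```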