Slide 48 of 52
spjames

I think it's worth pointing out that these programming models are not mutually exclusive, and many programs make use of two or more. For example, a distributed map/reduce cluster could use a distributed file system to share data between instances. A master machine could use message passing to tell each compute instance what to do, and the instances could in turn use multiple threads and/or SIMD to process their data in parallel within a shared address space. They would then pass messages back to the master to indicate completion, failure, etc.
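A minimal sketch of this hybrid structure, assuming Python with `multiprocessing` queues standing in for the message-passing layer and `threading` for the shared-address-space layer (the `worker`/`master` names, the squaring workload, and the two-worker split are all invented for illustration, not anything from the lecture):

```python
import threading
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    """One compute instance: receives work via message passing, then uses
    multiple threads over a shared address space to process its chunk."""
    chunk = task_q.get()                       # "message" from the master
    partial = [0] * len(chunk)                 # shared among this worker's threads

    def thread_body(i):
        partial[i] = chunk[i] * chunk[i]       # thread i touches only index i

    threads = [threading.Thread(target=thread_body, args=(i,))
               for i in range(len(chunk))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    result_q.put(sum(partial))                 # message back: done, plus the result

def master(data, n_workers=2):
    """Master: message-passes a chunk to each instance, gathers the replies."""
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    step = len(data) // n_workers
    for w in range(n_workers):                 # tell each instance what to do
        task_q.put(data[w * step:(w + 1) * step])
    total = sum(result_q.get() for _ in range(n_workers))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(master([1, 2, 3, 4], n_workers=2))   # sum of squares: 30
```

In a real cluster the queues would be MPI sends/receives (or RPCs) between machines rather than between local processes, but the layering is the same: message passing at the outer level, shared memory within each node.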

woojoonk

It seems like the three models explained here are not necessarily distinct, in the sense that it is possible to mimic the behavior of one in the form of another. For instance, if MPI is used only to gather up the separate pieces of data, it would be similar to a shared address space program in which there is no need to worry about different threads accessing the same data (such as thread i only accessing sharedData[i]).