We learned in 18349 (Real Time Embedded Systems) about networking and bus communications, and a lot of latency came from trying to send messages on a shared bus. Although systems like CAN (controller area network) were pretty good, it's difficult to get 100% utilization and a lot of encoding is necessary. How well do processors on a shared bus handle talking to memory also on the same bus?
kapalani
In massively parallel machines with lots of cores, an alternative multiprocessor model is asymmetric multiprocessing, in which certain responsibilities are delegated to certain cores. In such machines it is sometimes beneficial to have the other cores send requests to a delegated core for a specific task, like modifying some data structure or interacting with hardware. Because only the delegated core touches that data structure, it gets mostly cache hits, whereas having all the cores manipulate the same structure would cause its cache lines to be bounced around several processors and therefore result in many more cache misses.
chenh1
What arbitration scheme does a shared bus use when multiple processors want to access memory in parallel?