Kapteyn

In programs like this where a process spawns a thread, does the program begin execution on a single core, which is then responsible for telling another core to execute the code in my_thread_start when pthread_create is called?

Also, when the second core finishes executing the thread and has calculated some values needed by the parent process, does it send the computed value back over some bus to the first core running the parent process? Or can the two cores share registers and agree on the register in which the computed value will be stored?

jazzbass

@Kapteyn to answer your second question:

You mention processes in your question, but I believe you're referring to threads. All threads share the same address space, so whatever value the threads need to share is stored in memory, and any of the threads can access it, regardless of which core they're running on. As far as I remember, registers are not shared between threads or cores (please comment below if this is not the case on some architectures).
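A minimal sketch of what that looks like (only the name my_thread_start comes from the slide; the rest is my own assumed example): the spawned thread writes into a global variable, and after pthread_join the main thread reads the value straight out of the shared address space, with no register sharing involved.

```c
#include <pthread.h>
#include <stdio.h>

static int result;                   // lives in the one shared address space

void *my_thread_start(void *arg) {
    (void)arg;
    result = 42;                     // a plain store to shared memory
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, my_thread_start, NULL);
    pthread_join(tid, NULL);         // wait; the join also orders the memory accesses
    printf("result = %d\n", result); // main reads the value directly from memory
    return 0;
}
```

(Compile with -pthread. The join is what guarantees main sees the thread's write.)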

To communicate between separate processes, you'd need higher-level approaches such as message passing, files, signals, or others.
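For the process case, here's a tiny assumed example (not from the lecture) using a pipe as a simple message-passing channel between a parent and a fork'd child, since the two have separate address spaces:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                                 // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {                        // child: a separate process
        int value = 42;
        write(fd[1], &value, sizeof(value));  // send the value through the pipe
        _exit(0);
    }
    int received;
    read(fd[0], &received, sizeof(received)); // parent receives it
    wait(NULL);
    printf("received %d from child\n", received);
    return 0;
}
```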

Kapteyn

@jazzbass You're right, I meant threads.

The wiki article on multi-core processors says:

"...cores may or may not share caches, and they may implement message passing or shared memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar."

And the article on shared memory says:

"In computer hardware, shared memory refers to a block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system."

So it seems like either a bus (or some other interconnect) or shared RAM is used to implement inter-core communication.

vrkrishn

@jazzbass, you are correct that registers are not shared between cores. Sharing them would make the OS's job of scheduling tasks on multiple cores much harder, because it would have to keep the different cores from overwriting each other's register values during execution.

@Kapteyn, basically the thread-creation call (like fork, it is a system call under the hood) has hardware support for dropping the system into kernel mode. The kernel is then in charge of placing the new thread's context on a different core. As you can see, this switching between your user process and the kernel can and does happen even on a single-core machine.

As for sending the values back to the parent, we will learn about different frameworks for this later; in a shared address space, though, the spawned thread can simply store the values in memory, after which the parent can access them.
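For example (an assumed sketch, not from the lecture), the parent can pass the worker the address of a result slot; because both threads share one address space, the worker's store is visible to the parent once pthread_join returns:

```c
#include <pthread.h>
#include <stdio.h>

typedef struct { int input; int output; } task_t;   // assumed result slot

void *worker(void *arg) {
    task_t *t = (task_t *)arg;
    t->output = t->input * t->input;   // "send" the value by writing memory
    return NULL;
}

int main(void) {
    task_t task = { .input = 7, .output = 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, worker, &task);
    pthread_join(tid, NULL);           // after this, the worker's write is visible here
    printf("worker computed %d\n", task.output);
    return 0;
}
```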

afa4

@Kapteyn That's right. Cores can communicate with each other using either inter-processor interrupts (IPIs) or shared memory.

TA-lixf

All of the answers here make sense under the assumption that the spawned threads are, in fact, scheduled on different physical cores. This may not always be the case. For example, it's perfectly possible for the kernel to schedule two threads on the same core, say if hyperthreading is turned on or if the other cores have more work to do. This is essentially an abstraction vs. implementation problem: as programmers we are using the thread interface without much guarantee of how it is implemented. Note that there are techniques that let programmers break through this abstraction, such as thread pinning or other control interfaces exposed by the operating system (see the sketch below).
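Roughly what pinning looks like on Linux (a sketch only; pthread_setaffinity_np is a GNU extension, and the choice of core 0 is just an example):

```c
#define _GNU_SOURCE                  // for pthread_setaffinity_np / CPU_SET
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                // allow only logical core 0

    // Pin the calling thread; the kernel will no longer migrate it to other cores.
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed\n");
        return 1;
    }
    printf("pinned to core 0\n");
    return 0;
}
```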