Is it true that there can't be two tasks on one core at the same time, i.e. only after one task finishes on a core can another task in the program's task set start on that core?
@CC I don't think so. I believe a task executes much like a thread, so tasks can context switch on the same core: task 1 runs for a bit, then task 2 runs for a bit, then task 1 again, and so forth.
@lol In my view, a task and a thread also differ significantly. Specifically, two threads on a core will take turns running, but two tasks don't have to. For example, if there are 100 tasks on a single-core machine, they won't all take turns; some tasks will only start after others finish. Conversely, 100 threads on a single core will still very likely take turns running, and that makes the context switching very expensive.
I want to verify whether my understanding is correct. Assume the user wants to create 100 tasks. The ISPC implementation is responsible for creating an appropriate number of threads, say 10 threads, to run those tasks. Once it creates P1...P10 and, say, lets tasks T1...T10 run on them, the OS manages the threads' life cycles and decides which runs. Say T1 finishes on P1; then ISPC chooses the next task, T11, to run on P1, and so on until T100 finishes. Is that correct?
@pavelkang I think of tasks like this: tasks are to pthreads as foreach is to program instances. Your example makes sense. Tasks are just units of work that can be done in parallel with each other, so having the runtime assign tasks to pthreads (and the OS schedule those pthreads) would seem to be the implementation of the abstraction (tasks).
I have read the ISPC task code, which is in the 'common' folder of the assignment handout. It is essentially a cross-platform multithreading library, and it uses pthreads when compiled on Linux.
@365sleeping, I am still confused about this point. You say "two tasks don't have to take turns to run". To me, tasks are carried out by worker threads. If two threads carrying different tasks are scheduled on the same core (so they can run in an interleaved way), does this mean that tasks can run concurrently on the same core?
@haboric Yes, tasks can run concurrently on the same core. I said "don't have to" just to point out the difference between tasks and threads, hence the extreme example of 10000 tasks. Clearly, creating 10000 threads at once would drain kernel resources, so the number of threads created might instead be 16, 32, 64, etc. Thus, some tasks may not start (they wait in a shared queue) until other tasks have fully completed. However, when one thread has run for a while and other threads are waiting, the kernel will decide that it has run long enough and put it to sleep. Concretely, say we have two cores and each core has two thread contexts: 4 threads are best suited to a compute-intensive workload (there is no benefit from latency hiding), but 8 tasks may still be better than 4 tasks, because the number of worker threads can stay at 4 while 8 tasks achieve better load balancing.
FYI, the source code shows that the actual number of threads will be the number of cores, n (n - 1 newly created threads plus the main thread; see InitTaskSystem(), Line 720).
@CC Maybe that's not the case when a core supports hyper-threading.
The main thing you are told (in the ISPC User's Guide) about how tasks execute is that very little is guaranteed: tasks may run in any order, and possibly concurrently.
Which means they could function exactly like pthreads, or nothing at all like pthreads. They could even run completely serially and still satisfy what is described there.
So while what @365sleeping says is true now, it may not be true in the future.
Since we have tasks that can help us achieve multi-core execution and that seem to be more efficient (correct me if I'm wrong) than pthreads, is there any reason we would still rely on pthreads for parallel execution of computations?
@chuangxuean if you wanted to fine tune performance for a system, using pthreads would offer you more control than using ISPC with tasks.
Or, if you are using a non-x86 processor, you can no longer use ISPC.