Slide 10 of 57
rsvaidya

Every scheduled thread has a working set, so if only a small number of threads interleave, a thread's data will still be in cache when it is swapped back in. It is therefore better not to generate a very large number of threads.

paracon

Though we should fill up all execution contexts to maximize throughput, the number of threads should not be much larger than the number of execution contexts, since their aggregate working set will cause cache evictions (thrashing).
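The cache-sizing argument above can be sketched as a back-of-the-envelope calculation. All sizes here are hypothetical, chosen only to illustrate how the aggregate working set bounds the useful thread count:

```python
# Toy estimate of how many interleaved threads fit in cache before their
# aggregate working sets start evicting each other (thrashing).
# Both sizes below are assumed values, for illustration only.
CACHE_BYTES = 1 * 1024 * 1024    # assume a 1 MB last-level cache
WORKING_SET_BYTES = 64 * 1024    # assume a 64 KB working set per thread

max_threads_in_cache = CACHE_BYTES // WORKING_SET_BYTES
print(max_threads_in_cache)      # 16: beyond this, working sets start evicting each other
```

Past that bound, each extra thread displaces another thread's data, so a swapped-in thread no longer finds its working set in cache.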

machine6

I/O Multiplexing is often an overlooked technique when talking about scaling web-servers.

The idea is that rather than having a pool of worker threads with each thread 'owning' a socket or file descriptor, we let a single thread handle multiple sockets. This approach is rewarding for a number of reasons:

  • Context switching is an expensive operation. Rather than suffer the penalty of switching to a different thread for each request, we let a single thread handle a batch of such requests, minimizing that penalty.

  • In practice, the number of threads a process can create is limited. I/O multiplexing allows for more scalability by not restricting the number of active connections to the maximum number of spawnable threads.
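A minimal sketch of this pattern using Python's standard `selectors` module: one thread registers several sockets with a selector, blocks until any of them is readable, and services only the ready ones. The in-process socket pairs stand in for client connections; a real server would also handle writes, errors, and EOF.

```python
import selectors
import socket

# One selector lets a single thread multiplex many sockets: instead of one
# thread per connection, the thread blocks in select() until any registered
# socket is readable, then services just the ready batch.
sel = selectors.DefaultSelector()

def serve_ready_sockets(sel):
    """Echo data back on every socket that is currently readable."""
    echoed = []
    for key, _events in sel.select(timeout=1):
        conn = key.fileobj
        data = conn.recv(1024)
        if data:
            conn.sendall(data)        # echo back on the same socket
            echoed.append(data)
    return echoed

# Demo: three connected socket pairs standing in for client connections.
pairs = [socket.socketpair() for _ in range(3)]
for server_side, _client_side in pairs:
    sel.register(server_side, selectors.EVENT_READ)

pairs[0][1].sendall(b"hello")         # only one "client" writes...
print(serve_ready_sockets(sel))       # ...so only its socket is serviced
```

The same structure underlies `select`/`poll`/`epoll`-based servers in C; `selectors.DefaultSelector` simply picks the most efficient mechanism the OS provides.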

paramecinm

@machine6 The most important feature of I/O multiplexing is that one thread can wait for more than one I/O operation to become ready at the same time.

boba

Lower bound: We want to create at least as many processes as execution contexts to fill up the machine and maximize throughput.

Add more processes to hide latency, but not so many that the overhead of storing their execution contexts exceeds the cache.

Note that there is no added swapping overhead, because having more threads does not mean swapping more often.

manishj

Having more threads does not increase the rate at which context switches happen. However, if the number of threads is very high, it will cause thrashing.