It is also interesting to look at nginx's process model.
ask
Apache's parent process dynamically manages the size of the worker pool. There is a limit on the maximum number of workers the server may have; if more requests arrive than that number can serve, the excess requests are queued or dropped. Typically a few extra worker threads are kept on standby beyond the active ones, so that any new incoming request can be served immediately. Once those are used up, more workers are activated and placed on standby; conversely, when requests complete and no new ones arrive, the pool is shrunk. Thus the size of the worker pool scales with load.
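For the curious, the standby-pool behaviour described above maps onto a handful of Apache 2.4 directives (shown here for the worker MPM; the numbers are illustrative, not a recommendation):

```apache
# Sketch of the knobs controlling the scaling described above
# (Apache 2.4, worker MPM; values are illustrative).
<IfModule mpm_worker_module>
    StartServers            3     # child processes created at startup
    MinSpareThreads        25     # spawn more workers below this idle count
    MaxSpareThreads        75     # retire workers above this idle count
    ThreadsPerChild        25     # threads each child process runs
    MaxRequestWorkers     150     # cap on simultaneously served requests
    # Requests beyond MaxRequestWorkers wait in the listen backlog
    # and are refused if the backlog overflows.
</IfModule>
```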
lfragago
The clever idea Apache uses is that it creates workers at program startup, to avoid the overhead of creating and destroying workers while the system is running. This comes at little cost, since the OS schedules almost no processor cycles to idle workers.
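The pre-forking idea can be sketched in a few lines (a toy model, not Apache's actual code): the parent creates every worker up front, and idle workers simply block on a queue, costing essentially no CPU until a "request" arrives.

```python
# Minimal pre-fork sketch: workers are created once, at startup,
# and reused for every request; no fork/exit per request.
import multiprocessing as mp

def worker(requests, results):
    # Each worker loops forever, handling one request at a time;
    # None is the shutdown signal.
    for req in iter(requests.get, None):
        results.put(f"handled {req}")

def run_prefork(n_workers=4, n_requests=8):
    requests, results = mp.Queue(), mp.Queue()
    # Create the whole pool before any request exists.
    pool = [mp.Process(target=worker, args=(requests, results))
            for _ in range(n_workers)]
    for p in pool:
        p.start()
    for i in range(n_requests):        # simulate incoming requests
        requests.put(i)
    handled = sorted(results.get() for _ in range(n_requests))
    for _ in pool:                     # tell every worker to exit
        requests.put(None)
    for p in pool:
        p.join()
    return handled

if __name__ == "__main__":
    print(run_prefork())
```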
Levy
Nginx's process model is a single thread driving event-based operations... since all I/O is non-blocking, it performs surprisingly well in practice. Of course, it utilizes multiple cores poorly.
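The single-threaded, event-driven style can be sketched with Python's `selectors` module (which uses epoll/kqueue where available); this is a toy echo server in that spirit, not nginx's actual code:

```python
# One thread, one loop: non-blocking sockets plus OS readiness
# notifications, in the spirit of an nginx worker.
import selectors
import socket

def serve_events(listener: socket.socket, n_conns: int) -> None:
    """Echo data back on up to n_conns connections, then return."""
    sel = selectors.DefaultSelector()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, data=None)
    done = 0
    while done < n_conns:
        for key, _ in sel.select():
            if key.data is None:                 # listener is readable
                conn, _ = key.fileobj.accept()   # -> new connection
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="conn")
            else:                                # client socket readable
                chunk = key.fileobj.recv(4096)
                if chunk:
                    # Echo back; fine for small payloads in a sketch
                    # (a real server would buffer partial writes).
                    key.fileobj.sendall(chunk)
                else:                            # client closed
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
                    done += 1
    sel.close()
```

Nothing here ever blocks on a single slow client, which is why one thread handles many connections; spreading such loops across cores is exactly what nginx's multiple worker processes are for.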