Throughput shouldn't (in theory) change under heavy load. Individual requests will take longer (higher latency), but requests per second served, in the steady state, will still be whatever the system can maximally serve.
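A toy discrete-time model can illustrate this: once offered load exceeds the server's capacity, throughput caps at that capacity while the backlog (and hence latency) keeps growing. All numbers here are made up for illustration.

```python
CAPACITY = 100  # hypothetical: requests the server can finish per second

def steady_state(offered_load, seconds=10):
    """Return (throughput, final backlog) after `seconds` of constant load."""
    queue = 0
    served_total = 0
    for _ in range(seconds):
        queue += offered_load          # new arrivals this second
        served = min(queue, CAPACITY)  # server works at its max rate
        queue -= served
        served_total += served
    return served_total / seconds, queue

# Below capacity: throughput tracks offered load and no backlog forms.
tput_light, backlog_light = steady_state(offered_load=80)   # 80.0, 0
# Above capacity: throughput saturates at CAPACITY; the backlog grows
# without bound, which is where the rising per-request latency comes from.
tput_heavy, backlog_heavy = steady_state(offered_load=150)  # 100.0, 500
```

The growing `backlog_heavy` is the queueing delay each new request inherits, even though served requests per second never drops below `CAPACITY`.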
abist
I think throughput could also decrease in cases where some requests get slowed down because they depend on responses from other requests or servers.
Corian
For assignment 4, I noticed there was a focus on getting most requests under a certain latency, which led my group to starve some requests to achieve a better score. Is this common among larger websites, or do they focus on average latency instead?
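The tension here is visible in how the two metrics respond to starved requests. A quick sketch with made-up latency samples: a percentile target can look perfect even while a slice of users is badly starved, whereas the mean at least registers that the tail exists.

```python
import statistics

# Hypothetical latency samples (ms): 95% of requests are fast,
# 5% are starved and take 2 seconds.
latencies = [10] * 95 + [2000] * 5

mean_ms = statistics.mean(latencies)               # pulled up by the tail
p90_ms = statistics.quantiles(latencies, n=100)[89]  # 90th percentile

# p90 is a clean 10 ms, hiding the starved 5% entirely,
# while the mean (109.5 ms) reflects that the tail exists.
```

In practice this is why tail-latency targets are usually stated at high percentiles (p99, p99.9) rather than p90 or the mean: the higher the percentile, the harder it is to "win" by sacrificing a small fraction of requests.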
ghawk
I guess the throughput change depends on the scheduling too. If the scheduler uses something like a round-robin policy, a lot of running jobs would be continuously swapped in and out, reducing throughput (with possible gains in latency for newly arriving jobs).
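This trade-off can be sketched with a toy comparison: run-to-completion pays one switch per job, while round robin pays a switch per time slice, so the same work takes longer overall. The switch cost, quantum, and job lengths below are all hypothetical.

```python
SWITCH_COST = 1  # time units lost per context switch (made up)
QUANTUM = 2      # round-robin time slice (made up)

def fcfs_finish_time(jobs):
    """Total time to run jobs back-to-back: one switch between each pair."""
    return sum(jobs) + SWITCH_COST * (len(jobs) - 1)

def rr_finish_time(jobs):
    """Total time under round robin: same work, many more switches."""
    remaining = list(jobs)
    time = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                run = min(QUANTUM, r)
                remaining[i] -= run
                time += run + SWITCH_COST
    return time - SWITCH_COST  # no switch needed after the last job

jobs = [10, 10, 10]
fcfs = fcfs_finish_time(jobs)  # 30 units of work + 2 switches = 32
rr = rr_finish_time(jobs)      # 30 units of work + 14 switches = 44
```

Three jobs finish in 32 time units run-to-completion versus 44 under round robin, so jobs-per-unit-time (throughput) drops, even though round robin lets each job start making progress sooner.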