Elias

The parallel Mandelbrot calculation on the first assignment illustrated quite clearly that balancing workloads is important for maximum performance. In that case, we could visualize the work intensity (via the image), and use that to come up with a decent approach to dividing the work a priori among threads.
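For concreteness, one a-priori scheme of that kind is to interleave rows among threads, so each thread gets a mix of cheap and expensive regions of the image. A rough sketch (the names here are made up, not from the assignment handout):

```cpp
#include <thread>
#include <vector>

// Hypothetical per-row work function standing in for the Mandelbrot kernel.
void computeRow(int row) { /* ... escape-time iteration for each pixel ... */ }

// Interleaved static assignment: rows are striped across threads a priori.
void interleavedStatic(int numRows, int numThreads) {
    std::vector<std::thread> threads;
    for (int t = 0; t < numThreads; t++) {
        threads.emplace_back([=] {
            // Thread t computes rows t, t + numThreads, t + 2*numThreads, ...
            for (int row = t; row < numRows; row += numThreads)
                computeRow(row);
        });
    }
    for (auto& th : threads) th.join();
}
```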

I'm curious: are there good ways of coming up with balanced workloads for problems that are hard to visualize? Are there good ways of dynamically adjusting workloads, so that long-running tasks are asymptotically optimally balanced? Is there a systematic approach to this problem, or do we have to take it on a case-by-case basis?

lament

Hmm... if you give me information about how you are distributing the workload, then I can design an adversarial algorithm that tries to scr- er, make your workload bad. So I suppose uniform random assignment (without repeated work assignment) sets a reasonable bar. Assignment 1 with Mandelbrot could have been handled that way if the orchestration/communication were in place so that no work is repeated.
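A minimal sketch of what I mean by uniform random assignment without repeats, assuming we can shuffle the task indices once up front and then split the shuffled order statically among threads (all names are illustrative):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Hypothetical stand-in for a generic piece of work.
void doTask(int id) { /* ... the actual work ... */ }

// Random static assignment: a single up-front shuffle, no runtime communication.
void randomStatic(int numTasks, int numThreads) {
    std::vector<int> order(numTasks);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(),
                 std::mt19937{std::random_device{}()});

    std::vector<std::thread> threads;
    for (int t = 0; t < numThreads; t++) {
        threads.emplace_back([&, t] {
            // Each thread walks its stripe of the random permutation,
            // so every task is done exactly once.
            for (int i = t; i < numTasks; i += numThreads)
                doTask(order[i]);
        });
    }
    for (auto& th : threads) th.join();
}
```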

lament

I'm betting someone is going to prove me wrong.

Well, I guess uniform random assignment still wouldn't work if I make just one task out of thousands incredibly hard (not an ISPC task - I mean a generic piece of work). So the question should probably be restricted to cases where a "good" workload exists, and ask how an assignment within epsilon of optimal can be achieved.

pmassey

What @lament says is correct, in that a single piece of work out of thousands being significantly harder defeats the uniform random assignment strategy. However, there is a "good" workload layout even in that situation: give that piece of work to a single thread and let the other threads handle the easier tasks.

Expanding on the same idea, having a pool of work and letting workers draw tasks from it seems like an improvement over random assignment, but it requires significantly more communication.
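A minimal sketch of that pool idea, assuming a shared atomic counter hands out task indices (names are illustrative): fast threads naturally claim more of the cheap tasks, and the atomic increment is the extra communication being paid for.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical stand-in for a generic piece of work.
void doTask(int id) { /* ... the actual work ... */ }

// Dynamic work pool: workers repeatedly claim the next unclaimed task.
void dynamicPool(int numTasks, int numThreads) {
    std::atomic<int> next{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < numThreads; t++) {
        threads.emplace_back([&] {
            // fetch_add hands out each index exactly once across all threads.
            for (int i = next.fetch_add(1); i < numTasks; i = next.fetch_add(1))
                doTask(i);
        });
    }
    for (auto& th : threads) th.join();
}
```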