Nicki_Minaj

Another algorithm for completing this assignment involves everyone except the back row placing their number face-up on the back of their head. Each person in the back row quickly computes the max of their column and passes it down the row toward the center, as in class. The final answer comes from the center of the back row.

This violates the communication restrictions we imposed earlier by allowing different messages to be sent simultaneously to different processors. However, humans can compute the max of ~6 numbers approximately as fast as the max of 2 numbers, giving this algorithm a span proportional to the length of a row.
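In Python, a rough sketch of the scheme might look like this (treating the classroom as a list of rows, which is just my choice of representation):

    def classroom_max(grid):
        # grid is a non-empty list of equal-length rows of numbers.
        # Everyone but the back row shows their number; each back-row
        # person takes the max of their whole column in one "glance".
        back_row = [max(col) for col in zip(*grid)]
        # The back row then reduces toward the center from both ends,
        # each carrier keeping a running max as the value is passed.
        mid = len(back_row) // 2
        return max(max(back_row[:mid + 1]), max(back_row[mid:]))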

TDollasAKATMoneys

How do you know that "humans can compute the max of ~6 numbers approximately as fast as 2 numbers"? I feel like that's just speculation...

But otherwise I really like your proposed algorithm, even though it breaks the communication restrictions. You could even take it a step further: have the back row of the upper half of the room compute the max of the numbers in front of them, have the back row of the lower half do the same, then have each half reduce along its row toward the center. Both of those rows will end up with one answer at their center, and the final answer is the greater of the two.

russt17

Another randomized solution that might have gone very quickly, but requires mass broadcasting:

Everyone holds their number up high and randomly scans the other slips of paper they can see held up. If a person sees any number smaller than their own, they put theirs down and stop scanning. The number of papers to scan diminishes quickly, and soon only one person, the one holding the minimum, remains.

The computation time might be logarithmic in expectation with respect to the number of papers.
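A quick toy simulation (my sketch, not a proof) suggests the number of rounds does grow slowly:

    import random

    def rounds_until_one(n, trials=200):
        # Each round, every standing person glances at one random
        # standing person and sits down on seeing a smaller number.
        # Distinct values 0..n-1 stand in for the slips of paper.
        total = 0
        for _ in range(trials):
            standing = list(range(n))
            while len(standing) > 1:
                glance = {p: random.choice(standing) for p in standing}
                standing = [p for p in standing if glance[p] >= p]
                total += 1
        return total / trials  # average rounds until only the min remains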

aoeuidhtns

@russt

I was thinking the exact same thing!

In practice, if implemented with computers and networks, it might not perform as well, because each computer needs to broadcast quickly to every other computer.

BryceToTheCore

While it is pedagogically important to impose restrictions on communication when trying to emulate the behavior of processors, we should not overlook algorithms that are constrained only by reality and the human experience, rather than by man-made machines.

I think it is useful to think about the fastest way tasks may be performed by humans without restriction. Humans have many parallel processors that they take for granted. For example, the visual cortex is capable of very fast feats of perception. In the third demo, what we did was reduce the problem of parallel computation to one of perception and communication. In this way, humans can theoretically compute an answer in time bounded only by the speed of light and sound.

Consider the problem of finding a single red glyph in a set of otherwise black glyphs. If a non-computationally-enlightened friend of ours were to look at a poster containing a billion of these glyphs, they could likely tell us the location of the red one very quickly, thanks to their sensitivity to contrasts in the light emitted from the poster. If we were to give a computer the same set of glyphs, it would need to check every one of them for the redness property.

Because of the ubiquity of the special hardware built into the human body, many people take the algorithmic tasks they perform daily for granted. This psychological predilection for underestimating just how amazing they are sometimes inhibits people from learning the conceptual models necessary for implementing algorithms on computers.

On the other hand, it is very interesting to think about which tasks may be done more efficiently using human hardware or natural physics rather than digital computers. One of my favorite sorting algorithms that may be understood in terms of gravity is bead sort: http://en.wikipedia.org/wiki/Bead_sort
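Here is a small Python rendering of the idea (my sketch; non-negative integers only):

    def bead_sort(nums):
        # Each number is a row of beads; column_counts[c] is how many
        # beads land in column c once gravity pulls them all down.
        max_val = max(nums, default=0)
        column_counts = [sum(1 for n in nums if n > c) for c in range(max_val)]
        # After the fall, the i-th row from the bottom holds one bead
        # per column still containing more than i beads, which is
        # exactly the i-th largest value. Reverse for ascending order.
        return [sum(1 for cc in column_counts if cc > i)
                for i in range(len(nums))][::-1]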

My final paragraph will discuss a method I came up with that tries to exploit the parallelism of the human visual system in a way that may be better than repeated broadcasting, perceiving, and querying.

I believe that an average human can quickly find the minimum of a small set of numbers on slips of paper, so it is conceivable that we could improve performance by laying all of the slips on a large table, splitting them into groups of size S, having humans use their visual parallel processors to find the minimum of their groups, and then passing those minima up to further humans in a tree-like fashion. We could determine S through experimentation over multiple trials.
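A sketch of that tree reduction (S is the experimentally chosen group size; this assumes S >= 2 and at least one slip):

    def tree_min(slips, S):
        level = list(slips)
        while len(level) > 1:
            # One round: each human eyeballs the minimum of S slips
            # at a glance, with all groups scanned in parallel.
            level = [min(level[i:i + S]) for i in range(0, len(level), S)]
        return level[0]  # about log_S(n) rounds in total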

Regardless, we can save time by eliminating verbal communication, eliminating physical motions such as turning one's head, standing up, and raising hands, and by exploiting the inherent parallelism of the human perception system, with each single human processor performing minimum computations on larger sets at a time.

russt17

@aoeuidhtns

Actually, thinking about it more, you might not need broadcasting. Each processor randomly picks other "active" processors and reads their numbers, then deactivates itself if it sees a smaller number. So the algorithm could work like this:

Have a globally available list of memory addresses of each processor's number.

Then each processor follows this loop:

1) If the global list has only one address left, stop; this processor holds the minimum number.

2) Randomly choose an address from the global list.

3) If the value at that address is less than this processor's own value, delete this processor's own address from the global list and stop; otherwise, go back to (1).

Would this work?
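Sketching it sequentially in Python (a simulation of the rounds, not real parallel code; it assumes the values are distinct, since duplicate minima would never see a strictly smaller number and the loop would never end):

    import random

    def shared_list_min(values):
        active = list(range(len(values)))      # the global list of addresses
        while len(active) > 1:                 # step 1
            for pid in list(active):
                peer = random.choice(active)   # step 2
                if values[peer] < values[pid]: # step 3: saw something smaller,
                    active.remove(pid)         # so delete own address and stop
        return values[active[0]]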

BryceToTheCore

@russt17 I believe that your algorithm is sound, but I question the performance of the global list with regard to the delete operation. Using a standard array- or linked-list-based list, it would take roughly O(n) time to delete an address in a way that keeps the random selection mechanism working for the other processors.

If you used a hash table, the deletion might be faster, and random selection would be a bit more distributed across separate structures.

Other issues involve making the data structure safe with respect to parallel accesses and mutations, but I think we could get around this with a perfect hashing scheme where each address maps to a separate location holding a boolean value. But then finding a random "on" processor would be difficult.

I think the needs for the structure are: 1. finding an "on" candidate, 2. removing an address from the structure, and 3. parallel friendliness for the first two operations.

It seems difficult to achieve all three of these perfectly, but some elaborate scheme involving persistence, tombstones, and amortized analysis may bear fruit.

http://en.wikipedia.org/wiki/Tombstone_(programming)
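For needs 1 and 2 alone, one single-threaded scheme (swap-with-last deletion; my suggestion, and it still punts on need 3, parallel safety) gets both in O(1):

    import random

    class ActiveSet:
        def __init__(self, addresses):
            self.items = list(addresses)
            self.pos = {a: i for i, a in enumerate(self.items)}

        def random_choice(self):               # need 1: O(1) random pick
            return random.choice(self.items)

        def remove(self, address):             # need 2: O(1) deletion
            i = self.pos.pop(address)
            last = self.items.pop()
            if last != address:                # fill the hole with the
                self.items[i] = last           # old last element
                self.pos[last] = i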

toothless

With the original communication restrictions, I believe that the proposed algorithm (pass within each row to the center aisle, then pass within that column to the center) is optimal. Consider a graph with the participants as vertices and an edge between each pair of participants who are allowed to communicate. Then you can't do better than diameter/2 and still ensure correctness. This is because whoever calls out the final answer must know information about every vertex, and conveying any information from one vertex to another vertex at distance d takes at least d sequential iterations. And if you take the two endpoints of a diameter, then one of them must be at least diameter/2 away from whoever calls out the final answer.

I believe that the proposed algorithm achieves exactly diameter/2. So great job, whoever proposed this algorithm - I think your algorithm was optimal under the original communication constraints.
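For what it's worth, a quick BFS check (assuming a grid of seats where each person can talk to their 4 neighbors, which is my reading of the restrictions):

    from collections import deque

    def radius_and_diameter(rows, cols):
        def eccentricity(src):
            dist = {src: 0}
            q = deque([src])
            while q:
                r, c = q.popleft()
                for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    nr, nc = nbr
                    if 0 <= nr < rows and 0 <= nc < cols and nbr not in dist:
                        dist[nbr] = dist[(r, c)] + 1
                        q.append(nbr)
            return max(dist.values())
        eccs = [eccentricity((r, c)) for r in range(rows) for c in range(cols)]
        return min(eccs), max(eccs)  # (radius, diameter)

The center seat's eccentricity (the radius) is what the row-then-column scheme achieves, and the radius is always at least diameter/2.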