BryceToTheCore

I believe the idea of colocating tasks is very important for implementing recursive geometric computations such as ray tracing, because instead of running each work item to completion, we can do partial work on each one and group together the items that are operating on the same object in memory.

It seems that, in some cases, spatial locality extends from the programmer's abstract mental model of memory into the metaphors used by the algorithms themselves.
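
As a rough sketch of what I mean by colocating work (the names here are hypothetical, not from the lecture): rays are advanced one traversal step at a time and binned by the scene node they will touch next, so all the rays that need a given object are processed while its data is hot in the cache.

    // Minimal, self-contained sketch of binning ray work by the scene node
    // it touches next. stepRay is a placeholder for "advance one traversal
    // step"; a real tracer would walk a BVH or grid here.
    #include <cstdio>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Ray { float t = 0.0f; };   // placeholder ray state
    using NodeId = int;
    constexpr NodeId kDone = -1;

    // Placeholder step: returns the next node to visit, or kDone when finished.
    NodeId stepRay(Ray& r, NodeId node) {
        r.t += 1.0f;
        return (node < 3) ? node + 1 : kDone;
    }

    void traceWithBinning(std::vector<std::pair<Ray, NodeId>> work) {
        while (!work.empty()) {
            // Bin pending rays by the node they are about to visit.
            std::unordered_map<NodeId, std::vector<Ray>> bins;
            for (auto& [ray, node] : work) bins[node].push_back(ray);

            std::vector<std::pair<Ray, NodeId>> next;
            for (auto& [node, rays] : bins) {
                // All rays in this bin touch the same node: good locality.
                for (Ray& r : rays) {
                    NodeId n = stepRay(r, node);
                    if (n != kDone) next.push_back({r, n});
                }
            }
            work = std::move(next);
        }
    }

    int main() {
        std::vector<std::pair<Ray, NodeId>> rays(8, std::make_pair(Ray{}, 0));
        traceWithBinning(rays);
        std::printf("done\n");
    }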

kayvonf

Yes, it's almost the other way around though. Since many algorithms and computations have intrinsic locality in them, it makes sense that machines be developed to exploit that locality for greater efficiency and performance.

You can think of it in either direction:

  1. Since it is fundamentally expensive to move data, algorithms must exhibit significant locality to run well on computers.
  2. Since real-world algorithms tend to have locality, machines that want to run them efficiently should optimize for this characteristic: e.g., build a complex memory hierarchy that allows fast access to small working sets (see the small illustration below).
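
To make point 2 concrete, here is a small assumed illustration (the sizes, and any particular speedup, are not from the lecture): the same sum is computed twice over the same matrix, once walking memory contiguously and once striding across it. The arithmetic is identical; only the access pattern, and therefore how well the caches are used, changes.

    // Same work, two access patterns: on a machine with caches, the
    // contiguous (row-major) walk is typically several times faster.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 4096;
        std::vector<float> a(static_cast<size_t>(N) * N, 1.0f);

        auto benchmark = [&](bool rowMajor) {
            auto start = std::chrono::steady_clock::now();
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    sum += rowMajor ? a[i * (size_t)N + j]   // contiguous accesses
                                    : a[j * (size_t)N + i];  // strided accesses
            auto end = std::chrono::steady_clock::now();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
            std::printf("%s: sum=%.0f, %lld ms\n",
                        rowMajor ? "row-major" : "column-major",
                        sum, (long long)ms);
        };

        benchmark(true);
        benchmark(false);
    }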
BryceToTheCore

@kayvonf Your arguments sound reasonable.

I was under the assumption that, before the advent of computers, humans designed practical algorithms, such as assembly lines for cars and the colocation of office workers, and that algorithmic ways of thinking grew out of those earlier ways of designing things.

I think you are saying instead that the old practical algorithms, the computer algorithms, and the computer architectures all arose from the intrinsic cost of movement.

It makes me happy that the friendly machine makers are making machines to help out the algorithmists as opposed to the algorithmists tailoring algorithms to suit machines.

I think that in an ideal world the algorithmists would only need to think about problems and mathematics, without ever considering the needs of the machine, but as it stands there is no magic compiler, so the two groups are forced to get along and respect each other. Right now, the algorithm people, especially when writing parallel code, are tailoring their programs to the machine. In time this bidirectional relationship should become one-directional.

kayvonf

@BryceToTheCore. But if we want to get philosophical, I'd say that physics is the reason why communication is expensive -- and that all computations (even before digital computers) are designed in the context of the real-world constraint that moving things or information is expensive. Societies build cities near water sources for a reason, information transfer is limited by the speed of light, great industrial dynasties like Carnegie Steel excelled at vertical integration to minimize the cost of transferring materials and goods, etc...

HLAHat

Hmmm, how would we get to a one-directional relationship? Any device or algorithm we make today is inherently limited by the current research and knowledge. Who knows what kind of hardware breakthroughs or algorithmic techniques will be discovered in the future?

It would be impossible for someone working strictly in hardware to know ahead of time what parallel algorithms are going to be run. Similarly, an algorithm designer can't really predict what the hardware configurations of the future will look like, so how can he/she make an algorithm tailored to those specifications? Hardware designers will increasingly need to learn the mathematics, and the algorithm designers will increasingly need to understand the hardware.

The only thing I can think of that would always make things easier is to have those disciplines work together as much as possible. We would be cutting down on the communication costs between them so there would be less overhead in getting work done! Basically, as research continues, I can only see the disciplines growing closer together and needing each other more.

BryceToTheCore

@HLAHat

I would say that, in theory, if the hardware designers were to create perfect machines and compilers, such that any description of a procedure by an algorithm programmer were automatically implemented efficiently on the machine, then the algorithm designer would no longer need to worry about the machine.

I like to think of it like this: if I were to create a library that ray traces images, the user of the library should not need to know anything about it other than what behavior to expect for a given input. In much the same way, if I were to make a machine that I want people to program, I would ideally require the programmer to know how they want their procedure to run, but I should not require them to know details of the machine that are not inherent in the description of the problem they want to solve. My client should ideally be able to specify only the necessary details: what ordered procedure and dependencies need to be executed.

Keeping these disciplines separate would also enable the hardware people to optimize their machines in the future without worrying about breaking the algorithmic code, as long as the interface and its semantics are not changed.

We would still need to start over if some technological breakthrough happens that is beyond our current anticipation. But even if a quantum computer were invented, it would be nice if I could give it my standard sorting algorithm specification, listing the steps and work dependencies, and the quantum computer could map my algorithm to an efficient use of quantum processors under the hood.
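
As a hypothetical sketch of the kind of interface I have in mind (all names are made up): the client declares only the steps and the dependencies between them, and the runtime, whatever hardware it sits on, decides how to map the work onto the machine.

    // The caller specifies only tasks and dependencies; the scheduler below is
    // a trivial sequential reference. A smarter runtime could run independent
    // tasks in parallel or on specialized hardware without changing this API.
    #include <cstdio>
    #include <functional>
    #include <vector>

    struct Task {
        std::function<void()> work;   // what to do
        std::vector<int> dependsOn;   // indices of tasks that must finish first
    };

    void run(const std::vector<Task>& graph) {
        std::vector<bool> done(graph.size(), false);
        bool progress = true;
        while (progress) {
            progress = false;
            for (size_t i = 0; i < graph.size(); i++) {
                if (done[i]) continue;
                bool ready = true;
                for (int d : graph[i].dependsOn)
                    if (!done[d]) { ready = false; break; }
                if (ready) {
                    graph[i].work();
                    done[i] = true;
                    progress = true;
                }
            }
        }
    }

    int main() {
        std::vector<Task> graph = {
            { [] { std::printf("load data\n"); },    {} },
            { [] { std::printf("sort chunk A\n"); }, {0} },
            { [] { std::printf("sort chunk B\n"); }, {0} },
            { [] { std::printf("merge\n"); },        {1, 2} },
        };
        run(graph);   // the caller specified only steps and dependencies
    }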