The slide mentions briefly that we need knowledge of Gauss-Seidel to know that this change is okay. I'm curious about some more details there. Is there a mathematical analysis of the two algorithms that shows that they give results within some epsilon of each other? Or, did people simply run the different algorithms on a wide variety of test cases to see if the change was "good enough" in some hand-wavey sense?

Is there some general approach used when making these adaptations? Are approximations of the same solution common, or does it more often happen that the parallel algorithm gives exactly the same results? (I'd imagine this depends on the domain; for images and audio, approximations are probably more acceptable than for, say, sorting or searching algorithms.)
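For concreteness, here is my understanding of the kind of change involved, as a hypothetical sketch (I'm assuming the parallel variant is a Jacobi-style update that reads only last-sweep values; the actual reformulation on the slide may differ):

```python
def gauss_seidel_sweep(u):
    # In-place sweep: each cell's update sees neighbor values that were
    # already updated earlier in this same sweep, so the loop is sequential.
    n = len(u)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                              u[i][j - 1] + u[i][j + 1])
    return u

def jacobi_sweep(u):
    # Out-of-place sweep: every update reads only the previous sweep's
    # values, so all cells could be computed in parallel.
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                u[i][j - 1] + u[i][j + 1])
    return new
```

The intermediate iterates differ between the two, but both iterations have the same fixed point (the solution of the underlying linear system), which is presumably the "knowledge of Gauss-Seidel" needed to justify the change: you have to know the modified iteration still converges to the same answer.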

grose

Since they're both "approximations", there might be pathological examples that never occur in the real world where either of them is wildly inaccurate. That might be a reason why a mathematical analysis is impossible, in which case you'd want to use test cases, which are also more representative of the kinds of data you'll actually get.

lament

I'm pretty sure that the study of "numerical analysis" goes over this sort of thing.

Ints and floats on computers aren't actual integers and real numbers, but both have well-defined algebraic properties that make them subject to formal analysis.
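For example, floating-point addition is associative over the reals but not over floats, which is exactly the kind of property numerical analysis reasons about when a parallel version reorders a computation:

```python
# Floating-point addition is commutative but not associative, so a
# parallel reduction that reorders a sum can legitimately change the
# result; numerical analysis puts bounds on how large that change can be.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # the big values cancel first, so the 1.0 survives -> 1.0
right = a + (b + c)  # 1.0 is absorbed into -1e16 by rounding -> 0.0

assert left == 1.0
assert right == 0.0
```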

apoms

Another great example of utilizing domain knowledge to reformulate an approach to solving a problem is bump mapping.

In graphics, we approximately represent the geometry of objects in our scene using triangle meshes. However, if we want extremely high geometric detail, we need a massive amount of vertex data to represent every part of the object. One way to get around this is to keep the simple triangle mesh but use a texture that describes the fine-scale variation of the surface (the "bumps"). This technique is called bump mapping, and it achieves most of the effect we want at a much lower memory and processing cost.
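A minimal sketch of the idea (hypothetical names; assuming a grayscale height map over a flat surface, which is one common way to drive bump mapping):

```python
import math

def perturbed_normal(height, i, j, scale=1.0):
    """Derive a shading normal from a grayscale height map at cell (i, j).

    The underlying geometry stays flat; only the normal used for
    lighting is perturbed, which is the essence of bump mapping.
    """
    # Central differences approximate the surface slope at (i, j).
    dh_dx = (height[i][j + 1] - height[i][j - 1]) * 0.5
    dh_dy = (height[i + 1][j] - height[i - 1][j]) * 0.5
    # Tilt the flat normal (0, 0, 1) against the slope, then normalize.
    nx, ny, nz = -scale * dh_dx, -scale * dh_dy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

Feeding the perturbed normal into a lighting term (e.g. a Lambertian dot product with the light direction) makes a flat quad shade as if its surface were bumpy, without adding any vertices.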

As in the example above, the result is not exactly the same, but the technique has historically enabled much more detailed scenes at a relatively minor cost.
