Note that if we have an image whose cells are all the same color, then the resulting matrix would be filled with all zeros!

makingthingsfast

I'm still a little confused as to how this is edge detection. Some of the computations we mentioned in class (such as getting a blank image from a one-color input, or getting 0 everywhere except at the boundary for a half-and-half image of two colors) still don't make total sense to me.

krillfish

So the way these two matrices (they're the Sobel filters) work is that they basically take the derivative of the image at each pixel.
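One way to see the "derivative" interpretation (a sketch in NumPy, which is my choice and not from the post): each Sobel kernel factors into a 1-D central-difference stencil and a 1-D smoothing stencil perpendicular to it.

```python
import numpy as np

# The Sobel kernel is the outer product of a smoothing stencil and a
# central-difference stencil (the "derivative" part).
deriv = np.array([-1, 0, 1])   # central difference, ~ d/dx
smooth = np.array([1, 2, 1])   # blurs across the other axis

gx = np.outer(smooth, deriv)   # the horizontal-gradient ("left") filter
print(gx)
# [[-1  0  1]
#  [-2  0  2]
#  [-1  0  1]]
```

So the filter differentiates along one axis while smoothing along the other, which makes the derivative estimate less sensitive to noise.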

Consider an edge

000111
000111
000111

Let's say that when we apply the filter, we extend the bounds of the image (replicating the border pixels) so that the filter won't pick up garbage outside the image. Then we convolve this padded image with the left filter. Our result would be

004400
004400
004400
  ^ EDGE!!

So right where the edge is, we get the highest response. (The maximum here is 4, since each column of the Sobel kernel has weights summing to 4.) These filters just take the difference in intensity going either left-to-right or top-to-bottom.
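The walkthrough above can be sketched in NumPy (my choice of tool, not the post's). Here the filter is slid without flipping (cross-correlation, as is common in vision), and the border is replicate-padded as described:

```python
import numpy as np

# Horizontal-gradient ("left") Sobel filter
gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])

# The 000111 edge image from the example
img = np.array([[0, 0, 0, 1, 1, 1]] * 3)

# "Extend the bounds": replicate the border pixels before filtering
padded = np.pad(img, 1, mode='edge')

out = np.zeros_like(img)
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        out[y, x] = np.sum(gx * padded[y:y + 3, x:x + 3])

print(out[0])  # [0 0 4 4 0 0] -- strongest response right at the 0->1 edge
```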

pavelkang

If a pixel is on a horizontal boundary in an image, then what's above it and below it should be very different. Convolving with the horizontal operator on this slide takes a weighted difference between the row below and the row above (the center column is weighted by 2, so it's roughly 2(pixel below - pixel above) plus smaller contributions from the neighbors), which amplifies the difference between what's above and what's below. In fact, we can use this idea to implement auto-focus: when we take out our phone to take a picture, the software adjusts the camera's focus to maximize the gradient response of the Sobel operator (https://en.wikipedia.org/wiki/Sobel_operator).
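A hypothetical focus score in the spirit of the auto-focus idea above (the original post names no specific formula; this uses the mean squared Sobel gradient magnitude, sometimes called the Tenengrad measure):

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GY = GX.T  # vertical-gradient filter

def sharpness(img):
    """Mean squared Sobel magnitude: larger = stronger gradients = sharper."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    score = 0.0
    for y in range(h):
        for x in range(w):
            win = p[y:y + 3, x:x + 3]
            score += np.sum(GX * win) ** 2 + np.sum(GY * win) ** 2
    return score / (h * w)

crisp = np.zeros((8, 8)); crisp[:, 4:] = 1.0        # hard in-focus edge
blurry = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # smooth out-of-focus ramp
print(sharpness(crisp) > sharpness(blurry))  # True
```

An auto-focus loop would sweep the lens position and keep the setting that maximizes this score.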

stl

A good way to see how convolution with these filters extracts gradients is to consider two extremes: an image that is all one color, and an image that is black on one side and white on the other. For the one-color image, the weights on either side of the filter's zero column cancel each other out, so the convolution yields 0 at every pixel. But in the half-black, half-white case, a filter placed right on the edge sees very different pixel values on its two sides, so their contributions do not cancel and the response is large.
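The one-color extreme can be checked in a couple of lines (NumPy sketch, my addition): every filter weight multiplies the same value c, and the Sobel weights sum to zero, so the response is c * 0 = 0.

```python
import numpy as np

gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

flat = np.full((3, 3), 7.0)  # a constant patch of arbitrary "color" 7
print(gx.sum())              # 0 -- the weights cancel
print(np.sum(gx * flat))     # 0.0 -- so any constant patch responds with 0
```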

Lawliet

Just clarifying: the left filter extracts horizontal gradients (i.e., it detects vertical lines) and the right filter extracts vertical gradients (i.e., it detects horizontal lines).

For example,

010
010
010

convolved with the left filter gives

4,0,-4
4,0,-4
4,0,-4
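This example checks out in code (a NumPy sketch of my own, assuming replicate padding at the border as in the earlier walkthrough):

```python
import numpy as np

gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
img = np.array([[0, 1, 0]] * 3)  # a single vertical line

p = np.pad(img, 1, mode='edge')  # replicate the border pixels
out = np.array([[np.sum(gx * p[y:y + 3, x:x + 3]) for x in range(3)]
                for y in range(3)])
print(out)  # every row is [4, 0, -4]
```

The sign flips across the line because intensity rises going into it and falls going out of it.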

patrickbot

To expand on what others have been saying, each filter alone only detects vertical or horizontal edges. To detect edges not aligned with either axis, we need to combine the two convolution results (by taking the gradient magnitude). As posted above, you can read more at https://en.wikipedia.org/wiki/Sobel_operator.
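A sketch of that combination step (my NumPy illustration; the image and layout here are assumptions, not from the thread). On a diagonal edge, both filters respond partially, and the gradient magnitude fuses them into a single edge strength:

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GY = GX.T  # vertical-gradient filter

img = np.tril(np.ones((5, 5)))  # edge runs along the 45-degree diagonal

p = np.pad(img, 1, mode='edge')

def resp(k, y, x):
    """Response of 3x3 kernel k centered at pixel (y, x)."""
    return np.sum(k * p[y:y + 3, x:x + 3])

gx = np.array([[resp(GX, y, x) for x in range(5)] for y in range(5)])
gy = np.array([[resp(GY, y, x) for x in range(5)] for y in range(5)])

# Combined edge strength: neither filter alone captures the diagonal fully
mag = np.hypot(gx, gy)
```

On the diagonal, `mag` exceeds either single-filter response, which is exactly why the two results get combined.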
