If backpropagation still feels a little vague to you, this book chapter will definitely help!
anonymous
Can someone explain this multiplication and how it relates to convolution? I thought from the previous lectures that convolution involved multiplying a smaller square matrix (within the larger matrix) by a weight filter of the same size. Why are we only looking at a single column instead of a square?
patrickbot
It looks like we flatten the 3x3 filter into a single vector, w. There was nothing special about the original square shape of the filter except that it tells you the locations of the pixels that need to be weighted.
Here, X is a really big matrix that already encodes the location information. Each row corresponds to one pixel, and each entry in that row holds the value of one pixel in the 3x3 box around it. So the dot product of a row of X with w computes exactly the same weighted sum as sliding the square filter over that pixel's neighborhood.
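To make this concrete, here's a small NumPy sketch of the idea (the helper name `im2col_3x3` and the zero-padding choice are my own assumptions, not from the lecture): each row of X gathers one pixel's 3x3 neighborhood, and the whole convolution becomes a single matrix-vector product X @ w.

```python
import numpy as np

def im2col_3x3(image):
    """Gather each pixel's 3x3 neighborhood into one row of X.

    Hypothetical helper: assumes a single-channel image and zero
    padding so every pixel, including border pixels, gets a full
    3x3 box and X has one row per pixel.
    """
    padded = np.pad(image, 1)  # zero-pad so border pixels have full 3x3 boxes
    h, w = image.shape
    rows = []
    for i in range(h):
        for j in range(w):
            # padded[i:i+3, j:j+3] is centered on image[i, j]
            rows.append(padded[i:i+3, j:j+3].ravel())
    return np.array(rows)      # shape: (h*w, 9)

# A 4x4 test image and a 3x3 filter (Laplacian-like, just as an example).
img = np.arange(16, dtype=float).reshape(4, 4)
filt = np.array([[0.,  1., 0.],
                 [1., -4., 1.],
                 [0.,  1., 0.]])

X = im2col_3x3(img)             # each row = one pixel's 3x3 neighborhood
w_vec = filt.ravel()            # the flattened filter: the vector w
out = (X @ w_vec).reshape(4, 4) # one matrix-vector product = the whole convolution
```

Each entry of `X @ w_vec` is a row of X dotted with w, which is the same nine multiply-adds you'd do by sliding the square filter to that position, so no location information is lost by flattening.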