Slide 14 of 41
taylor128

Just making sure I understand this: applying many filters at once is NOT adding multiple layers to the neural network. The set of 3x3 convolution filters shown above is still one layer. And the intuition behind applying many filters at once is that we apply gradient filters along various directions, so that we can pick up the image's response better.
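That intuition can be sketched in a few lines (assuming NumPy/SciPy; the example kernels here are made up, not the ones from the slide): every filter in the bank sees the same input, and the responses are stacked into channels, all within a single layer.

```python
# Sketch: one conv "layer" = a bank of filters applied in parallel,
# each producing its own response channel.
import numpy as np
from scipy.signal import convolve2d

H, W = 8, 8
image = np.random.rand(H, W)

# A few 3x3 gradient-like filters along different orientations
# (hypothetical kernels for illustration).
filters = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),  # horizontal
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),  # vertical
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),  # diagonal
]

# Every filter is convolved with the SAME input; responses are stacked.
responses = np.stack(
    [convolve2d(image, f, mode="same") for f in filters], axis=-1
)
print(responses.shape)  # (H, W, num_filters), not (H, W, 1)
```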

jrgallag

I believe you're right - since convolution is associative, there's no difference between doing (((I x A) x B) x C)... and doing I x (A x B x C ...). So we could convolve all the filters together ahead of time and encode the composed kernel in a single layer.
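The associativity claim is easy to check numerically (a sketch assuming SciPy; the variable names are made up). Two caveats worth noting: the composed kernel is larger than 3x3 (two 3x3s compose into a 5x5), and the collapse only holds while everything stays linear - a nonlinearity between layers breaks it.

```python
# Check that chaining linear convolutions equals convolving by one
# composed kernel: (I * A) * B == I * (A * B).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
I = rng.standard_normal((10, 10))
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

chained = convolve2d(convolve2d(I, A, mode="full"), B, mode="full")
AB = convolve2d(A, B, mode="full")       # composed kernel grows to (5, 5)
collapsed = convolve2d(I, AB, mode="full")

print(AB.shape)                          # (5, 5)
print(np.allclose(chained, collapsed))   # True
```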

chuangxuean

I'm guessing that each filter is meant to have a different purpose and that the result of each filter is then passed on to the next, which implies the ordering of the filters affects the result. How then would this order be determined?

RX

I don't think the ordering has any effect; geometrically they should be the same, and the network should still be able to train and achieve the same accuracy.
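This is straightforward to verify under the parallel-bank reading (a sketch assuming NumPy/SciPy; the filters are invented for illustration): reordering the bank only permutes the output channels, so the responses themselves are unchanged.

```python
# Reordering a parallel filter bank just swaps the output channels.
import numpy as np
from scipy.signal import convolve2d

image = np.arange(36, dtype=float).reshape(6, 6)
fa = np.array([[-1, 0, 1]] * 3, float)                     # hypothetical filter
fb = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float) # hypothetical filter

out_ab = np.stack([convolve2d(image, f, mode="same") for f in (fa, fb)], axis=-1)
out_ba = np.stack([convolve2d(image, f, mode="same") for f in (fb, fa)], axis=-1)

# Same responses, just in swapped channel order.
print(np.allclose(out_ab, out_ba[..., ::-1]))  # True
```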

TanXiaoFengSheng

@chuangxuean, if that were the case, the output would be a W x H x 1 response instead of W x H x num_filters. I think you misunderstood the setting here: the filters are applied in parallel to the same input, not chained one after another.