Slide 46 of 51
jkorn

Are there proven lower bounds on number of layers needed for different tasks? For instance with image classification, is there a point where if you use fewer than X-many layers you don't consider enough features to distinguish, say, a cat from a dog? I suppose it also depends on the level of detail you want, for instance maybe "animal" vs. "fruit" is good enough for a certain task.

tarabyte

@jkorn I am not exactly sure, but in 10-601 we learned that the number of layers determines what kinds of decision boundaries you can draw. With no hidden layers, you can only make a linear classifier. With one hidden layer, the boundary can be an open or closed convex region, so, for example, we can have a circle as our decision boundary.

If you are interested, check out these slides: http://www.cs.cmu.edu/~mgormley/courses/10601-s17/slides/lecture19-nn.pdf

Slides 39 to the end would be most helpful :)
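To make the "more layers, richer boundaries" point concrete, here is a small sketch (my own illustration, not from the slides): a one-hidden-layer network with hand-picked weights and a threshold activation that computes XOR, whose positive region is non-convex in a way no zero-hidden-layer (linear) classifier can separate.

```python
def step(z):
    """Threshold activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """One hidden layer of two threshold units, weights set by hand."""
    h1 = step(x1 + x2 - 0.5)   # fires when at least one input is 1
    h2 = step(x1 + x2 - 1.5)   # fires only when both inputs are 1
    # Output unit: "at least one, but not both"
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # matches the XOR truth table
```

The two hidden units each define a half-plane, and the output unit combines them, which is exactly the "boundary built from intersections of half-planes" picture in the linked slides.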

crow

@jkorn the universal approximation theorem shows that a single hidden layer (with enough units) can approximate any continuous function on a bounded domain to arbitrary accuracy, so there is no theoretical lower bound beyond one hidden layer. The catch is that the theorem says nothing about how many hidden units that takes or whether the weights are learnable in practice, which is why deeper networks are still used.

holard

How well do these methods work with recurrent neural nets? Or does it not make a difference?