Slide 10 of 44

Error, or loss, is the difference between the answer we want the network to produce and the answer it actually produces. Depending on the task at hand, we choose a loss function that measures this difference, so we can quantify how wrong the network is. For example, the network gives us an answer for each training sample (like the pictures of professors), and whenever that answer doesn't match the label in our training data, we incur some loss. The following slides show how we can use gradient descent to tune the parameters to reduce the loss produced by the network.
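
To make the idea concrete, here is a minimal sketch (not from the lecture) of measuring loss and reducing it with gradient descent. It uses a hypothetical one-parameter "network" y = w * x and mean squared error as the loss; the function names are illustrative.

```python
import numpy as np

def predict(w, x):
    # Our tiny "network": a single parameter w scaling the input.
    return w * x

def mse_loss(w, x, y_true):
    # Loss: squared difference between desired and actual answers,
    # averaged over the training samples.
    return np.mean((predict(w, x) - y_true) ** 2)

def loss_gradient(w, x, y_true):
    # Derivative of the MSE loss with respect to w, derived by hand
    # for this one-parameter model.
    return np.mean(2 * (predict(w, x) - y_true) * x)

x = np.array([1.0, 2.0, 3.0])
y_true = np.array([2.0, 4.0, 6.0])  # ground truth follows y = 2x

w = 0.0    # start from a wrong parameter value
lr = 0.05  # learning rate (step size)
for _ in range(100):
    # Gradient descent: nudge w in the direction that lowers the loss.
    w -= lr * loss_gradient(w, x, y_true)

print(w)                        # close to 2.0 after training
print(mse_loss(w, x, y_true))   # loss close to 0
```

After repeated steps, the parameter settles near the value that makes the network's answers match the training data, which is exactly the tuning process the next slides walk through.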