Edit in Exercise 8

by Sena Kiciroglu
There has been a minor change to Exercise 8. In the figures describing the networks in Part 3, we noticed an unnecessary ReLU operation.

For LeNet-5: after the final convolution, we pass the result through a ReLU and then apply max pooling. Afterwards, we reshape the features from 16x7x7 to 784 and, according to the old figure, pass them through a ReLU again. This second ReLU is unnecessary, since the features had already passed through a ReLU just before. We have therefore removed it from the figure. The same applies to the 3-layered CNN.
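To see why the removed ReLU changes nothing, note that features coming out of ReLU followed by max pooling are already non-negative, so a second ReLU is the identity. A minimal NumPy sketch (layer sizes assumed from the 16x7x7 to 784 reshape mentioned above; the random feature map stands in for the output of the final convolution):

```python
import numpy as np

def relu(x):
    # elementwise max(x, 0)
    return np.maximum(x, 0)

def max_pool_2x2(x):
    # x: (C, H, W) with even H and W; non-overlapping 2x2 max pooling
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 14, 14))  # stand-in for the final conv output

pooled = max_pool_2x2(relu(feat))  # ReLU, then max pooling
flat = pooled.reshape(-1)          # reshape 16x7x7 -> 784

# The removed ReLU is a no-op: the pooled features are already non-negative.
assert flat.shape == (784,)
assert np.array_equal(relu(flat), flat)
```

Applying the second ReLU therefore leaves every value unchanged, which is exactly why it could be dropped from both figures without affecting the network.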