How does error get back-propagated through pooling layers?

I asked a question earlier that might have been too specific, so I'll ask again in more general terms. How does error get propagated backwards through a pooling layer when there are no weights to train? In the TensorFlow video at 6:36 (https://www.youtube.com/watch?v=Y_hzMnRXjhI) there's a GlobalAveragePooling1D after an Embedding layer. How does the error go backwards through it?

1 answer

  • answered 2022-01-13 06:44 Shai

    A layer doesn't need to have weights in order to back-propagate. You can compute the gradient of a global average pool w.r.t. its inputs: since every input contributes equally to the mean, the upstream gradient is simply divided by the number of elements pooled and spread evenly across them.
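    As a minimal NumPy sketch (the function names and shapes here are my own for illustration, not from the video or TensorFlow's implementation), the backward pass of a global average pool over the time axis just broadcasts the upstream gradient divided by the number of timesteps:

    ```python
    import numpy as np

    def global_avg_pool_1d(x):
        # x: (timesteps, features) -> (features,)
        return x.mean(axis=0)

    def global_avg_pool_1d_backward(grad_out, x_shape):
        # Each of the T timesteps contributed 1/T to the mean,
        # so the upstream gradient is spread evenly over them.
        T, F = x_shape
        return np.broadcast_to(grad_out / T, (T, F)).copy()

    x = np.random.randn(5, 3)      # 5 timesteps, 3 features
    grad_out = np.ones(3)          # pretend upstream gradient from the next layer
    grad_in = global_avg_pool_1d_backward(grad_out, x.shape)
    # every row of grad_in equals grad_out / 5
    ```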
    It is a bit trickier when it comes to max pooling: in that case, you propagate gradients through the pooled indices. That is, during back-prop, the gradients are "routed" only to the input elements that contributed the maximal values; no gradient is propagated to the other elements.
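    A corresponding sketch for a global max pool (again, my own illustrative code, not the library's implementation) records the argmax indices from the forward pass and sends the upstream gradient only to those positions:

    ```python
    import numpy as np

    def global_max_pool_1d_backward(x, grad_out):
        # x: (timesteps, features); the pooled output was x.max(axis=0).
        # The gradient is "routed" only to the argmax positions.
        grad_in = np.zeros_like(x)
        winners = x.argmax(axis=0)                      # winning timestep per feature
        grad_in[winners, np.arange(x.shape[1])] = grad_out
        return grad_in

    x = np.array([[1.0, 5.0],
                  [3.0, 2.0],
                  [2.0, 4.0]])
    grad_out = np.array([1.0, 1.0])
    # gradient flows only to x[1, 0] (=3.0) and x[0, 1] (=5.0)
    print(global_max_pool_1d_backward(x, grad_out))
    ```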
