Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset.
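The reformulation described above can be sketched in a few lines of NumPy. Instead of asking stacked layers to fit a desired mapping H(x) directly, a residual block fits F(x) = H(x) - x and outputs F(x) + x through an identity shortcut. The function and weight names below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # Plain stacked layers would fit H(x) = relu(W2 @ relu(W1 @ x)) directly.
    # A residual block instead computes the residual F(x) and adds the
    # identity shortcut before the final nonlinearity.
    f = W2 @ relu(W1 @ x)   # residual function F(x)
    return relu(f + x)      # shortcut connection: output is relu(F(x) + x)

# Hypothetical dimensions, for illustration only.
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W1 = rng.standard_normal((d, d))
W2 = np.zeros((d, d))  # with F(x) == 0 the block passes x through relu unchanged

y = residual_block(x, W1, W2)
```

Driving F(x) toward zero is easy for an optimizer (just shrink the weights), so an extra residual block can approximate the identity rather than having to relearn it, which is what makes very deep stacks trainable.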