Residual Connections are a type of skip connection that learns residual functions with reference to the layer inputs, instead of learning unreferenced functions.
Formally, denoting the desired underlying mapping as $\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}({x}):=\mathcal{H}({x})-{x}$. The original mapping is recast into $\mathcal{F}({x})+{x}$.
The intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers.
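The following is a minimal sketch of this formulation in PyTorch (the framework and layer choices are illustrative assumptions, not taken from the source): the stacked layers compute $\mathcal{F}(x)$, and the identity shortcut adds $x$ back before the final activation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Minimal residual block: output = F(x) + x, where F(x) is the stacked-layer mapping."""

    def __init__(self, channels: int):
        super().__init__()
        # Two conv + batch-norm layers form the residual function F(x); shapes are
        # preserved so the identity shortcut can be added without projection.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))  # F(x)
        return F.relu(residual + x)  # F(x) + x: the skip connection restores the identity
```

For example, `ResidualBlock(64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same shape; if the optimal mapping were the identity, the block only needs to drive the conv weights toward zero.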
Source: Deep Residual Learning for Image Recognition
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 53 | 7.01% |
| Retrieval | 38 | 5.03% |
| Semantic Segmentation | 28 | 3.70% |
| Question Answering | 27 | 3.57% |
| Large Language Model | 25 | 3.31% |
| Sentence | 14 | 1.85% |
| Object Detection | 14 | 1.85% |
| Benchmarking | 12 | 1.59% |
| Image Classification | 11 | 1.46% |
| Component | Type |
|---|---|
| Batch Normalization | Normalization (optional) |
| ReLU | Activation Functions (optional) |