Deep Learning
Chp 7: Regularization for deep learning
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting by Srivastava et al. (Demonstrates the regularization effect of dropout)
- Transformation Invariance in Pattern Recognition: Tangent Distance and Tangent Propagation by Simard et al. (Details on the tangent distance and tangent prop algorithms)
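The regularization effect demonstrated in the Srivastava et al. paper can be illustrated with a minimal sketch of inverted dropout (function name and signature are illustrative, not from the paper): during training each unit is zeroed with probability `p_drop` and the survivors are scaled by `1/(1 - p_drop)`, so no rescaling is needed at test time.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout (sketch): zero each unit with probability p_drop
    during training and scale survivors by 1/(1 - p_drop), so the
    expected activation matches the no-dropout forward pass."""
    if not training or p_drop == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask
```

At test time (`training=False`) the input passes through unchanged, which is what makes the inverted-scaling convention convenient in practice.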
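The tangent prop idea from Simard et al. can likewise be sketched: penalize the directional derivative of the model's output along a known invariance direction (a "tangent vector"), so the output is encouraged to be locally constant under that transformation. The sketch below is an assumption-laden toy: it uses a central finite difference in place of the analytic Jacobian-vector product a real implementation would compute.

```python
import numpy as np

def tangent_prop_penalty(f, x, tangent, eps=1e-4):
    """Tangent prop regularizer (sketch): squared directional derivative
    of f at x along `tangent`, approximated by central differences."""
    directional = (f(x + eps * tangent) - f(x - eps * tangent)) / (2 * eps)
    return float(np.sum(directional ** 2))
```

For a linear map `f(x) = W @ x` the penalty reduces to `||W @ tangent||^2`, which makes the finite-difference approximation easy to sanity-check.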
reading/reference_material.1505369911.txt.gz · Last modified: 2017/09/14 02:18 by yang