Handwritten Digit Recognition
23 papers with code • 1 benchmark • 5 datasets
Most implemented papers
LipschitzLR: Using theoretically computed adaptive learning rates for fast convergence
In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
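The core idea is to derive the step size from the Lipschitz constant L of the loss gradient, rather than tuning it by hand. A minimal sketch of that idea (illustrative only; the paper's exact formula and estimation procedure may differ) on a toy quadratic loss, where eta = 1/L is the classical theoretically safe step size:

```python
# Hedged sketch: gradient descent with a learning rate derived from an
# empirical estimate of the gradient's Lipschitz constant (eta = 1/L).
# Toy quadratic loss; not the paper's exact method.

def grad(w):
    # Gradient of the quadratic loss f(w) = 2*w**2, so f'(w) = 4*w.
    return 4.0 * w

def estimate_lipschitz(grad_fn, w1, w2):
    # For any two points, L >= |g(w1) - g(w2)| / |w1 - w2|;
    # for a quadratic this secant estimate is exact.
    return abs(grad_fn(w1) - grad_fn(w2)) / abs(w1 - w2)

def descend(w, steps):
    L = estimate_lipschitz(grad, w, w + 1.0)  # here L = 4 exactly
    lr = 1.0 / L                              # theoretically motivated step size
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

print(descend(5.0, 1))  # a single 1/L step lands exactly at the minimum: 0.0
```

On a quadratic, the 1/L step reaches the minimizer in one step; on general losses it guarantees monotone descent without a learning-rate search.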
How Important is Weight Symmetry in Backpropagation?
Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes.
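A well-known way to break this symmetry, from the related feedback-alignment line of work, is to replace the transposed forward weights in the backward pass with fixed random feedback weights. A scalar two-layer sketch (purely illustrative; not the experimental setup of this paper):

```python
import random

# Hedged sketch of feedback alignment: the backward pass uses a fixed
# random feedback weight b instead of the forward weight w2.
# Scalar two-layer linear network, single training example.

def train(use_symmetric, steps=200, lr=0.01):
    random.seed(0)
    w1, w2 = 0.5, 0.5           # forward weights
    b = random.uniform(0.1, 1)  # fixed random feedback weight (FA only)
    x, t = 1.0, 2.0             # input and target
    for _ in range(steps):
        h = w1 * x
        y = w2 * h
        e = y - t
        fb = w2 if use_symmetric else b  # BP uses w2; FA uses b
        w2 -= lr * e * h
        w1 -= lr * e * fb * x
    return w1 * w2 * x          # network output after training

# Both variants drive the output toward the target t = 2.0, illustrating
# that exact weight symmetry is not required for learning in this toy case.
```

The same question at scale — how much symmetry deep networks actually need — is what this paper investigates.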
Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML
We discuss the trade-off between model accuracy and resource consumption.
MNIST-MIX: A Multi-language Handwritten Digit Recognition Dataset
In this letter, we contribute a multi-language handwritten digit recognition dataset named MNIST-MIX, which is the largest dataset of the same type in terms of both languages and data samples.
Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition
Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the famous MNIST handwritten digits benchmark.
A neuromorphic hardware architecture using the Neural Engineering Framework for pattern recognition
The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images.
Large-scale Artificial Neural Network: MapReduce-based Deep Learning
Faced with the continuously increasing scale of data, the original back-propagation neural network learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload.
Group Sparse Regularization for Deep Neural Networks
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).
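Group sparse regularization penalizes the Euclidean norm of each weight group (e.g., all weights of one neuron or one input feature), so entire groups can be driven exactly to zero. A minimal sketch of the standard group-lasso penalty and its proximal (block soft-thresholding) step — the paper's specific optimizer and group definitions may differ:

```python
import math

# Hedged sketch of group sparse (group-lasso) regularization:
# penalize the L2 norm of each weight group so whole neurons or
# input features can be pruned in one piece.

def group_penalty(groups):
    # Sum of Euclidean norms, one per group.
    return sum(math.sqrt(sum(w * w for w in g)) for g in groups)

def prox_group(g, lam):
    # Block soft-thresholding: shrink the group toward zero, and zero it
    # out entirely if its norm is below lam -- this removes the group.
    norm = math.sqrt(sum(w * w for w in g))
    if norm <= lam:
        return [0.0] * len(g)
    scale = 1.0 - lam / norm
    return [w * scale for w in g]

weights = [[3.0, 4.0], [0.05, 0.05]]  # two neurons' weight groups
shrunk = [prox_group(g, lam=0.5) for g in weights]
print(shrunk)  # the small second group is eliminated entirely
```

Because the penalty acts on whole groups rather than individual weights, minimizing it performs neuron-count selection and feature selection as a side effect of training.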
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.
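A node "phase" here is whether a ReLU is provably active (identity) or inactive (zero) under the current constraints. A common way to fix phases without search is interval bound propagation: if a node's pre-activation interval lies entirely on one side of zero, its phase is implied, much as unit propagation implies literals in SAT. A sketch of that inference step (an assumption about the general technique, not this paper's exact algorithm):

```python
# Hedged sketch: inferring ReLU node phases from interval bounds, in the
# spirit of unit propagation. If the pre-activation interval is entirely
# non-negative or non-positive, the node's phase is fixed.

def preact_bounds(weights, bias, in_bounds):
    # Interval arithmetic for w . x + b given per-input [lo, hi] bounds.
    lo = hi = bias
    for w, (l, h) in zip(weights, in_bounds):
        lo += w * l if w >= 0 else w * h
        hi += w * h if w >= 0 else w * l
    return lo, hi

def infer_phase(weights, bias, in_bounds):
    lo, hi = preact_bounds(weights, bias, in_bounds)
    if lo >= 0:
        return "active"    # ReLU provably behaves as the identity
    if hi <= 0:
        return "inactive"  # ReLU provably outputs zero
    return "unknown"       # phase must be decided by the search

# One ReLU node with both inputs constrained to [0, 1]:
print(infer_phase([1.0, 1.0], 0.5, [(0.0, 1.0), (0.0, 1.0)]))  # active
```

Each fixed phase replaces a non-linear node with a linear constraint, shrinking the search space the verifier must explore.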
Incremental and Iterative Learning of Answer Set Programs from Mutually Distinct Examples
This paper is under consideration for acceptance in