Handwriting Recognition
50 papers with code • 3 benchmarks • 20 datasets
Libraries
Use these libraries to find Handwriting Recognition models and implementations.

Most implemented papers
LSTM: A Search Space Odyssey
Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995.
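The variants the paper benchmarks all start from the standard LSTM cell, which can be written in a few lines. A minimal sketch of the usual gate equations in PyTorch (names and shapes are illustrative, not taken from the paper):

```python
import torch

def lstm_cell(x, h, c, W_x, W_h, b):
    """One step of a standard LSTM cell: the baseline whose gates
    (input, forget, output) the paper's variants add, remove, or couple."""
    gates = x @ W_x + h @ W_h + b          # (batch, 4 * hidden)
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    c_next = f * c + i * g.tanh()          # cell state: gated memory update
    h_next = o * c_next.tanh()             # hidden state: gated output
    return h_next, c_next
```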
OrigamiNet: Weakly-Supervised, Segmentation-Free, One-Step, Full Page Text Recognition by learning to unfold
On IAM we even surpass single line methods that use accurate localization information during training.
Speech Recognition with Deep Recurrent Neural Networks
Recurrent neural networks (RNNs) are a powerful model for sequential data.
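Graves et al. train deep bidirectional LSTMs end to end on unsegmented sequences; the same recipe, a recurrent encoder paired with CTC loss, is also standard for line-level handwriting recognition. A minimal sketch using PyTorch's built-in CTC loss (all sizes are illustrative):

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 80            # time steps, batch, alphabet size (blank = 0)
encoder = nn.LSTM(input_size=32, hidden_size=128, bidirectional=True)

x = torch.randn(T, N, 32)      # e.g. column features of a text-line image
h, _ = encoder(x)
log_probs = nn.Linear(256, C)(h).log_softmax(dim=-1)

targets = torch.randint(1, C, (N, 20))                # labels, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
```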
Full Page Handwriting Recognition via Image to Sequence Extraction
We present a Neural Network based Handwritten Text Recognition (HTR) model architecture that can be trained to recognize full pages of handwritten or printed text without image segmentation.
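The core idea is to map a whole page image directly to a token sequence. Below is a hedged sketch of the generic image-to-sequence pattern (CNN encoder flattened into memory for an autoregressive Transformer decoder); module sizes and names are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class Img2Seq(nn.Module):
    """Generic image-to-sequence skeleton: encode the whole page,
    decode characters autoregressively -- no line or word segmentation."""
    def __init__(self, vocab=100, d=256):
        super().__init__()
        self.backbone = nn.Sequential(        # tiny CNN encoder (illustrative)
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Embedding(vocab, d)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=d, nhead=8), num_layers=2)
        self.out = nn.Linear(d, vocab)

    def forward(self, page, tokens):
        f = self.backbone(page)                    # (N, d, H', W')
        mem = f.flatten(2).permute(2, 0, 1)        # (H'*W', N, d) memory
        tgt = self.embed(tokens).permute(1, 0, 2)  # (S, N, d)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(0))
        return self.out(self.decoder(tgt, mem, tgt_mask=mask))
```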
Multi-Dimensional Recurrent Neural Networks
Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition.
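Multi-dimensional RNNs replace the single recurrence over time with one recurrence per spatial axis: each cell sees the hidden states of its left and top neighbours. A minimal 2-D sketch as a plain PyTorch loop, covering one scan direction only (real MDRNNs use four diagonal scans and LSTM cells):

```python
import torch
import torch.nn as nn

class MDRNN2D(nn.Module):
    """One top-left-to-bottom-right scan of a 2-D RNN: h[i][j] depends on
    the input at (i, j) and the hidden states above and to the left."""
    def __init__(self, in_ch, hidden):
        super().__init__()
        self.wx = nn.Linear(in_ch, hidden)
        self.wu = nn.Linear(hidden, hidden, bias=False)  # from above
        self.wl = nn.Linear(hidden, hidden, bias=False)  # from the left

    def forward(self, x):                     # x: (H, W, in_ch)
        H, W = x.shape[:2]
        zero = x.new_zeros(self.wu.in_features)
        rows = []
        for i in range(H):
            row = []
            for j in range(W):
                up = rows[i - 1][j] if i > 0 else zero
                left = row[j - 1] if j > 0 else zero
                row.append(torch.tanh(self.wx(x[i, j]) + self.wu(up) + self.wl(left)))
            rows.append(row)
        return torch.stack([torch.stack(r) for r in rows])  # (H, W, hidden)
```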
Spatially-sparse convolutional neural networks
Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification.
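Handwriting images are mostly blank, which is what the paper's spatially-sparse approach exploits: output sites whose receptive field touches no active input stay at a constant "ground state" and can be skipped. The toy snippet below only illustrates that observation with a mask; it is not the paper's actual sparse implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.zeros(1, 1, 64, 64)                 # blank page
x[0, 0, 20:30, 15:40] = torch.randn(10, 25)   # a few pen strokes

conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
y = conv(x)                                   # dense conv pays the full cost

# With a 3x3 kernel, only sites within one pixel of a stroke can differ
# from the bias value; a sparse engine would compute just these.
active = (x.abs() > 0).float()
reachable = F.max_pool2d(active, 3, stride=1, padding=1)
print("active output sites:", int(reachable.sum()), "of", y[0, 0].numel())
```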
A Critical Review of Recurrent Neural Networks for Sequence Learning
Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes.
ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation
This is especially true for handwritten text recognition (HTR), where each author has a unique style, unlike printed text, where the variation is smaller by design.
Segmental Recurrent Neural Networks
Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these "segment embeddings" are used to define compatibility scores with output labels.
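That mechanism, encoding every contiguous span with a BiRNN and scoring the resulting "segment embedding" against each output label, can be sketched directly. A simplified illustration assuming PyTorch (the full model additionally sums over segmentations with dynamic programming and caps the segment length):

```python
import torch
import torch.nn as nn

tokens = torch.randn(10, 1, 32)              # (T, batch=1, features)
birnn = nn.LSTM(32, 64, bidirectional=True)
score = nn.Linear(128, 5)                    # compatibility with 5 labels

segment_scores = {}
T = tokens.size(0)
for i in range(T):                           # all contiguous spans [i, j)
    for j in range(i + 1, T + 1):
        out, _ = birnn(tokens[i:j])          # encode the segment's tokens
        emb = torch.cat([out[-1, 0, :64],    # final forward state
                         out[0, 0, 64:]])    # final backward state
        segment_scores[(i, j)] = score(emb)  # one score per output label
```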
Trainable Spectrally Initializable Matrix Transformations in Convolutional Neural Networks
In this work, we investigate the application of trainable and spectrally initializable matrix transformations on the feature maps produced by convolution operations.
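One natural instance of a spectrally initializable transformation is a linear map over the channel dimension whose weights start as a DCT basis and are then fine-tuned. A minimal sketch under that assumption (the DCT-II choice here is illustrative; the paper evaluates several spectral initializations):

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n):
    """Orthonormal DCT-II basis, used here only as a weight initializer."""
    k = torch.arange(n).unsqueeze(1).float()
    i = torch.arange(n).unsqueeze(0).float()
    m = torch.cos(math.pi * (i + 0.5) * k / n) * math.sqrt(2.0 / n)
    m[0] /= math.sqrt(2.0)
    return m

class SpectralTransform(nn.Module):
    """Trainable matrix transformation applied across the channels of
    convolutional feature maps, spectrally initialized (here: DCT-II)."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(dct_matrix(channels))  # starts as DCT

    def forward(self, fmap):                 # fmap: (N, C, H, W)
        return torch.einsum('oc,nchw->nohw', self.weight, fmap)
```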