Authors: Xianbiao Qi, Yihao Chen, Shaoqiong Chen, Ning Lu, Yuan Gao, Wenwen Yu, Rong Xiao
Description: We employ an attention-based encoder-decoder sequence recognition method [1, 2, 3]. First, we synthesize two million text lines rendered on receipt backgrounds, each containing one to five words. We then fine-tune the network on real-world receipt data.
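The core of such an encoder-decoder recognizer is an attention step that, at each decoding step, pools the encoder's per-column image features into a context vector before emitting a character. The NumPy sketch below is a toy, randomly initialized illustration of that loop (Bahdanau-style dot-product attention); the class name, dimensions, and weight layout are our own assumptions, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttnDecoder:
    """Toy attention decoder (illustrative only, random weights):
    at each step it attends over encoder features and emits character logits."""

    def __init__(self, feat_dim, hid_dim, vocab_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_att = rng.normal(scale=0.1, size=(feat_dim, hid_dim))
        self.W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim + feat_dim))
        self.W_out = rng.normal(scale=0.1, size=(vocab_size, feat_dim + hid_dim))
        self.hid_dim = hid_dim

    def decode(self, feats, max_len=5):
        # feats: (T, feat_dim) encoder outputs, e.g. CNN column features.
        h = np.zeros(self.hid_dim)
        ids, logits_all = [], []
        for _ in range(max_len):
            query = self.W_att @ h            # project hidden state into feature space
            alpha = softmax(feats @ query)    # attention weights over the T positions
            context = alpha @ feats           # weighted sum of encoder features
            h = np.tanh(self.W_h @ np.concatenate([h, context]))
            logits = self.W_out @ np.concatenate([context, h])
            logits_all.append(logits)
            ids.append(int(np.argmax(logits)))  # greedy character choice
        return ids, np.stack(logits_all)
```

In a real system the encoder features come from a CNN over the rectified text-line image and the weights are trained end-to-end, first on the synthetic lines and then fine-tuned on receipt data; here the forward pass only shows the data flow.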
1. Li, Hui, et al. "Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition." arXiv preprint arXiv:1811.00751 (2018).
2. Shi, Baoguang, et al. "ASTER: An Attentional Scene Text Recognizer with Flexible Rectification." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018).
3. Vaswani, Ashish, et al. "Attention Is All You Need." Advances in Neural Information Processing Systems (2017).