method: SogouMM (2019-09-05)

Authors: Xu Liu, Hongyuan Zhang, Yan Zhang, Bo Qin, Tao Wei

Description: Our method is based on 2D attention: a ResNet backbone extracts features, and a tailored 2D-attention module is applied on top. The results are generated by a single model, without ensemble tricks.
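A 2D-attention module of this kind scores every spatial position of the backbone's feature map against the decoder state and pools a context vector. The sketch below uses dot-product scoring as an assumed, illustrative choice; the actual tailored SogouMM module is not specified in the description.

```python
import math

def attend_2d(fmap, query):
    """Attend over an H x W grid of C-dim feature vectors (nested lists).

    Dot-product scoring and softmax pooling are illustrative choices;
    the tailored SogouMM module is not specified in the description.
    """
    # Score each (row, col) cell against the decoder query.
    scores = {(i, j): sum(a * b for a, b in zip(cell, query))
              for i, row in enumerate(fmap) for j, cell in enumerate(row)}
    # Softmax over all spatial positions (max-shifted for stability).
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exp.values())
    weights = {k: v / z for k, v in exp.items()}
    # Context vector: attention-weighted sum of the feature vectors.
    context = [sum(weights[i, j] * fmap[i][j][c]
                   for i in range(len(fmap)) for j in range(len(fmap[0])))
               for c in range(len(query))]
    return context, weights
```

Unlike 1D attention over a width-collapsed feature sequence, the softmax here runs jointly over height and width, which is what lets such modules follow curved or rotated text.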

method: SenseTime-CKD (2019-06-22)

Authors: CKD Team (Xiaocong Cai, Wenyang Hu, Jun Hou, Miaomiao Cheng)

Description:
1) The method is designed based on the Rectify-Encoder-Decoder framework.
2) Our training data contains about 5, 600, 000 images from Synth90k, SynthText, SynthAdd and some academic dataset.
3) Variable-length input is adopted, with a maximum input size of 64x160. Images are first rectified by an STN (spatial transformer network). The rectified images are then passed to CNN backbones (e.g., ResNet) to extract features. For the decoder, we train different models with three kinds of decoders: CTC, 1D attention, and 2D attention. The prediction results of these models are ensembled together.
4) In addition, several data augmentation methods and other tricks are used in this work.
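One simple way to combine the outputs of the CTC, 1D-attention, and 2D-attention models mentioned in 3) is a per-image majority vote over the decoded strings. The actual SenseTime-CKD ensembling scheme is not specified, so this is only an assumed stand-in:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-model string predictions for one word image.

    Ties are broken by the order the models are listed. This is a generic
    stand-in, not the scheme SenseTime-CKD actually used.
    """
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:  # first listed model wins ties
        if counts[p] == best:
            return p

# e.g. three decoders disagreeing on one word image
print(ensemble_vote(["hello", "hella", "hello"]))  # -> hello
```

In practice such ensembles often weight votes by each decoder's output confidence rather than counting them equally; plain voting is the simplest baseline.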

method: HIK_OCR (2017-07-01)

Authors: Zhanzhan Cheng*, Gang Zheng*, Fan Bai, Yunlu Xu, Jie Wang, Ying Yao, Zhaoxuan Fan, Zhiqian Zhang, Yi Niu(*equal contribution)

Description: The method is designed based on the sequence-to-sequence framework. In the encoder, images are resized to 100x100 and features are extracted with a CNN; in the decoder, the character sequence is generated by an attention-based decoder. The novelties of the method include: 1) a sophisticated CNN-based model for feature extraction, with several special mechanisms, including a mask spatial transform, for handling text of arbitrary placement; 2) an Edit Probability loss, developed for training in place of the softmax loss; 3) a self-adaptive gate mechanism to capture global information.
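The attention-based decoder described above generates the character sequence one step at a time. A generic greedy decoding loop can be sketched as follows; `step_fn` is a hypothetical stand-in for the attention-plus-classifier computation, whose details (and the Edit Probability loss) are not specified here:

```python
def greedy_decode(step_fn, init_state, max_len=25, eos="<eos>"):
    """Generic greedy loop for an attention-based decoder.

    `step_fn(prev_char, state)` returns (char_probs, new_state); it stands
    in for the attention + classifier computation, which the description
    does not specify in reproducible detail.
    """
    chars, state, prev = [], init_state, "<sos>"
    for _ in range(max_len):
        probs, state = step_fn(prev, state)
        prev = max(probs, key=probs.get)  # greedy: take the likeliest char
        if prev == eos:
            break
        chars.append(prev)
    return "".join(chars)

# toy step function that deterministically spells out "cat"
seq = ["c", "a", "t", "<eos>"]
def toy_step(prev, i):
    return {seq[i]: 1.0}, i + 1

print(greedy_decode(toy_step, 0))  # -> cat
```

Beam search would keep several candidate prefixes per step instead of one; greedy decoding is shown only because it is the minimal version of the loop.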

Ranking Table

Date | Method | Total Edit Distance (case sensitive) | Correctly Recognised Words (case sensitive) | T.E.D. (case insensitive) | C.R.W. (case insensitive)
2019-09-05 | SogouMM | 3,496.3121 | 44.64% | 1,037.2197 | 77.97%
2019-06-22 | SenseTime-CKD | 4,054.8236 | 41.52% | 824.6449 | 77.22%
2017-07-01 | HIK_OCR | 3,661.5785 | 41.72% | 899.1009 | 76.11%
2019-08-19 | MASTER-Ping An Property & Casualty Insurance Co | 3,272.0810 | 49.09% | 1,203.4201 | 71.33%
2017-06-30 | Tencent-DPPR Team & USTB-PRIR | 4,022.1224 | 36.91% | 1,233.4609 | 70.83%
2019-02-25 | CLOVA-AI | 3,594.4842 | 47.35% | 1,583.7724 | 69.27%
2018-12-19 | SAR | 4,002.3563 | 41.27% | 1,528.7396 | 66.85%
2019-10-09 | Attention-OCR | 4,320.1734 | 37.87% | 1,251.9841 | 66.73%
2019-03-20 | ustc_pr316 | 4,111.8119 | 40.00% | 1,615.4420 | 65.35%
2017-06-30 | HKU-VisionLab | 3,921.9388 | 40.17% | 1,903.3725 | 59.29%
2017-06-30 | BRTRS-Recognition | 4,895.9593 | 28.18% | 2,282.4888 | 59.25%
2019-07-03 | Advanced Readotron | 4,698.4158 | 34.52% | 1,998.2297 | 58.03%
2019-09-04 | juxinli | 5,544.6025 | 28.14% | 3,169.8610 | 45.41%
2017-06-29 | CCFLAB | 4,743.2752 | 26.52% | 2,982.6609 | 42.66%
2017-10-06 | CRNN - Sravya | 5,704.5379 | 24.26% | 3,532.9616 | 36.98%
2017-06-30 | 3CNN_2BiLSTM_CTC | 6,405.6129 | 12.19% | 4,395.4174 | 30.17%
2017-06-30 | Enhancing Text Recognition Accuracy by Adding External Language Model | 7,231.8718 | 17.88% | 5,555.8922 | 29.69%
2017-06-28 | LSTM based text recognition | 6,594.0069 | 10.11% | 4,638.8345 | 26.25%
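The two metrics in the table can be sketched as follows: Total Edit Distance (T.E.D.) accumulates a per-word edit distance over the test set, and C.R.W. is the fraction of words recognised exactly. The fractional totals above suggest a length-normalized edit distance is summed; the exact protocol is not stated here, so the sketch below uses the plain (unnormalized) Levenshtein distance.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def score(preds, gts, case_sensitive=True):
    """Total edit distance and fraction of exactly-matched words.

    Uses the plain Levenshtein distance; the leaderboard's fractional
    T.E.D. totals suggest a normalized variant, which is assumed, not
    documented, here.
    """
    if not case_sensitive:
        preds = [p.lower() for p in preds]
        gts = [g.lower() for g in gts]
    ted = sum(edit_distance(p, g) for p, g in zip(preds, gts))
    crw = sum(p == g for p, g in zip(preds, gts)) / len(gts)
    return ted, crw
```

Running both settings on the same predictions shows why the case-insensitive columns are always at least as good: lowercasing can only remove mismatches, never add them.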

Ranking Graphic
