method: Shopee MMU OCR (2022-10-31)

Authors: Jianqiang Liu, Hanfei Xu, Bin Zheng, Eric W, Ronnie T, Alex X

Affiliation: Shopee MMU OCR

Description: Our method adopts a transformer-based context-aware framework. We utilize a hybrid architecture encoder and a context-aware autoregressive decoder to construct the recognition pipeline. Finally, a simple but effective multi-model fusion strategy is adopted.
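The submission does not disclose the exact fusion strategy, but a simple and common scheme is to pool confidence across models that agree on the same transcription and keep the highest-scoring string. The sketch below is a hypothetical illustration of that idea, not Shopee's actual implementation.

```python
from collections import defaultdict

def fuse_predictions(candidates):
    """Fuse per-model (text, confidence) predictions for one image.

    `candidates` is a list of (text, confidence) pairs, one per model.
    Identical transcriptions pool their confidence, so agreement
    between models is rewarded over a single confident outlier.
    This is an illustrative sketch, not the submission's method.
    """
    scores = defaultdict(float)
    for text, conf in candidates:
        scores[text] += conf
    # Return the transcription with the highest pooled confidence.
    return max(scores, key=scores.get)

# Example: two models agree on "shopee", one prefers "shoppe".
print(fuse_predictions([("shopee", 0.81), ("shoppe", 0.93), ("shopee", 0.78)]))
# → shopee  (pooled 0.81 + 0.78 = 1.59 beats 0.93)
```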

method: SogouMM (2019-09-05)

Authors: Xu Liu, Hongyuan Zhang, Yan Zhang, Bo Qin, Tao Wei

Description: Our method is based on 2D attention. We use ResNet as the backbone, with a tailored 2D-attention module applied on top. The result is generated by a single model, without ensemble tricks.
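The details of the tailored module are not given, but the core of any 2D-attention decoder step is to score every spatial position of the encoder feature map against the decoder state, softmax over all H×W positions, and take the weighted sum as the context vector. The following NumPy sketch shows that generic step under assumed shapes; it is not SogouMM's actual module.

```python
import numpy as np

def attend_2d(features, query):
    """One simplified step of 2D attention over an (H, W, C) feature map.

    features: (H, W, C) encoder feature map; query: (C,) decoder state.
    Scores every spatial location by dot product with the query,
    normalizes with a softmax over all H*W positions, and returns the
    attention-weighted context vector plus the (H, W) weight map.
    """
    H, W, C = features.shape
    flat = features.reshape(H * W, C)   # (H*W, C)
    scores = flat @ query               # (H*W,) similarity scores
    scores -= scores.max()              # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()            # softmax over spatial positions
    context = weights @ flat            # (C,) context vector
    return context, weights.reshape(H, W)

# Toy feature map: attention should peak where features align with the query.
feats = np.zeros((2, 3, 4))
feats[1, 2] = [5.0, 0.0, 0.0, 0.0]
ctx, w = attend_2d(feats, np.array([1.0, 0.0, 0.0, 0.0]))
print(w.argmax())  # flattened index of position (1, 2) → 5
```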

method: SenseTime-CKD (2019-06-22)

Authors: CKD Team (Xiaocong Cai, Wenyang Hu, Jun Hou, Miaomiao Cheng)

Description:
1) The method is designed based on a Rectify-Encoder-Decoder framework.
2) Our training data contains about 5,600,000 images from Synth90k, SynthText, SynthAdd, and some academic datasets.
3) Variable-length input is adopted, with a maximum input size of 64x160. Images are first rectified by an STN (spatial transformer network). The rectified images are then passed to CNN backbones (e.g. ResNet) to extract features. For the decoder part, we train different models with three kinds of decoders: CTC, 1D attention, and 2D attention. Finally, the prediction results of these models are ensembled.
4) In addition, some data augmentation methods and other tricks are used in this work.
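Of the three decoder types mentioned above, CTC is the simplest to illustrate: at inference time, greedy CTC decoding collapses repeated per-frame labels and removes blanks. The sketch below shows that standard procedure (label ids and the blank index are assumptions for illustration).

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Greedy CTC decoding: collapse repeats, then drop blanks.

    `frame_labels` is the per-frame argmax over the CTC output
    distribution (a list of label ids). Repeated labels are merged
    unless separated by a blank, which is how CTC encodes doubled
    characters such as the "ll" in "hello".
    """
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# "h e l l o": the blank (0) between the two 12s keeps both l's.
print(ctc_greedy_decode([8, 8, 0, 5, 12, 12, 0, 12, 15]))
# → [8, 5, 12, 12, 15]
```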

Ranking Table

Date | Method | T.E.D. (case sensitive) | C.R.W. (case sensitive) | T.E.D. (case insensitive) | C.R.W. (case insensitive)
2022-10-31 | Shopee MMU OCR | 3,537.9001 | 43.29% | 746.5984 | 78.21%
2019-08-19 | MASTER-Ping An Property & Casualty Insurance Co | 3,272.0810 | 49.09% | 1,203.4201 | 71.33%
2022-02-23 | Singularity Systems Inc OCR | 4,410.0816 | 38.87% | 1,326.4073 | 70.94%
2017-06-30 | Tencent-DPPR Team & USTB-PRIR | 4,022.1224 | 36.91% | 1,233.4609 | 70.83%
2017-06-30 | Enhancing Text Recognition Accuracy by Adding External Language Model | 7,231.8718 | 17.88% | 5,555.8922 | 29.69%
2017-06-28 | LSTM based text recognition | 6,594.0069 | 10.11% | 4,638.8345 | 26.25%

(T.E.D. = Total Edit Distance; C.R.W. = Correctly Recognised Words.)
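The two metric families in the table can be sketched as follows: total edit distance sums a Levenshtein-style distance between each prediction and its ground truth, and C.R.W. is the fraction of exact word matches, each computed with or without case folding. The fractional T.E.D. values above suggest the benchmark applies some per-word normalization whose exact form is not stated here; this sketch uses plain unnormalized Levenshtein distance for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between prediction `a` and ground truth `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def score(preds, truths, case_sensitive=True):
    """Return (total edit distance, fraction of exactly correct words)."""
    if not case_sensitive:
        preds = [p.lower() for p in preds]
        truths = [t.lower() for t in truths]
    ted = sum(edit_distance(p, t) for p, t in zip(preds, truths))
    crw = sum(p == t for p, t in zip(preds, truths)) / len(truths)
    return ted, crw

print(score(["Hello", "wrld"], ["hello", "world"]))         # → (2, 0.0)
print(score(["Hello", "wrld"], ["hello", "world"], False))  # → (1, 0.5)
```

As the example shows, case-insensitive scoring can only improve both numbers, which matches the pattern in the table.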

Ranking Graphic
