method: Shopee MMU OCR (2022-09-28)
Authors: Jianqiang Liu, Hanfei Xu, Bin Zheng, Longhuang Wu, Shangxuan Tian, Pengfei Xiong
Affiliation: Shopee MMU
Description: Our method adopts a transformer-based, context-aware framework. We use a hybrid-architecture encoder and a context-aware autoregressive decoder to build the recognition pipeline. Finally, a simple but effective multi-model fusion strategy is applied.
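The two ingredients named above, autoregressive decoding over encoder features and multi-model fusion, can be illustrated with a minimal PyTorch sketch. The class name, layer sizes, shared encoder memory, and the probability-averaging fusion rule are illustrative assumptions, not the team's actual implementation.

```python
# Minimal sketch (PyTorch): context-aware autoregressive character decoding over
# encoder features, plus naive probability-averaging fusion across several models.
# All names and sizes are assumptions, not the Shopee MMU OCR implementation.
import torch
import torch.nn as nn

class ARRecognizer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, memory, tgt_tokens):
        # memory: (B, S, d_model) visual features from any encoder backbone
        # tgt_tokens: (B, T) previously generated character ids (the "context")
        T = tgt_tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=memory.device), diagonal=1)
        h = self.decoder(self.embed(tgt_tokens), memory, tgt_mask=causal)
        return self.head(h)  # (B, T, vocab_size)

@torch.no_grad()
def fused_greedy_decode(models, memory, bos_id, eos_id, max_len=32):
    """Average per-step character distributions over an ensemble (simple fusion)."""
    tokens = torch.full((memory.size(0), 1), bos_id, dtype=torch.long,
                        device=memory.device)
    for _ in range(max_len):
        probs = torch.stack(
            [m(memory, tokens)[:, -1].softmax(-1) for m in models]).mean(0)
        nxt = probs.argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, nxt], dim=1)
        if (nxt == eos_id).all():
            break
    return tokens
```

In practice each ensemble member would have its own encoder features rather than a shared `memory`; the sketch shares them only to keep the fusion step compact.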
method: SogouMM (2019-11-07)
Authors: Xu Liu, Tao Wei
Description: Our method is based on 2D attention: we use ResNet as the backbone and apply a tailored 2D-attention module. The result is generated by a single model, without ensemble tricks.
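A 2D-attention module attends over the full H×W feature map of the backbone instead of a collapsed 1D sequence. The additive-attention sketch below illustrates one such step; the module name, projection sizes, and scoring function are assumptions, not SogouMM's tailored design.

```python
# Minimal sketch (PyTorch): one step of additive 2D attention over a ResNet
# feature map. Shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

class TwoDAttention(nn.Module):
    def __init__(self, feat_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.proj_feat = nn.Conv2d(feat_dim, attn_dim, kernel_size=1)
        self.proj_hidden = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Conv2d(attn_dim, 1, kernel_size=1)

    def forward(self, feat_map, dec_hidden):
        # feat_map: (B, C, H, W) from the ResNet backbone
        # dec_hidden: (B, hidden_dim) current decoder state
        B, C, H, W = feat_map.shape
        e = torch.tanh(self.proj_feat(feat_map) +
                       self.proj_hidden(dec_hidden)[:, :, None, None])
        alpha = self.score(e).flatten(1).softmax(-1).view(B, 1, H, W)  # spatial weights
        glimpse = (alpha * feat_map).sum(dim=(2, 3))                   # (B, C) context
        return glimpse, alpha
```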
method: Hancom Vision (2020-10-06)
Authors: Hancom Vision team
Description: Our model combines a CNN, a BiLSTM, and an attention mechanism (see the sketch below).
It was trained on MJSynthText + SynthText + external data (pretraining), Focused Scene Text 2013-2015, and Incidental Scene Text 2015.
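The stated component list maps onto the usual layout for such recognizers: a CNN feature extractor, a BiLSTM sequence encoder, and a step-wise attention decoder. The sketch below follows that layout; the toy CNN, layer sizes, and the simplification of not feeding back the previous character embedding are assumptions, not Hancom Vision's model.

```python
# Minimal sketch (PyTorch): CNN feature extractor -> BiLSTM sequence encoder
# -> step-wise attention readout. All sizes and the toy CNN are assumptions.
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                    # toy stand-in for the real CNN
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),         # collapse height -> 1D sequence
        )
        self.bilstm = nn.LSTM(128, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(3 * hidden, 1)         # score(frame, decoder state)
        self.cell = nn.GRUCell(2 * hidden, hidden)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, images, max_len=25):
        f = self.cnn(images).squeeze(2).permute(0, 2, 1)     # (B, W', 128)
        seq, _ = self.bilstm(f)                               # (B, W', 2H)
        B, W, _ = seq.shape
        s = seq.new_zeros(B, self.cell.hidden_size)           # initial decoder state
        logits = []
        for _ in range(max_len):
            scores = self.attn(torch.cat(
                [seq, s[:, None, :].expand(B, W, -1)], dim=-1))   # (B, W', 1)
            ctx = (scores.softmax(dim=1) * seq).sum(dim=1)        # (B, 2H) glimpse
            s = self.cell(ctx, s)
            logits.append(self.head(s))
        return torch.stack(logits, dim=1)                     # (B, max_len, vocab)
```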
Date | Method | Total Edit Distance (case sensitive) | Correctly Recognised Words (case sensitive) | Total Edit Distance (case insensitive) | Correctly Recognised Words (case insensitive)
---|---|---|---|---|---
2022-09-28 | Shopee MMU OCR | 134.8110 | 87.14% | 104.6682 | 89.17%
2019-11-07 | SogouMM | 144.5029 | 86.42% | 113.1573 | 88.11%
2020-10-06 | Hancom Vision | 160.2667 | 86.09% | 108.3773 | 88.93%
2019-10-31 | Sogou_OCR | 163.0954 | 84.35% | 129.2831 | 86.66%
2018-09-29 | Alibaba-PAI V2 | 174.3919 | 83.92% | 129.3690 | 86.57%
2018-09-13 | Clova AI / Lens | 175.4367 | 83.00% | 132.4229 | 85.56%
2020-06-10 | test 1 | 164.4290 | 82.91% | 129.2433 | 85.07%
2018-07-03 | Baidu VIS | 185.8078 | 82.85% | 150.8527 | 84.68%
2017-10-17 | Dahua OCR | 226.8219 | 82.76% | 179.2576 | 85.89%
2018-09-10 | Alibaba-PAI | 198.9750 | 81.32% | 160.9424 | 83.44%
2017-07-06 | Baidu IDL v3 | 211.5909 | 80.02% | 171.1517 | 82.33%
2017-07-06 | HIK_OCR_v3 | 191.2471 | 78.29% | 158.8399 | 80.12%
2017-07-05 | Baidu IDL v2 | 219.1941 | 77.80% | 178.6945 | 80.26%
2017-07-03 | Tencent-DPPR Team & USTB-PRIR | 251.9840 | 76.31% | 185.3572 | 80.55%
2017-03-20 | HIK_OCR_v2 | 300.3392 | 72.94% | 244.1903 | 75.69%
2017-06-29 | HKU-VisionLab | 258.5862 | 72.03% | 212.1685 | 74.19%
2017-03-11 | HIK_OCR | 318.8730 | 71.50% | 266.6894 | 74.05%
2016-06-23 | Baidu IDL | 351.5258 | 68.27% | 298.8028 | 70.92%
2018-12-19 | SAR | 437.1642 | 67.40% | 203.1446 | 78.82%
2020-01-10 | FiberHome | 479.9295 | 64.32% | 389.9657 | 68.32%
2016-01-29 | SRC-B-TextProcessingLab | 419.7412 | 62.11% | 367.1222 | 64.95%
2019-09-04 | juxinli | 507.2980 | 62.06% | 304.5993 | 70.29%
2015-11-09 | Megvii-Image++ | 508.8323 | 57.82% | 377.6521 | 63.99%
2020-01-11 | SAR_tensorflow_reproduced | 1,031.1057 | 41.36% | 333.1592 | 69.19%
2019-10-14 | SAR tf-reproduce | 843.4374 | 40.73% | 494.6210 | 55.66%
2019-10-14 | transformer-based method | 807.5391 | 39.24% | 454.9708 | 54.45%
2018-04-25 | CNN+LSTM | 767.3535 | 34.28% | 623.1899 | 40.20%
2015-04-01 | MAPS | 1,128.0075 | 32.93% | 1,068.7184 | 33.90%
2015-04-01 | NESP | 1,164.4968 | 31.68% | 1,094.7071 | 32.98%
2015-04-02 | DSM | 1,178.6140 | 25.85% | 1,108.9381 | 27.97%
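The two metrics per case setting can be approximated with a short script: Total Edit Distance accumulates the Levenshtein distance between each prediction and its ground truth, and Correctly Recognised Words is the fraction of exact matches; the case-insensitive variant lowercases both sides first. The fractional totals in the table indicate a per-word normalisation defined by the challenge's evaluation protocol, which is not reproduced here; the sketch below uses raw edit distance, and the function names are illustrative.

```python
# Minimal sketch (Python): plain Levenshtein edit distance and exact-match word
# accuracy for both case settings. Any per-word normalisation or character
# filtering used by the official evaluation is not reproduced here.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def score(preds, gts, case_sensitive=True):
    if not case_sensitive:
        preds = [p.lower() for p in preds]
        gts = [g.lower() for g in gts]
    ted = sum(levenshtein(p, g) for p, g in zip(preds, gts))
    crw = sum(p == g for p, g in zip(preds, gts)) / len(gts)
    return ted, crw

# Example: total edit distance and correctly-recognised-word rate
print(score(["Hello", "w0rld"], ["hello", "world"], case_sensitive=False))  # (1, 0.5)
```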