method: TH (2020-04-16)

Authors: Tsinghua University and Hyundai Motor Group AIRS Company

Email: Shanyu Xiao: xiaosy19@mails.tsinghua.edu.cn

Description: We built an end-to-end scene text spotter based on Mask R-CNN and a Transformer. A ResNeXt-101 backbone and multi-scale training/testing are used.
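As a rough illustration of the multi-scale testing mentioned above, the sketch below runs a stock torchvision Mask R-CNN (ResNet-50 FPN standing in for the ResNeXt-101 backbone) at several input scales and merges the detections with NMS. The scales, thresholds, and merging strategy are illustrative assumptions, not the TH configuration.

```python
# Minimal sketch: multi-scale Mask R-CNN inference with NMS-based merging.
# The model, scales, and IoU threshold are assumptions for illustration only.
import torch
import torchvision
from torchvision.ops import nms
from torchvision.transforms.functional import resize, to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_multiscale(image_path, scales=(600, 800, 1000), iou_thr=0.5):
    image = Image.open(image_path).convert("RGB")
    boxes, scores = [], []
    for target_size in scales:
        # Resize the shorter side to target_size, keeping the aspect ratio.
        w, h = image.size
        ratio = target_size / min(w, h)
        scaled = resize(image, [round(h * ratio), round(w * ratio)])
        with torch.no_grad():
            out = model([to_tensor(scaled)])[0]
        # Map boxes back to the original image resolution.
        boxes.append(out["boxes"] / ratio)
        scores.append(out["scores"])
    boxes, scores = torch.cat(boxes), torch.cat(scores)
    keep = nms(boxes, scores, iou_thr)  # merge detections across scales
    return boxes[keep], scores[keep]
```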

method: Sogou_OCR (2019-11-08)

Authors: Xudong Rao, Lulu Xu, Long Ma, Xuefeng Su

Description: An arbitrary-shaped text detection method based on Mask R-CNN. We use ResNeXt-152 as our backbone, and multi-scale training and testing are adopted to obtain the final results.

method: AntAI-Cognition (2020-04-22)

Authors: Qingpei Guo, Yudong Liu, Pengcheng Yang, Yonggang Li, Yongtao Wang, Jingdong Chen, Wei Chu

Affiliation: Ant Group & PKU

Email: qingpei.gqp@antgroup.com

Description: We are from Ant Group & PKU. Our approach is an ensemble of three text detection models. The detection models mainly follow the Mask R-CNN framework [1], with different backbones (ResNeXt101-64x4d [2], CBNet [3], ResNeXt101-32x32d_wsl [4]). A GBDT [5] is trained to normalize confidence scores and to select the highest-quality quadrilateral boxes from all detection models' outputs. Multi-scale training and testing are adopted for all base models. We also add the ICDAR19 MLT dataset to the training set; both its training and validation splits are used to obtain the final result.

[1] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2961-2969.

[2] Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1492-1500.

[3] Liu Y, Wang Y, Wang S, et al. CBNet: A novel composite backbone network architecture for object detection[J]. arXiv preprint arXiv:1909.03625, 2019.

[4] Mahajan D, Girshick R, Ramanathan V, et al. Exploring the limits of weakly supervised pretraining[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 181-196.

[5] Ke G, Meng Q, Finley T, et al. LightGBM: A highly efficient gradient boosting decision tree[C]//Advances in Neural Information Processing Systems. 2017: 3146-3154.
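As a rough illustration of the GBDT re-scoring step described in the AntAI-Cognition entry, the sketch below trains a LightGBM classifier on hypothetical per-box features and uses its predicted probability as a normalized confidence shared across detectors. The features, labels, and hyperparameters are invented for the example and are not the submission's actual pipeline.

```python
# Minimal sketch: GBDT-based confidence normalization for ensembled detections.
# Features, labels, and hyperparameters are hypothetical placeholders.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# Hypothetical features for candidate boxes pooled from several detectors:
# [raw confidence, normalized box area, aspect ratio, detector id]
X = rng.random((1000, 4))
# Hypothetical labels: 1 if the candidate matches a ground-truth box.
y = (X[:, 0] + 0.2 * rng.standard_normal(1000) > 0.5).astype(int)

# Train a LightGBM classifier; its predicted probability serves as a
# calibrated confidence score comparable across all detectors.
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

calibrated = model.predict_proba(X)[:, 1]
# Among overlapping candidates, keep the box with the highest calibrated score.
best = int(np.argmax(calibrated))
print("top candidate index:", best, "calibrated score:", calibrated[best])
```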

Ranking Table

Date        Method                                Hmean    Precision  Recall   Average Precision
2020-04-16  TH                                    58.66%   49.57%     71.82%   49.46%
2019-11-08  Sogou_OCR                             56.69%   47.66%     69.94%   47.27%
2020-04-22  AntAI-Cognition                       56.55%   46.52%     72.10%   46.56%
2019-05-08  Baidu-VIS                             53.38%   42.87%     70.72%   29.94%
2019-05-30  PMTD                                  53.34%   42.54%     71.51%   49.93%
2019-08-08  JDAI                                  53.26%   42.56%     71.13%   50.82%
2019-06-02  NJU-ImagineLab                        52.80%   41.06%     73.94%   49.97%
2019-03-23  PMTD                                  50.87%   40.87%     67.37%   45.30%
2019-06-11  4Paradigm-Data-Intelligence           49.41%   37.84%     71.18%   26.31%
2019-05-23  4Paradigm-Data-Intelligence           48.88%   37.61%     69.78%   25.79%
2018-11-20  Pixel-Anchor                          47.93%   40.71%     58.24%   22.48%
2019-03-29  GNNets (single scale)                 46.72%   38.47%     59.46%   30.88%
2018-11-28  CRAFT                                 46.15%   37.37%     60.33%   22.35%
2019-12-13  BDN                                   46.05%   34.06%     71.03%   23.70%
2018-10-29  Amap-CVLab                            44.87%   35.48%     61.00%   30.08%
2018-11-15  USTC-NELSLIP                          44.42%   32.85%     68.55%   38.69%
2017-11-09  EAST++                                43.15%   33.57%     60.37%   27.28%
2018-05-18  PSENet_NJU_ImagineLab (single-scale)  41.03%   31.96%     57.29%   17.80%
2018-12-04  SPCNet_TongJi & UESTC (multi scale)   40.84%   31.29%     58.81%   17.97%
2019-01-08  ALGCD_CP                              40.45%   30.10%     61.65%   26.49%
2019-07-15  stela                                 39.20%   31.46%     51.99%   25.52%
2018-03-12  ATL Cangjie OCR                       38.91%   28.76%     60.12%   31.21%
2017-06-28  SCUT_DLVClab1                         37.02%   31.48%     44.93%   25.34%
2019-05-30  Thesis-SE                             34.72%   25.80%     53.07%   21.64%
2018-12-05  EPTN-SJTU                             34.48%   25.57%     52.91%   21.71%
2018-12-03  SPCNet_TongJi & UESTC (single scale)  30.87%   21.16%     57.04%   11.89%
2017-06-29  SARI_FDU_RRPN_v1                      30.72%   22.58%     48.02%   19.88%
2017-06-28  SARI_FDU_RRPN_v0                      28.73%   19.91%     51.53%   24.29%
2017-06-30  TH-DL                                 20.20%   16.53%     25.97%    9.24%
2017-06-30  Sensetime OCR                         18.68%   10.93%     64.03%   27.49%
2017-06-30  linkage-ER-Flow                       18.52%   12.13%     39.18%    6.15%
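As the figures suggest, the Hmean column is the harmonic mean (F1) of precision and recall; a quick check against the top-ranked TH row reproduces the reported value.

```python
# Verify Hmean = 2 * P * R / (P + R) for the top-ranked TH entry.
precision, recall = 0.4957, 0.7182
hmean = 2 * precision * recall / (precision + recall)
print(f"{hmean:.2%}")  # -> 58.66%, matching the table
```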

Ranking Graphic