- Task 1 - Text Localization
- Task 2 - Script Identification
- Task 3 - Joint Text Detection and Script Identification
Method: AntAI-Cognition (2020-04-22)
Authors: Qingpei Guo, Yudong Liu, Pengcheng Yang, Yonggang Li, Yongtao Wang, Jingdong Chen, Wei Chu
Affiliation: Ant Group & PKU
Email: qingpei.gqp@antgroup.com
Description: We are from Ant Group & PKU. Our approach is an ensemble of three text detection models. The detectors mainly follow the Mask R-CNN framework [1], with different backbones (ResNeXt101-64x4d [2], CBNet [3], ResNeXt101-32x32d_wsl [4]). A GBDT [5] is trained to normalize confidence scores and select the highest-quality quadrilateral boxes from the combined outputs of all detectors. Multi-scale training and testing are adopted for all base models. We also add the ICDAR19 MLT data to the training set; both its training and validation splits are used to obtain the final result.
[1] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2961-2969.
[2] Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1492-1500.
[3] Liu Y, Wang Y, Wang S, et al. CBNet: A novel composite backbone network architecture for object detection[J]. arXiv preprint arXiv:1909.03625, 2019.
[4] Mahajan D, Girshick R, Ramanathan V, et al. Exploring the limits of weakly supervised pretraining[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 181-196.
[5] Ke G, Meng Q, Finley T, et al. LightGBM: A highly efficient gradient boosting decision tree[C]//Advances in Neural Information Processing Systems. 2017: 3146-3154.
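The ensemble step above pools boxes from several detectors, puts their confidence scores on a comparable scale, and keeps the best box per cluster. A minimal sketch of that idea, with a simple min-max rescaling standing in for the learned GBDT rescoring and axis-aligned boxes standing in for quadrilaterals — all function names and thresholds here are illustrative assumptions, not the authors' code:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def normalize_scores(detections):
    """Min-max rescale one model's confidences to [0, 1] so models are comparable.

    Stand-in for the trained GBDT score normalization described above.
    """
    scores = [s for _, s in detections]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(box, (s - lo) / span) for box, s in detections]

def merge_detections(per_model_outputs, iou_thresh=0.5):
    """Pool normalized boxes from all models, then greedy NMS keeps one per cluster."""
    pooled = []
    for dets in per_model_outputs:
        pooled.extend(normalize_scores(dets))
    pooled.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pooled:
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept
```

In the actual submission the selection is learned (a GBDT ranks box quality across models) rather than a fixed IoU threshold; the sketch only shows the pool-normalize-select structure.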
Method: TH (2020-04-16)
Affiliation: Tsinghua University & Hyundai Motor Group AIRS Company
Email: Shanyu Xiao <xiaosy19@mails.tsinghua.edu.cn>
Description: We built an end-to-end scene text spotter based on Mask R-CNN and a Transformer. A ResNeXt-101 backbone and multi-scale training/testing are used.
Method: Sogou_OCR (2019-11-08)
Authors: Xudong Rao, Lulu Xu, Long Ma, Xuefeng Su
Description: An arbitrary-shaped text detection method based on Mask R-CNN. We use ResNeXt-152 as the backbone, and multi-scale training and testing are adopted to obtain the final results.
Date | Method | Hmean | Precision | Recall | Average Precision
---|---|---|---|---|---
2020-04-22 | AntAI-Cognition | 84.45% | 88.55% | 80.72% | 77.19%
2020-04-16 | TH | 84.36% | 89.66% | 79.65% | 77.33%
2019-11-08 | Sogou_OCR | 83.93% | 89.95% | 78.66% | 75.91%
2019-08-08 | JDAI | 82.82% | 87.83% | 78.35% | 76.15%
2019-06-02 | NJU-ImagineLab | 82.74% | 86.62% | 79.19% | 76.32%
2019-05-30 | PMTD | 82.12% | 87.05% | 77.72% | 75.22%
2019-06-11 | 4Paradigm-Data-Intelligence | 81.60% | 85.27% | 78.22% | 66.62%
2019-05-23 | 4Paradigm-Data-Intelligence | 80.99% | 85.33% | 77.08% | 65.66%
2019-05-08 | Baidu-VIS | 80.65% | 86.31% | 75.68% | 65.15%
2019-03-23 | PMTD | 80.18% | 85.20% | 75.72% | 72.28%
2019-12-13 | BDN | 79.47% | 82.75% | 76.44% | 63.08%
2018-11-15 | USTC-NELSLIP | 76.85% | 79.33% | 74.51% | 69.04%
2018-10-29 | Amap-CVLab | 76.08% | 80.91% | 71.79% | 67.72%
2018-11-20 | Pixel-Anchor | 74.79% | 84.24% | 67.24% | 56.83%
2019-03-29 | GNNets (single scale) | 74.55% | 81.23% | 68.89% | 62.05%
2018-12-04 | SPCNet_TongJi & UESTC (multi scale) | 74.13% | 80.61% | 68.62% | 55.20%
2018-11-28 | CRAFT | 74.03% | 80.82% | 68.30% | 55.17%
2019-01-08 | ALGCD_CP | 73.84% | 80.84% | 67.96% | 57.13%
2018-03-12 | ATL Cangjie OCR | 73.52% | 78.88% | 68.84% | 64.30%
2017-11-09 | EAST++ | 72.86% | 80.42% | 66.61% | 54.94%
2018-05-18 | PSENet_NJU_ImagineLab (single scale) | 72.45% | 77.01% | 68.40% | 52.51%
2019-07-15 | stela | 71.50% | 78.68% | 65.52% | 60.26%
2018-12-03 | SPCNet_TongJi & UESTC (single scale) | 70.00% | 73.40% | 66.89% | 49.02%
2018-12-05 | EPTN-SJTU | 67.58% | 75.71% | 61.02% | 49.59%
2019-05-30 | Thesis-SE | 67.22% | 75.68% | 60.47% | 47.30%
2017-06-28 | SCUT_DLVClab1 | 64.96% | 80.28% | 54.54% | 50.34%
2017-06-30 | Sensetime OCR | 62.56% | 56.93% | 69.43% | 61.24%
2017-06-29 | SARI_FDU_RRPN_v1 | 62.37% | 71.17% | 55.50% | 50.33%
2017-06-28 | SARI_FDU_RRPN_v0 | 60.66% | 67.07% | 55.37% | 48.76%
2017-06-30 | TH-DL | 45.97% | 67.75% | 34.78% | 30.88%
2017-06-30 | linkage-ER-Flow | 32.49% | 44.48% | 25.59% | 15.47%
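The Hmean column is the harmonic mean (F1 score) of precision and recall, which can be verified directly from the table. A quick check against the top entry:

```python
def hmean(precision, recall):
    """Harmonic mean of precision and recall, i.e. the F1 score."""
    return 2 * precision * recall / (precision + recall)

# AntAI-Cognition: P = 88.55%, R = 80.72%
print(round(100 * hmean(0.8855, 0.8072), 2))  # → 84.45, matching the table
```

Average Precision, by contrast, summarizes the whole precision-recall curve across confidence thresholds, which is why its ordering can differ from the Hmean ranking (e.g. TH edges out AntAI-Cognition on AP but not on Hmean).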