method: DH_OCR (2021-03-24)

Authors: Qiang Zeng(曾强),Zhaolin You(游照林),Yuanyuan Chen(陈媛媛),Jianping Xiong(熊剑平)

Affiliation: Zhejiang Dahua Technology Co., Ltd.

Description: We used the EfficientNet series as our baseline, training models with different depths and widths. We also used synthetic samples generated by our own algorithm. To balance the data, the samples were processed with smoothing, cutting, and rotation. We trained the models on the ReCTS training data and the synthetic data; we also resampled the data and reweighted the loss.
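The authors do not publish their exact reweighting scheme; a common choice that the description is consistent with is inverse-frequency class weighting, sketched below (the function name and smoothing term are illustrative assumptions):

```python
from collections import Counter

def inverse_frequency_weights(labels, smoothing=1.0):
    """Per-class loss weights: rarer classes get larger weights.

    A minimal sketch of loss reweighting for class imbalance; the
    smoothing term keeps weights finite for very rare classes.
    """
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    # weight ∝ total / (n_classes * count)
    return {c: total / (n_classes * (cnt + smoothing))
            for c, cnt in counts.items()}

weights = inverse_frequency_weights(["a"] * 90 + ["b"] * 10)
# the rare class "b" receives a larger weight than "a"
```

Such weights would typically be passed to the classification loss so that errors on rare characters contribute more to the gradient.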

method: CNNs-IFLYTEK (2020-04-30)

Authors: IFLYTEK&USTC

Affiliation: IFLYTEK&USTC

Description: We use an ensemble of CNN models. These models are trained with different architectures (ResNeXt, DenseNet), different data augmentation methods (cutout, mixup, rotation, random crop), different input scales, and different data distributions. To generate different data distributions, we trained a GAN-based generative model to create new samples and used it to adjust the word distribution of the training set. We do not use any external real data.
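Of the augmentations named above, mixup is the least self-explanatory: it convex-combines two samples and their one-hot labels. A minimal pure-Python sketch (real pipelines apply this to image tensors; the flat-list form here is only for illustration):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup augmentation: blend two samples and their one-hot labels
    with a Beta(alpha, alpha)-distributed mixing coefficient."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

The mixed label stays a valid probability distribution, so the usual cross-entropy loss applies unchanged.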
Name                    Organization
Hao Wu (吴浩)           iFLYTEK (科大讯飞)
Chenyu Liu (刘辰宇)     iFLYTEK (科大讯飞)
Xiangxiang Wang (王翔翔) iFLYTEK (科大讯飞)
Yixing Zhu (朱意星)     USTC (中国科技大学)
Zhengyan Yang (杨争艳)  iFLYTEK (科大讯飞)
Changjie Wu (吴昌杰)    USTC (中国科技大学)
Mobai Xue (薛莫白)      USTC (中国科技大学)
Jiajia Wu (吴嘉嘉)      iFLYTEK (科大讯飞)
Bing Yin (殷兵)         iFLYTEK (科大讯飞)
Cong Liu (刘聪)         iFLYTEK (科大讯飞)
Jinshui Hu (胡金水)     iFLYTEK (科大讯飞)
Jun Du (杜俊)           USTC (中国科技大学)
Jianshu Zhang (张建树)  USTC (中国科技大学)
Lirong Dai (戴礼荣)     USTC (中国科技大学)

method: ATL Cangjie OCR (2020-03-20)

Authors: Yang Fan, Liu Yang, Guo Shan, Alibaba Turing Lab

Affiliation: Alibaba Turing Lab

Description: We used a customized ResNeXt-based classification network with an attention mechanism. An ensemble was formed from ResNeXt models of different depths with different input resolutions.
In addition, we used external data generated by our own algorithm. To balance the samples, data augmentation was applied, including rotation, noise, scaling, and perspective transforms. We trained on all the samples together, and then fine-tuned with samples selected by active learning along with the ReCTS training data.
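The ensembles described above (different depths, different input resolutions) are typically combined by averaging the per-class probabilities of each member and taking the argmax. The exact combination rule used by the teams is not stated; the sketch below assumes simple uniform averaging:

```python
def ensemble_predict(prob_lists):
    """Average softmax probability vectors from several models
    (e.g. different depths / input resolutions) and return the
    index of the highest-scoring class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n_models
           for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Averaging probabilities (rather than hard votes) lets a confident member outweigh several uncertain ones, which is why it is the usual default for classification ensembles.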

Ranking Table

Date        Method                                 Result
----------  -------------------------------------  ------
2021-03-24  DH_OCR                                 97.73%
2020-04-30  CNNs-IFLYTEK                           97.59%
2020-03-20  ATL Cangjie OCR                        97.53%
2019-05-01  BASELINE v1                            97.37%
2019-04-30  Amap_CVLab                             97.27%
2024-01-09  UCR                                    96.55%
2020-05-21  My method                              96.19%
2019-04-30  TPS-ResNet v1                          96.11%
2019-04-30  SANHL_v4                               95.94%
2020-07-15  Baseline                               95.77%
2020-07-10  Baseline                               95.58%
2019-04-22  12                                     95.36%
2019-04-29  Tencent-DPPR Team                      95.12%
2019-04-23  123                                    95.00%
2020-07-24  Method                                 94.98%
2019-04-30  ResNet_HUSTer                          94.73%
2020-07-02  Baseline                               94.70%
2019-04-29  ResNet_HUST                            94.54%
2019-04-30  ReCTS_Task1                            93.89%
2019-04-30  Task1-re5                              93.87%
2019-09-06  cool and cool                          93.55%
2019-04-30  ocr_densenet                           93.47%
2019-04-30  MixNet based on multiple classic CNN   93.19%
2019-04-29  class_5435_scale_2                     92.95%
2019-04-23  Task 1 - Character Recognition         92.81%
2019-11-23  test                                   92.44%
2019-04-23  task1_190423                           91.04%
2019-04-30  task1_3                                89.91%
2019-04-27  Siamese Net                            89.79%
2019-04-26  Subm190426_ensemble01                  89.14%
2019-04-29  task1_04                               88.82%
2019-04-26  casual train train                     87.32%
2019-04-30  LCT_OCR (中国科学院信息工程研究所)     8.86%
2019-04-28  jxl_ocr                                6.68%
