Method: SRFormer (ResNet50-#1seg)

Date: 2023-08-09

Authors: Qingwen Bu

Affiliation: Shanghai Jiao Tong University

Description: We first pre-train our model on SynthText150k, MLT17, LSVT, and ICDAR19-ArT for 300k iterations, then fine-tune it on ArT for 50k iterations. No test-time augmentation (TTA) or model ensembling is employed.
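
A minimal sketch of this two-stage schedule, in Python. Only the dataset names and iteration counts come from the entry; train_step() is a hypothetical stand-in for one optimizer update, not part of the submission.

```python
# Sketch of the schedule: 300k iterations of mixed-dataset
# pre-training, then 50k iterations of fine-tuning on ArT only.
# train_step() is hypothetical; dataset names are from the entry.
PRETRAIN_DATASETS = ["SynthText150k", "MLT17", "LSVT", "ICDAR19-ArT"]
FINETUNE_DATASETS = ["ArT"]

def run_schedule(train_step):
    for it in range(300_000):      # stage 1: mixed-dataset pre-training
        train_step(PRETRAIN_DATASETS, it)
    for it in range(50_000):       # stage 2: fine-tuning on ArT only
        train_step(FINETUNE_DATASETS, it)
```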

Method: TextFuseNet (ResNeXt-101)

Date: 2020-10-01

Authors: Jian Ye, Zhe Chen, Juhua Liu, Bo Du

Affiliation: Wuhan University

Email: leaf-yej@whu.edu.cn

Description: This is a preliminary evaluation result of TextFuseNet with a ResNeXt-101 backbone. Multi-scale training and single-scale testing are used to obtain the final results. Sigma Lab, Wuhan University.
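
"Multi-scale training, single-scale testing" is a common detection practice: the input's short side is resized to a randomly sampled scale during training but kept fixed at inference. A minimal sketch follows; the scale values are illustrative assumptions, not TextFuseNet's published settings.

```python
import random

# Illustrative scales only; TextFuseNet's actual settings may differ.
TRAIN_SHORT_SIDES = [640, 720, 800, 880, 960]  # sampled per training image
TEST_SHORT_SIDE = 800                          # fixed at test time

def pick_short_side(training: bool) -> int:
    """Return the short-side length to resize an image to."""
    return random.choice(TRAIN_SHORT_SIDES) if training else TEST_SHORT_SIDE
```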

Method: TextFuseNet (ResNet-50)

Date: 2021-03-26

Authors: Jian Ye, Zhe Chen, Juhua Liu, Bo Du

Affiliation: Wuhan University, The University of Sydney

Email: leaf-yej@whu.edu.cn

Description: This is a preliminary evaluation result of TextFuseNet with a ResNet50-FPN backbone. Multi-scale training and single-scale testing are used to obtain the final results. Sigma Lab, Wuhan University.

Ranking Table

Date         Method                                     Recall    Precision   Hmean
2023-08-09   SRFormer (ResNet50-#1seg)                  73.51%    86.08%      79.30%
2020-10-01   TextFuseNet (ResNeXt-101)                  72.77%    85.42%      78.59%
2021-03-26   TextFuseNet (ResNet-50)                    69.42%    82.59%      75.44%
2019-04-30   Fudan-Supremind Detection v3               71.61%    79.26%      75.24%
2019-04-29   SRCB_Art                                   70.30%    80.41%      75.02%
2019-04-30   DMText_art                                 66.15%    85.09%      74.43%
2019-04-25   Art detect by vivo                         57.15%    80.72%      66.92%
2019-04-28   Improved Progressive scale expansion Net   52.24%    75.88%      61.88%
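
Hmean is the harmonic mean of Recall and Precision, so the last column follows from the first two. A short check against the top-ranked row:

```python
# Hmean (F-measure) as the harmonic mean of recall and precision.
def hmean(recall: float, precision: float) -> float:
    return 2 * recall * precision / (recall + precision)

# Spot-check against the SRFormer row above: 73.51% / 86.08% -> 79.30%.
assert abs(hmean(0.7351, 0.8608) - 0.7930) < 5e-5
```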
