Method: SRFormer (ResNet50-#1seg)
Date: 2023-08-09
Authors: Qingwen Bu
Affiliation: Shanghai Jiao Tong University
Description: We first pre-train our model on SynthText150k, MLT17, LSVT, and ICDAR19-ArT for 300k iterations and then fine-tune it on ArT for 50k iterations. No TTA or ensemble method is employed.
Method: TextFuseNet (ResNeXt-101)
Date: 2020-10-01
Authors: Jian Ye, Zhe Chen, Juhua Liu, Bo Du
Affiliation: Wuhan University
Email: leaf-yej@whu.edu.cn
Description: This is a preliminary evaluation result of TextFuseNet with ResNeXt-101. Multi-scale training and single-scale testing are used to obtain the final results. Sigma Lab, Wuhan University.
Method: TextFuseNet (ResNet-50)
Date: 2021-03-26
Authors: Jian Ye, Zhe Chen, Juhua Liu, Bo Du
Affiliation: Wuhan University, The University of Sydney
Email: leaf-yej@whu.edu.cn
Description: This is a preliminary evaluation result of TextFuseNet with ResNet50-FPN. Multi-scale training and single-scale testing are used to obtain the final results. Sigma Lab, Wuhan University.
Date | Method | Recall | Precision | Hmean
---|---|---|---|---
2023-08-09 | SRFormer (ResNet50-#1seg) | 73.51% | 86.08% | 79.30%
2020-10-01 | TextFuseNet (ResNeXt-101) | 72.77% | 85.42% | 78.59%
2021-03-26 | TextFuseNet (ResNet-50) | 69.42% | 82.59% | 75.44%
2019-04-30 | Fudan-Supremind Detection v3 | 71.61% | 79.26% | 75.24%
2019-04-29 | SRCB_Art | 70.30% | 80.41% | 75.02%
2019-04-30 | DMText_art | 66.15% | 85.09% | 74.43%
2019-04-25 | Art detect by vivo | 57.15% | 80.72% | 66.92%
2019-04-28 | Improved Progressive Scale Expansion Net | 52.24% | 75.88% | 61.88%