Method: SRFormer (ResNet50-#1seg), 2023-08-09

Author: Qingwen Bu

Affiliation: Shanghai Jiao Tong University

Description: We first pre-train our model on SynthText150k, MLT17, LSVT, and ICDAR19-ArT for 300k iterations, and then fine-tune it on ArT for 50k iterations. No test-time augmentation (TTA) or model ensembling is employed.