method: TextFuseNet
Date: 2020-07-31
Authors: Jian Ye, Zhe Chen, Juhua Liu and Bo Du
Affiliation: Wuhan University, The University of Sydney
Email: liujuhua@whu.edu.cn
Description: Arbitrary-shape text detection in natural scenes is an extremely challenging task. Unlike existing text detection approaches that perceive text through limited feature representations, we propose a novel framework, namely TextFuseNet, that exploits richer fused features for text detection. More specifically, we perceive text at three levels of feature representation, i.e., character-, word- and global-level, and then introduce a novel text representation fusion technique to achieve robust detection of arbitrarily shaped text. The multi-level representation can adequately describe text by dissecting it into individual characters while still maintaining its general semantics. TextFuseNet then collects and merges the text features from the different levels using a multi-path fusion architecture that effectively aligns and fuses the different representations. In practice, TextFuseNet learns a more adequate description of arbitrarily shaped text, suppressing false positives and producing more accurate detection results. The framework can also be trained with weak supervision on datasets that lack character-level annotations. Experiments on several datasets show that TextFuseNet achieves state-of-the-art performance, with F-measures of 94.3% on ICDAR2013, 92.1% on ICDAR2015, 87.1% on Total-Text and 86.6% on CTW-1500.
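The multi-path fusion step in the description lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration in the spirit of the paragraph above: three parallel convolutions over character-, word- and global-level features, summed and passed through a fusing convolution. All module names and the exact layer layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiPathFusion(nn.Module):
    """Toy sketch of fusing character-, word- and global-level features.

    Hypothetical names and layout, not TextFuseNet's actual code.
    Assumes the three feature maps were already aligned (e.g. via
    RoIAlign) to the same spatial size and channel count.
    """

    def __init__(self, channels: int = 256):
        super().__init__()
        # One 3x3 conv per representation path, then a 1x1 fusing conv.
        self.char_path = nn.Conv2d(channels, channels, 3, padding=1)
        self.word_path = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_path = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, char_feat, word_feat, global_feat):
        # Element-wise sum of the per-path projections, then fuse.
        merged = (self.char_path(char_feat)
                  + self.word_path(word_feat)
                  + self.global_path(global_feat))
        return torch.relu(self.fuse(merged))

if __name__ == "__main__":
    f = torch.randn(1, 256, 14, 14)  # three aligned feature maps for one RoI
    print(MultiPathFusion()(f, f, f).shape)  # torch.Size([1, 256, 14, 14])
```

Summing the projected paths before a 1x1 convolution is one common way to "align and fuse" same-shaped feature maps; the paper's actual fusion may differ in detail.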
method: CRAFT
Date: 2018-11-07
Authors: Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee
Affiliation: Clova AI OCR Team, NAVER/LINE Corp.
Description: We propose a novel text detector called CRAFT (Character Region Awareness For Text detection). The method detects text areas effectively by exploring each character and the affinity between characters. To overcome the lack of individual character-level annotations, our framework exploits pseudo character-level bounding boxes acquired by a learned interim model in a weakly-supervised manner.
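As a rough illustration of the character/affinity idea, the sketch below derives word boxes from two CRAFT-style score maps by binarizing their union and taking connected components with OpenCV. The thresholds and grouping rule are illustrative assumptions, not the official post-processing.

```python
import cv2
import numpy as np

def boxes_from_scores(region, affinity, region_thr=0.7, link_thr=0.4):
    """Group character regions into word boxes via the affinity map.

    `region` and `affinity` are HxW float maps in [0, 1], as CRAFT-style
    models predict; the thresholds here are illustrative assumptions.
    """
    # A pixel counts as text if its region score is high, or if it links
    # adjacent characters (affinity): union of the two binarized maps.
    text_mask = ((region > region_thr) | (affinity > link_thr)).astype(np.uint8)
    n_labels, labels = cv2.connectedComponents(text_mask, connectivity=4)
    boxes = []
    for k in range(1, n_labels):  # label 0 is background
        ys, xs = np.where(labels == k)
        pts = np.stack([xs, ys], axis=1).astype(np.float32)
        # Minimum-area rotated rectangle around each component.
        boxes.append(cv2.boxPoints(cv2.minAreaRect(pts)))
    return boxes
```

Because affinity pixels bridge neighboring characters, each connected component in the union mask tends to cover a whole word rather than a single character, which is what makes the character-level prediction usable for word-level detection.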
method: FOTS
Date: 2018-01-22
Authors: Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, Junjie Yan
Description: A unified, end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information between the two complementary tasks.
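A toy skeleton of the shared-computation idea: one backbone pass feeds both a detection head and a recognition head. Module names and layer sizes are hypothetical; FOTS additionally uses a RoIRotate operator, omitted here, to crop oriented text regions from the shared features before recognition.

```python
import torch
import torch.nn as nn

class SharedTextSpotter(nn.Module):
    """Toy end-to-end spotter skeleton: one backbone, two heads.

    Hypothetical names and sizes; not the FOTS implementation. FOTS
    inserts a RoIRotate op between the shared features and the
    recognition branch, which this sketch omits.
    """

    def __init__(self, channels: int = 32, num_classes: int = 37):
        super().__init__()
        # Shared convolutional features, computed once per image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-pixel text/non-text score.
        self.det_head = nn.Conv2d(channels, 1, 1)
        # Recognition head: per-location class logits for a CTC decoder.
        self.rec_head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, images):
        feats = self.backbone(images)            # computed once, shared
        score_map = torch.sigmoid(self.det_head(feats))
        rec_logits = self.rec_head(feats)        # would follow RoIRotate in FOTS
        return score_map, rec_logits
```

The point of the shared backbone is that detection and recognition reuse the same feature computation, which is where the "Fast" in FOTS comes from relative to running two separate networks.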
Date | Method | Recall | Precision | Hmean
---|---|---|---|---
2020-07-31 | TextFuseNet | 90.78% | 95.58% | 93.11%
2018-11-07 | CRAFT | 89.04% | 93.93% | 91.42%
2018-01-22 | FOTS | 89.68% | 91.43% | 90.55%
2019-05-03 | Mask Textspotter | 87.40% | 93.55% | 90.37%
2019-07-12 | stela | 88.13% | 91.38% | 89.73%
2020-05-19 | Craft++ | 86.67% | 91.07% | 88.82%
2020-11-10 | Hancom Vision | 81.74% | 92.94% | 86.98%
2017-03-22 | MCLAB_TextBoxes_v2 | 83.29% | 89.94% | 86.49%
2016-12-16 | RRPN-4 | 83.56% | 89.53% | 86.44%
2018-01-04 | crpn | 82.28% | 89.65% | 85.81%
2016-11-13 | RRPN-3 | 70.50% | 88.23% | 78.38%
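The Hmean column is the harmonic mean (F-measure) of recall and precision, which a couple of lines of Python reproduce from the rounded table entries; small last-digit differences can arise because the leaderboard computes it from unrounded scores.

```python
def hmean(recall: float, precision: float) -> float:
    """Harmonic mean of recall and precision (the Hmean/F-measure column)."""
    return 2 * recall * precision / (recall + precision)

# Recomputing the CRAFT row from its rounded recall and precision:
print(f"{hmean(0.8904, 0.9393):.2%}")  # 91.42%, matching the table
```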