method: TextFuseNet
Date: 2020-07-31
Authors: Jian Ye, Zhe Chen, Juhua Liu and Bo Du
Affiliation: Wuhan University, The University of Sydney
Email: liujuhua@whu.edu.cn
Description: Arbitrary shape text detection in natural scenes is an extremely challenging task. Unlike existing text detection approaches that only perceive texts based on limited feature representations, we propose a novel framework, namely TextFuseNet, to exploit the use of richer features fused for text detection. More specifically, we propose to perceive texts from three levels of feature representations, i.e., character-, word- and global-level, and then introduce a novel text representation fusion technique to help achieve robust arbitrary text detection. The multi-level feature representation can adequately describe texts by dissecting them into individual characters while still maintaining their general semantics. TextFuseNet then collects and merges the texts’ features from different levels using a multi-path fusion architecture which can effectively align and fuse different representations. In practice, our proposed TextFuseNet can learn a more adequate description of arbitrary-shape texts, suppressing false positives and producing more accurate detection results. Our proposed framework can also be trained with weak supervision for those datasets that lack character-level annotations. Experiments on several datasets show that the proposed TextFuseNet achieves state-of-the-art performance. Specifically, we achieve an F-measure of 94.3% on ICDAR2013, 92.1% on ICDAR2015, 87.1% on Total-Text and 86.6% on CTW-1500, respectively.
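As a rough illustration of the multi-path fusion idea described above, the following minimal PyTorch sketch aligns character-, word- and global-level feature maps and merges them through a small convolutional path. The module name `MultiPathFusion`, the channel count, and the conv layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multi-level feature fusion in the spirit of TextFuseNet:
# character-, word- and global-level features are aligned to a common channel
# dimension and merged through a small convolutional fusion path.
import torch
import torch.nn as nn


class MultiPathFusion(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # one 1x1 conv per path to align the three feature levels before fusion
        self.align = nn.ModuleList([nn.Conv2d(channels, channels, 1) for _ in range(3)])
        # fusion head: 3x3 conv followed by 1x1 conv, as a minimal stand-in
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, char_feat, word_feat, global_feat):
        # element-wise sum of the aligned character-, word- and global-level maps
        paths = [char_feat, word_feat, global_feat]
        fused = sum(align(f) for align, f in zip(self.align, paths))
        return self.fuse(fused)


if __name__ == "__main__":
    f = torch.randn(1, 256, 32, 32)      # stand-in for an RoI-aligned feature map
    out = MultiPathFusion()(f, f, f)
    print(out.shape)                     # torch.Size([1, 256, 32, 32])
```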
method: stela
Date: 2019-07-12
Authors: Linjie Deng
Description: STELA is a simple and intuitive method for multi-oriented text detection based on RetinaNet. The key idea is to use learned anchors, obtained through a regression operation, in place of the original anchors when making the final predictions.
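The learned-anchor idea can be sketched with plain box-delta decoding: a first regression step refines the original anchor, and the refined (learned) anchor, rather than the original one, is then decoded into the final prediction. The `decode` helper and the delta values below are hypothetical and assume the standard (dx, dy, dw, dh) parameterization; this is not the author's code.

```python
# Minimal sketch of the "learned anchor" idea: refine the anchor once, then
# predict the final box relative to the refined anchor instead of the original.
import numpy as np


def decode(anchor, deltas):
    """Apply (dx, dy, dw, dh) deltas to a (cx, cy, w, h) anchor."""
    cx, cy, w, h = anchor
    dx, dy, dw, dh = deltas
    return np.array([cx + dx * w, cy + dy * h, w * np.exp(dw), h * np.exp(dh)])


original_anchor = np.array([50.0, 50.0, 32.0, 32.0])   # (cx, cy, w, h)
stage1_deltas = np.array([0.10, -0.05, 0.20, 0.10])    # predicted refinement
learned_anchor = decode(original_anchor, stage1_deltas)

stage2_deltas = np.array([0.02, 0.01, -0.05, 0.03])    # final regression output
final_box = decode(learned_anchor, stage2_deltas)      # decoded from the learned anchor
print(learned_anchor, final_box)
```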
method: RRPN-4
Date: 2016-12-16
Authors: Jianqi Ma, Weiyuan Shao, Hao Ye, Li Wang, Hong Wang, Yingbin Zheng, Xiangyang Xue
Description: This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks (RRPN), which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit the text region in terms of orientation. The Rotation Region-of-Interest (RRoI) pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches.
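A hedged sketch of the rotated-proposal representation described above: proposals carry an orientation angle, and regression adjusts (cx, cy, w, h, θ). The delta parameterization and the `corners` helper are common conventions assumed here for illustration, not code from the RRPN paper.

```python
# Illustrative rotated-box decoding: an axis-aligned anchor plus angle-aware
# deltas yields an inclined proposal, which can be expanded into corner points.
import math
import numpy as np


def decode_rotated(anchor, deltas):
    """Decode (dx, dy, dw, dh, dtheta) deltas against a (cx, cy, w, h, theta) anchor."""
    cx, cy, w, h, theta = anchor
    dx, dy, dw, dh, dt = deltas
    return np.array([cx + dx * w, cy + dy * h,
                     w * math.exp(dw), h * math.exp(dh),
                     theta + dt])


def corners(box):
    """Return the 4 corner points of a rotated box (cx, cy, w, h, theta in radians)."""
    cx, cy, w, h, theta = box
    c, s = math.cos(theta), math.sin(theta)
    half = np.array([[-w / 2, -h / 2], [w / 2, -h / 2], [w / 2, h / 2], [-w / 2, h / 2]])
    rot = np.array([[c, -s], [s, c]])
    return half @ rot.T + np.array([cx, cy])


anchor = np.array([100.0, 60.0, 80.0, 20.0, 0.0])                # axis-aligned anchor
proposal = decode_rotated(anchor, [0.05, 0.0, 0.1, 0.0, math.pi / 6])
print(corners(proposal))                                         # 4 inclined corner points
```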
Date | Method | Recall | Precision | Hmean
---|---|---|---|---
2020-07-31 | TextFuseNet | 92.09% | 97.27% | 94.61%
2019-07-12 | stela | 89.66% | 93.74% | 91.65%
2016-12-16 | RRPN-4 | 87.85% | 94.91% | 91.25%
2017-03-22 | MCLAB_TextBoxes_v2 | 84.38% | 91.21% | 87.67%
2018-01-04 | crpn | 83.80% | 91.90% | 87.66%
2019-06-26 | std(single-scale) | 77.13% | 84.48% | 80.64%
2016-11-13 | RRPN-3 | 71.89% | 90.22% | 80.02%
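Hmean in the table is the harmonic mean (F-measure) of Recall and Precision; a quick check against the TextFuseNet row:

```python
# Harmonic mean of recall and precision, verified against the top row above.
def hmean(recall: float, precision: float) -> float:
    return 2 * recall * precision / (recall + precision)


print(f"{hmean(0.9209, 0.9727):.4f}")  # 0.9461, matching TextFuseNet's 94.61%
```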