method: FOTS (2018-01-22)

Authors: Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, Junjie Yan

Description: A unified end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information between the two complementary tasks.

method: CRAFT (2019-04-08)

Authors: Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee

Description: We propose a novel text detector called CRAFT. The proposed method effectively detects text areas by exploring each character and the affinity between characters. To overcome the lack of individual character-level annotations, our framework exploits pseudo character-level bounding boxes acquired by the learned interim model in a weakly-supervised manner.
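In the weakly-supervised scheme, each word region's pseudo character boxes are weighted by how well the interim model's character split matches the word's known length. A minimal sketch of that confidence score, assuming the formulation in the CRAFT paper (the function name is hypothetical):

```python
def pseudo_label_confidence(word_length: int, detected_chars: int) -> float:
    """Confidence for a word's pseudo character boxes (hypothetical helper).

    If the interim model splits a word of length l(w) into l_c(w) character
    boxes, the word's confidence is
        s(w) = (l(w) - min(l(w), |l(w) - l_c(w)|)) / l(w),
    so a perfect split gives 1.0 and large mismatches approach 0.0.
    """
    l_w = word_length
    mismatch = min(l_w, abs(l_w - detected_chars))
    return (l_w - mismatch) / l_w

# A 5-letter word split into 5 boxes is fully trusted; into 3 boxes, less so.
print(pseudo_label_confidence(5, 5))  # → 1.0
print(pseudo_label_confidence(5, 3))  # → 0.6
```

Low-confidence regions then contribute less to the character-region loss, so poor interim predictions do not reinforce themselves.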

Clova AI OCR Team, NAVER/LINE Corp.

method: PixelLink (2017-09-13)

Authors: Dan Deng

Description: PixelLink: Detecting Scene Text via Instance Segmentation

Accepted by AAAI 2018

Most state-of-the-art scene text detection algorithms are deep learning based methods that depend on bounding box regression and perform at least two kinds of predictions: text/non-text classification and location regression. Regression plays a key role in the acquisition of bounding boxes in these methods, but it is not indispensable, because text/non-text prediction can also be considered a kind of semantic segmentation that contains full location information in itself. However, text instances in scene images often lie very close to each other, making them difficult to separate via semantic segmentation; instance segmentation is therefore needed to address this problem. In this paper, PixelLink, a novel scene text detection algorithm based on instance segmentation, is proposed. Text instances are first segmented out by linking pixels within the same instance together. Text bounding boxes are then extracted directly from the segmentation result without location regression. Experiments show that, compared with regression-based methods, PixelLink achieves better or comparable performance on several benchmarks, while requiring far fewer training iterations and less training data.
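The linking step described above can be sketched as a union-find over positive pixels: two neighbouring text pixels are merged into one instance when the link between them is predicted positive, and each instance's bounding box then falls out of its pixel coordinates. A simplified toy sketch with 4-neighbour links only (PixelLink itself predicts 8 neighbour links and uses `cv2.minAreaRect` for oriented boxes):

```python
def link_pixels(text_mask, link_right, link_down):
    """Group positive pixels into instances via union-find.

    text_mask[y][x]  - 1 if pixel (y, x) is predicted as text
    link_right[y][x] - 1 if pixel (y, x) links to (y, x+1)
    link_down[y][x]  - 1 if pixel (y, x) links to (y+1, x)
    Returns a dict mapping instance root -> list of (y, x) pixels.
    """
    h, w = len(text_mask), len(text_mask[0])
    parent = {(y, x): (y, x)
              for y in range(h) for x in range(w) if text_mask[y][x]}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for y in range(h):
        for x in range(w):
            if not text_mask[y][x]:
                continue
            if x + 1 < w and text_mask[y][x + 1] and link_right[y][x]:
                union((y, x), (y, x + 1))
            if y + 1 < h and text_mask[y + 1][x] and link_down[y][x]:
                union((y, x), (y + 1, x))

    instances = {}
    for p in parent:
        instances.setdefault(find(p), []).append(p)
    return instances

# Two runs of text pixels separated by an unlinked gap yield two instances;
# an axis-aligned box per instance comes straight from min/max coordinates.
mask = [[1, 1, 0, 1, 1]]
link_r = [[1, 0, 0, 1, 0]]
link_d = [[0, 0, 0, 0, 0]]
boxes = []
for pixels in link_pixels(mask, link_r, link_d).values():
    ys = [y for y, _ in pixels]
    xs = [x for _, x in pixels]
    boxes.append((min(ys), min(xs), max(ys), max(xs)))
print(sorted(boxes))  # → [(0, 0, 0, 1), (0, 3, 0, 4)]
```

This illustrates why no regression head is needed: once pixels are grouped, the box geometry is read directly off the segmentation result.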

Using only the 1,000 images in IC15-train, the best performance is 83.7%; with SynthText added for pretraining, it rises to 85%.

Ranking Table

Date        Method                   Recall   Precision  Hmean
2019-03-08  R2CNN++ (single scale)   78.86%   81.33%     80.08%
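The third percentage in the row above is the harmonic mean of recall and precision (the Hmean / F-measure typically reported on such leaderboards), which can be checked directly:

```python
def hmean(recall: float, precision: float) -> float:
    """Harmonic mean of recall and precision (the F1 / Hmean score)."""
    return 2 * recall * precision / (recall + precision)

# Reproduces the R2CNN++ row: 78.86% recall, 81.33% precision.
print(round(hmean(78.86, 81.33), 2))  # → 80.08
```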

Ranking Graphic