Overview - ICDAR2017 Competition on Multi-lingual scene text detection and script identification

RRC-MLT Call for Participation: RRC-MLT-2017-CFP1.pdf

Text detection and recognition in natural environments are key components of many applications, ranging from business card digitization to indexing shops in a street. This new competition aims at assessing the ability of state-of-the-art methods to detect text when the user faces various scripts and languages, a setting that prevents relying on much a priori knowledge, as in modern cities where multiple cultures live and communicate together. This situation is also frequent when analyzing streams of content gathered on the Internet. This competition is therefore an extension of the existing Robust Reading Competition (RRC), which has been held since 2003 both at ICDAR and in an online context.

Registration

To register for the MLT challenge of the RRC 2017 competition, please register on the RRC portal as a user (if you are not already a registered user); this will give you access to the "downloads" section.

Registering does not oblige you to participate or submit results; it is an expression of interest.

You can participate in one or more tasks of the challenge; it is not obligatory to participate in all of them.

Motivation and relevance to ICDAR community

In this proposed competition we try to answer the question of whether text detection methods (deep learning-based or otherwise) can handle different scripts without fundamental changes to the underlying algorithms and techniques, or whether script-specific methods are really needed. The ultimate goal of robust reading is to be able to read the text that appears in any captured image, regardless of image source (type), image quality, text script or other difficulties. Many research works have been devoted to solving this problem. The previous editions of the RRC competitions, among other works, have provided useful datasets that help researchers tackle each of these problems in order to robustly read text in natural scene images. In this competition, we extend the state of the art further by tackling the problem of multi-lingual text detection and script identification. In other words, submitted methods should be script-robust text detection methods.

Despite the datasets already available for scene text detection and for script identification, our proposed dataset offers interesting novel aspects. It is composed of complete scene images covering 9 languages that represent 6 different scripts, it combines text detection with script identification, and it contains many more images than related datasets, with an equal number of images per script. This makes it a useful benchmark for multi-lingual scene text detection. The dataset, along with its ground truth, also contains all the information necessary to prepare text recognition systems. The considered languages are: Chinese, Japanese, Korean, English, French, Arabic, Italian, German and Indian.
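As a rough illustration of how the 9 listed languages could group into 6 scripts, the sketch below uses a hypothetical mapping (the exact script labels are defined by the official ground truth, and the grouping of the Latin-alphabet languages and the "Indian" label is our assumption):

```python
# Hypothetical grouping of the 9 competition languages into 6 scripts.
# The actual script labels come from the official ground-truth files.
LANGUAGE_TO_SCRIPT = {
    "Chinese": "Chinese",
    "Japanese": "Japanese",
    "Korean": "Korean",
    "English": "Latin",   # the four European languages share
    "French": "Latin",    # the Latin script
    "Italian": "Latin",
    "German": "Latin",
    "Arabic": "Arabic",
    "Indian": "Indic",
}

scripts = sorted(set(LANGUAGE_TO_SCRIPT.values()))
print(len(LANGUAGE_TO_SCRIPT), "languages,", len(scripts), "scripts")
# → 9 languages, 6 scripts
```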

Such a dataset is the natural extension of the RRC series, with more scripts and more images, while focusing only on intentional (or focused) text. It addresses the community's need for improved and robust scene text detection. Datasets following this idea are being created because industry and regular users need them. However, such datasets -- we argue -- cannot be considered for benchmarking multi-script scene text detection.

The target audience of this dataset is not only the ICDAR community, but also the wider computer vision community. In both communities, researchers work on scene analysis, scene text detection and recognition, text image quality and script identification.

The datasets available in the literature for scene text detection are mostly not multilingual. Those that do contain multi-script text are either built for Indian scripts only, or cover a small number of scripts (2-4) with relatively few images. Datasets created for script identification (classification), on the other hand, are composed of cropped word images.

Important Dates

1 Feb to 31 Mar

  • Expression of interest by participants opens
  • Questions about the details of the competition asked and answered
  • Initial website available

1 Mar

  • Competition formal announcement

31 Mar

  • Website fully ready
  • Registration of participants continues
  • Evaluation protocol, file formats etc. available
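The evaluation protocol and file formats will be published on the website; previous RRC detection tasks have relied on overlap-based matching between detected and ground-truth regions. The sketch below shows a minimal intersection-over-union (IoU) computation for axis-aligned boxes, purely as an illustration of that idea; the actual MLT protocol works on quadrilateral text regions and may match detections differently:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Illustrative sketch only: the official MLT evaluation protocol
    (published on the website) defines the actual matching rules.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A detection is commonly counted as correct when IoU >= 0.5
print(iou((0, 0, 10, 10), (0, 0, 10, 5)))  # → 0.5
```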

1 Apr to 31 May

  • Training set available; training period (MLT challenge in progress)
  • Participants evaluate their methods on the training/validation sets and prepare their submissions
  • Registration is still open

1 Jun

  • Registration closes for this MLT challenge for ICDAR-2017

1 Jun to 1 Jul

  • Test set available

1 Jul

  • Deadline for submission of results by participants

1 Nov

  • Public release of the full dataset