Introduction

"Robust Reading" refers to the research area dealing with the interpretation of written communication in unconstrained settings. Robust Reading is typically associated with the detection and recognition of textual information in scene images, but in the wider sense it covers techniques and methodologies developed specifically for text containers other than scanned paper documents, including born-digital images and videos, to mention a few.

Robust Reading lies at the meeting point between camera-based document analysis and scene interpretation, and serves as common ground between the document analysis community and the wider computer vision community.

The ICDAR Robust Reading Competition has been held six times [1-9], in 2003, 2005, 2011, 2013, 2015 and 2017. The competition is organized around challenges that represent specific application domains for robust reading, selected to cover a wide range of real-world situations. Each challenge is in turn structured around a number of tasks.

ICDAR 2019 Robust Reading competitions

The ICDAR 2019 Robust Reading competition builds upon the success of the previous editions. The challenges introduced for the 2019 edition are summarized in the following figure:

[Figure: Timeline of the ICDAR 2019 Robust Reading challenges]

More information about each challenge is provided on its respective page.

References

  1. S.M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, R. Young, "ICDAR 2003 Robust Reading Competitions", In Proc. 7th International Conference on Document Analysis and Recognition, IEEE Computer Society, 2003, pp. 682-682.

  2. D. Karatzas, S. Robles Mestre, J. Mas, F. Nourbakhsh, P. Pratim Roy, "ICDAR 2011 Robust Reading Competition - Challenge 1: Reading Text in Born-Digital Images (Web and Email)", In Proc. 11th International Conference on Document Analysis and Recognition, IEEE CPS, 2011, pp. 1485-1490.

  3. A. Shahab, F. Shafait, A. Dengel, "ICDAR 2011 Robust Reading Competition - Challenge 2: Reading Text in Scene Images", In Proc. 11th International Conference on Document Analysis and Recognition, IEEE CPS, 2011, pp. 1491-1496.

  4. D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. Gomez, S. Robles, J. Mas, D. Fernandez, J. Almazan, L.P. de las Heras, "ICDAR 2013 Robust Reading Competition", In Proc. 12th International Conference on Document Analysis and Recognition, IEEE CPS, 2013, pp. 1115-1124.

  5. D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. Ramaseshan Chandrasekhar, S. Lu, F. Shafait, S. Uchida, E. Valveny, "ICDAR 2015 Competition on Robust Reading", In Proc. 13th International Conference on Document Analysis and Recognition, IEEE, 2015, pp. 1156-1160.

  6. M. Iwamura, N. Morimoto, K. Tainaka, D. Bazazian, L. Gomez, D. Karatzas, "ICDAR2017 Robust Reading Challenge on Omnidirectional Video", In Proc. 14th IAPR International Conference on Document Analysis and Recognition, IEEE, 2017, pp. 1448-1453.

  7. R. Gomez, B. Shi, L. Gomez, L. Neumann, A. Veit, J. Matas, S. Belongie, D. Karatzas, "ICDAR2017 Robust Reading Challenge on COCO-Text", In Proc. 14th IAPR International Conference on Document Analysis and Recognition, IEEE, 2017, pp. 1435-1443.

  8. N. Nayef, F. Yin, I. Bizid, H. Choi, Y. Feng, D. Karatzas, Z. Luo, U. Pal, C. Rigaud, J. Chazalon, W. Khlif, "ICDAR2017 Robust Reading Challenge on Multi-Lingual Scene Text Detection and Script Identification (RRC-MLT)", In Proc. 14th IAPR International Conference on Document Analysis and Recognition, IEEE, 2017, pp. 1454-1459.

  9. C. Yang, X.C. Yin, H. Yu, D. Karatzas, Y. Cao, "ICDAR2017 Robust Reading Challenge on Text Extraction from Biomedical Literature Figures (DeTEXT)", In Proc. 14th IAPR International Conference on Document Analysis and Recognition, IEEE, 2017, pp. 1444-1447.


10491 registered users
122 countries
38511 evaluated methods (*)
1150 public methods

(*) overall number of submissions including private and public ones