Overview - ICDAR 2017 Robust Reading Challenge on Omnidirectional Video
This challenge focuses on scene text localisation and recognition on the Downtown Osaka Scene Text (DOST) dataset. Five tasks will be opened within this Challenge: Text Localisation in Videos, Text Localisation in Still Images, Cropped Word Recognition, End-to-End Recognition in Videos, and End-to-End Recognition in Still Images.
The DOST dataset preserves scene texts observed in the real environment as they were. The dataset contains videos (sequential images) captured in shopping streets in downtown Osaka with an omnidirectional camera. Use of the omnidirectional camera removes the bias a camera operator would otherwise introduce by choosing what to capture. The sequential images in the dataset allow methods to exploit temporal information across frames.
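Text localisation results in challenges like this are commonly scored by the intersection-over-union (IoU) overlap between detected and ground-truth boxes. The helper below is an illustrative sketch of that standard measure for axis-aligned boxes, not the official DOST evaluation code:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    # Intersection rectangle (width/height clamped at zero when disjoint)
    iw = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is then typically counted as a match when its IoU with a ground-truth box exceeds a threshold such as 0.5.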
Important Notice about Ground Truth Quality and Renovation
Before releasing the DOST dataset, we decided to improve (renovate) the ground truth (GT) quality of the dataset. In particular, the transcriptions and bounding boxes of the GTs were improved, and more non-readable text regions are now covered as "Don't care" regions. However, the renovation took more time than expected.
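"Don't care" regions are usually handled by excluding any detection that mostly overlaps such a region before scoring, so methods are neither rewarded nor penalised for reading unreadable text. The sketch below illustrates that convention under assumed axis-aligned boxes and a hypothetical 0.5 coverage threshold; it is not the official evaluation protocol:

```python
def overlap_ratio(det, region):
    """Fraction of the detection's area covered by a region.
    Boxes are (x_min, y_min, x_max, y_max)."""
    iw = max(0, min(det[2], region[2]) - max(det[0], region[0]))
    ih = max(0, min(det[3], region[3]) - max(det[1], region[1]))
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return (iw * ih) / det_area if det_area else 0.0

def drop_dont_care(detections, dont_care, thresh=0.5):
    """Keep only detections not mostly inside any Don't-care region."""
    return [d for d in detections
            if all(overlap_ratio(d, r) < thresh for r in dont_care)]
```

The surviving detections are then matched against the readable ground-truth boxes as usual.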
We also note that the GTs are still not perfect. As you can imagine, improving the ground truths takes a great deal of effort. Though we will do our best, please understand this limitation. We are continuing to improve the GTs.
Anyway, enjoy the DOST dataset!
Iwamura, M., Matsuda, T., Morimoto, N., Sato, H., Ikeda, Y., Kise, K.: Downtown Osaka Scene Text Dataset. ECCV 2016 International Workshop on Robust Reading, pp.
Schedule
April 15: Initial training data available
May 27: More training data available
June 10: Test data available / Submissions open
June 30: Submission of results deadline
November 10-15: Results presentation