Overview - Occluded RoadText
Despite rapid advances in reading systems, robust scene text detection and recognition in the presence of occlusions remains a significant challenge. Solving it is of practical importance for interpreting real-world environments and has substantial impact across a range of real-world applications.
Scene text often appears in unstructured environments with varying degrees of occlusion, making its detection and recognition a complex task. This complexity is amplified in urban landscapes, where text on signs, billboards, and storefronts is frequently partially obscured by objects, lighting conditions, or viewing angles. In Occluded RoadText, we focus on scene text seen while driving. As shown in Figure 1, occlusions can arise from many sources, including natural elements, urban infrastructure, dynamic crowd movement, and more, each presenting unique challenges in terms of shape, size, texture, and position relative to the text. The dataset introduced in this challenge is the first of its kind for occluded scene text detection and recognition, with only natural objects as occlusions. It requires the development of methods capable of making predictions from partially visible text. Successful methods should leverage other visible text and objects within the image that can provide contextual cues, such as external knowledge, text style, and layout, to properly detect and recognise the occluded text instances.
For comprehensive details on accessing the dataset, formatting your submission, and the evaluation parameters, please refer to the Tasks page.
15 January 2024: Competition announced
19 February 2024: Validation data released
5 March 2024: Test data released
10 March 2024: Submission site opens
31 March 2024: Deadline for competition submissions
All deadlines are in the Anywhere on Earth (AoE) time zone.