Robust Reading Competition
MLT 2019
Task 1 - Text Localization - Method: Drew
Per sample details
Samples 21-40 (one page of 488).
Results (IoU - Global):

Sample   Average Precision   Recall     Precision   Hmean
21             0.00%          0.00%       0.00%      0.00%
22            50.16%         55.56%      62.50%     58.82%
23            66.67%         66.67%     100.00%     80.00%
24            91.67%        100.00%      75.00%     85.71%
25            65.74%         66.67%      88.89%     76.19%
26            55.56%         55.56%     100.00%     71.43%
27            60.45%         64.71%      78.57%     70.97%
28            85.82%         91.67%      61.11%     73.33%
29            51.48%         55.56%      83.33%     66.67%
30            28.47%         50.00%      46.15%     48.00%
31           100.00%        100.00%     100.00%    100.00%
32           100.00%        100.00%     100.00%    100.00%
33            88.89%         88.89%     100.00%     94.12%
34            85.71%         85.71%     100.00%     92.31%
35           100.00%        100.00%      75.00%     85.71%
36            55.02%         58.33%      90.32%     70.89%
37            50.00%         50.00%      33.33%     40.00%
38           100.00%        100.00%     100.00%    100.00%
39           100.00%        100.00%     100.00%    100.00%
40            40.00%         40.00%     100.00%     57.14%
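The Hmean column is the harmonic mean of precision and recall (the F1 score), the standard summary metric in Robust Reading Competition localization tasks. A minimal sketch of the computation, assuming the table's percentages are the inputs (the function name `hmean` is illustrative, not part of the RRC evaluation code):

```python
def hmean(precision: float, recall: float) -> float:
    """Harmonic mean (F1 / Hmean) of precision and recall, in percent."""
    if precision + recall == 0:
        # Convention when no detections match: Hmean is 0 (see sample 21).
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Sample 24 from the table: precision 75.00%, recall 100.00%
print(round(hmean(75.0, 100.0), 2))  # 85.71, matching the Hmean column
```

Note that Hmean always lies between the smaller of the two values and their arithmetic mean, so a high precision cannot compensate for a low recall (sample 37: 33.33% precision pulls Hmean down to 40.00%).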