MLT 2019
Task 1 - Text Localization - Method: MEAST_V3_23_Oct
Per sample details
The per-sample results are paginated (488 pages in the full listing); samples 1-20 are shown below.
Results (IoU - Global)

Sample   Average Precision   Recall    Precision   Hmean
1        0.00%               0.00%     0.00%       0.00%
2        7.47%               33.33%    25.00%      28.57%
3        66.67%              66.67%    100.00%     80.00%
4        100.00%             100.00%   75.00%      85.71%
5        56.10%              58.33%    87.50%      70.00%
6        16.67%              22.22%    50.00%      30.77%
7        15.13%              23.53%    57.14%      33.33%
8        55.35%              75.00%    69.23%      72.00%
9        61.01%              66.67%    85.71%      75.00%
10       9.13%               25.00%    37.50%      30.00%
11       0.00%               0.00%     0.00%       0.00%
12       75.00%              75.00%    75.00%      75.00%
13       46.85%              66.67%    42.86%      52.17%
14       57.14%              57.14%    100.00%     72.73%
15       80.56%              100.00%   60.00%      75.00%
16       45.34%              60.42%    76.32%      67.44%
17       50.00%              50.00%    50.00%      50.00%
18       0.00%               0.00%     0.00%       0.00%
19       20.83%              25.00%    66.67%      36.36%
20       33.33%              40.00%    18.18%      25.00%
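The Hmean column is consistent with the standard harmonic mean (F-measure) of the per-sample Precision and Recall values. The following is a minimal Python sketch assuming that relationship; the sample rows are copied from the table above, and the hmean helper is illustrative only, not part of the competition's own evaluation code.

```python
def hmean(precision: float, recall: float) -> float:
    """Harmonic mean (F-measure) of precision and recall; 0 when both are 0."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# (sample, recall, precision, reported Hmean) copied from the table above.
rows = [
    (2, 0.3333, 0.2500, 0.2857),
    (4, 1.0000, 0.7500, 0.8571),
    (20, 0.4000, 0.1818, 0.2500),
]

for sample, recall, precision, reported in rows:
    computed = hmean(precision, recall)
    print(f"sample {sample:2d}: computed {computed:.4f}, reported {reported:.4f}")
```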