Robust Reading Competition
MLT 2017
Task 1 - Text Localization
Method: NCU_FPN
Per-sample details (samples 1–20 shown; full results span 450 pages)
Results (IoU - Global):

| Sample | Average Precision | Recall | Precision | Hmean |
|-------:|------------------:|-------:|----------:|------:|
| 1 | 59.17% | 83.33% | 83.33% | 83.33% |
| 2 | 46.24% | 70.83% | 65.38% | 68.00% |
| 3 | 50.00% | 50.00% | 50.00% | 50.00% |
| 4 | 75.00% | 75.00% | 100.00% | 85.71% |
| 5 | 83.33% | 83.33% | 100.00% | 90.91% |
| 6 | 17.78% | 66.67% | 33.33% | 44.44% |
| 7 | 64.29% | 100.00% | 57.14% | 72.73% |
| 8 | 100.00% | 100.00% | 100.00% | 100.00% |
| 9 | 65.49% | 81.16% | 82.35% | 81.75% |
| 10 | 0.00% | 0.00% | 0.00% | 0.00% |
| 11 | 32.86% | 66.67% | 57.14% | 61.54% |
| 12 | 61.27% | 90.00% | 75.00% | 81.82% |
| 13 | 51.31% | 60.00% | 69.23% | 64.29% |
| 14 | 29.90% | 41.18% | 46.67% | 43.75% |
| 15 | 100.00% | 100.00% | 100.00% | 100.00% |
| 16 | 76.36% | 85.71% | 85.71% | 85.71% |
| 17 | 100.00% | 100.00% | 100.00% | 100.00% |
| 18 | 100.00% | 100.00% | 100.00% | 100.00% |
| 19 | 59.17% | 83.33% | 83.33% | 83.33% |
| 20 | 100.00% | 100.00% | 100.00% | 100.00% |
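The Hmean column is the harmonic mean (F-measure) of the Recall and Precision columns, which is why it sits between the two and drops sharply when either is low. A minimal sketch of the relation (the `hmean` function name is my own, not from the competition's evaluation toolkit):

```python
def hmean(recall, precision):
    # Harmonic mean of recall and precision; defined as 0 when both are 0.
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

# Sample 2 from the table: Recall 70.83%, Precision 65.38% -> Hmean 68.00%
print(round(hmean(70.83, 65.38), 2))
```

Running this against any row of the table reproduces its Hmean value to two decimal places.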
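The per-sample scores above are computed under IoU-based matching: a detection counts as correct when its overlap with a ground-truth region exceeds a threshold. As a simplified illustration only, here is intersection-over-union for axis-aligned boxes; the actual MLT evaluation matches quadrilateral regions, so treat this as a sketch of the metric, not the competition's implementation:

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two boxes sharing half their area each: intersection 50, union 150 -> 1/3
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

A detection whose IoU with the ground truth clears the threshold contributes to both the Recall and Precision columns; unmatched detections lower Precision, unmatched ground truths lower Recall.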