Robust Reading Competition (R.R.C.)
MLT 2017
Task 3 - Joint text detection and script identification
Method: SCUT-DLVClab2
Samples
Per sample details (page 1 of 450; samples 1–20 shown below)
| Sample | Average Precision | Hmean | Precision | Recall |
|-------:|------------------:|--------:|----------:|--------:|
| 1 | 83.33% | 90.91% | 100.00% | 83.33% |
| 2 | 29.18% | 47.06% | 80.00% | 33.33% |
| 3 | 0.00% | 0.00% | 0.00% | 0.00% |
| 4 | 0.00% | 0.00% | 0.00% | 0.00% |
| 5 | 31.94% | 60.00% | 75.00% | 50.00% |
| 6 | 66.67% | 80.00% | 100.00% | 66.67% |
| 7 | 12.50% | 25.00% | 25.00% | 25.00% |
| 8 | 61.98% | 76.19% | 80.00% | 72.73% |
| 9 | 27.83% | 43.01% | 83.33% | 28.99% |
| 10 | 0.00% | 0.00% | 0.00% | 0.00% |
| 11 | 13.89% | 33.33% | 33.33% | 33.33% |
| 12 | 90.00% | 94.74% | 100.00% | 90.00% |
| 13 | 0.48% | 6.67% | 6.67% | 6.67% |
| 14 | 5.88% | 10.53% | 50.00% | 5.88% |
| 15 | 100.00% | 100.00% | 100.00% | 100.00% |
| 16 | 68.38% | 80.00% | 90.91% | 71.43% |
| 17 | 41.67% | 50.00% | 50.00% | 50.00% |
| 18 | 50.00% | 66.67% | 100.00% | 50.00% |
| 19 | 50.00% | 66.67% | 100.00% | 50.00% |
| 20 | 69.05% | 66.67% | 62.50% | 71.43% |
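The Hmean column is consistent with the harmonic mean of the Precision and Recall columns (the F-score commonly used in RRC/ICDAR text-detection evaluations). A minimal sketch, assuming that definition, checked against rows 1 and 8 of the table above:

```python
def hmean(precision, recall):
    """Harmonic mean of precision and recall (F-score).

    Inputs are fractions in [0, 1]; returns 0.0 when both are zero,
    matching the all-zero rows in the table.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Sample 1: P = 100.00%, R = 83.33%
print(f"{hmean(1.0, 0.8333):.2%}")    # prints 90.91%
# Sample 8: P = 80.00%, R = 72.73%
print(f"{hmean(0.80, 0.7273):.2%}")   # prints 76.19%
```

Average Precision is a separate, ranking-based metric and is not derivable from the other three columns.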
Download full table (.csv)