method: ssbaseline (2020-09-09)
Authors: Qi Zhu, Chenyu Gao, Peng Wang, Qi Wu
Affiliation: Northwestern Polytechnical University
Email: zephyrzhuqi@gmail.com
Description: We hope this work sets a new baseline for these two OCR-text-related applications and inspires new thinking on multi-modality encoder design.
method: VTA (2019-04-30)
Authors: Fengren Wang, iFLYTEK, frwang@iflytek.com; Jinshui Hu, iFLYTEK, jshu@iflytek.com; Jun Du, USTC, jundu@ustc.edu.cn; Lirong Dai, USTC, lrdai@ustc.edu.cn; Jiajia Wu, iFLYTEK, jjwu@iflytek.com
Description: An encoder-decoder (ED) model for ST-VQA.
1. We use OCR and object detection models to extract text and objects from images.
2. Then we use BERT to encode the extracted text and the QA pairs.
3. Finally, we use a model similar to Bottom-Up and Top-Down Attention [1] to process the image and question inputs and produce the answer (a rough sketch of this pipeline is given below).
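A minimal sketch, assuming precomputed features, of the kind of question-guided (Bottom-Up/Top-Down style) fusion this description outlines. The module names, feature dimensions (2048-d detector regions, 768-d BERT embeddings), and the fixed answer vocabulary are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the VTA-style pipeline, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAttentionVQA(nn.Module):
    def __init__(self, q_dim=768, region_dim=2048, ocr_dim=768,
                 hidden=1024, num_answers=3000):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, hidden)   # detector region features
        self.ocr_proj = nn.Linear(ocr_dim, hidden)         # BERT embeddings of OCR tokens
        self.q_proj = nn.Linear(q_dim, hidden)             # BERT [CLS] question embedding
        self.att = nn.Linear(hidden, 1)                    # question-guided attention scores
        self.classifier = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers))                # fixed answer vocabulary (assumed)

    def attend(self, feats, q):
        # Score each feature against the question and pool with soft attention.
        joint = torch.tanh(feats + q.unsqueeze(1))         # (B, N, H)
        weights = F.softmax(self.att(joint), dim=1)        # (B, N, 1)
        return (weights * feats).sum(dim=1)                # (B, H)

    def forward(self, region_feats, ocr_feats, q_emb):
        # region_feats: (B, R, 2048), ocr_feats: (B, T, 768), q_emb: (B, 768)
        q = self.q_proj(q_emb)
        v = self.attend(self.region_proj(region_feats), q)
        t = self.attend(self.ocr_proj(ocr_feats), q)
        fused = q * (v + t)                                 # element-wise fusion
        return self.classifier(fused)                       # answer logits

# Toy usage with random tensors in place of real OCR/detector/BERT outputs.
model = TopDownAttentionVQA()
logits = model(torch.randn(2, 36, 2048), torch.randn(2, 10, 768), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 3000])
```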
method: USTB-TQA (2019-04-29)
Authors: USTB-PRIR (Zan-Xia Jin, Heran Wu, Lu Zhang, Bei Yin, Jingyan Qin, Xu-Cheng Yin)
Description: This is an NLP-QA based method for ST-VQA. In general, VQA models include only shallow NLP processing and cannot fully understand the semantic information of the question. In our model, we treat ST-VQA entirely as a QA task in NLP. First, we employ pre-trained OCR (Optical Character Recognition) and OD (Object Detection) models to obtain the text and object information of the ST-VQA datasets. Second, the OCR and OD results are used as the input of our method: each goes through a sub-network consisting of RNN layers and attention layers, with the two sub-networks sharing the same parameters. Then we apply attention from the OD representation to the OCR representation. Finally, we predict the answer from a high-level question representation and the final OCR representations.
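A minimal sketch of this NLP-QA formulation, under the assumption that the answer is scored per OCR token against a high-level question vector. The shared RNN + attention sub-network and the OD-to-OCR attention follow the description above, while all module names and dimensions are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the USTB-TQA-style model, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """RNN + attention sub-network shared by the OCR and OD streams."""
    def __init__(self, emb_dim=300, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (B, N, emb_dim)
        h, _ = self.rnn(x)                      # (B, N, 2H) per-token states
        w = F.softmax(self.att(h), dim=1)       # (B, N, 1) attention weights
        return h, (w * h).sum(dim=1)            # per-token and pooled representations

class USTBStyleQA(nn.Module):
    def __init__(self, emb_dim=300, hidden=256):
        super().__init__()
        self.encoder = SharedEncoder(emb_dim, hidden)    # same parameters for OCR and OD
        self.q_encoder = SharedEncoder(emb_dim, hidden)  # question encoder
        self.cross = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.score = nn.Bilinear(2 * hidden, 2 * hidden, 1)

    def forward(self, ocr_emb, od_emb, q_emb):
        ocr_h, _ = self.encoder(ocr_emb)        # OCR token states
        od_h, _ = self.encoder(od_emb)          # OD label states (shared weights)
        _, q_vec = self.q_encoder(q_emb)        # high-level question vector
        # Route OD information into each OCR token (attention from OD to OCR).
        ocr_ctx, _ = self.cross(query=ocr_h, key=od_h, value=od_h)
        ocr_final = ocr_h + ocr_ctx             # final OCR representations
        # Score each OCR token as the answer against the question vector.
        q_exp = q_vec.unsqueeze(1).expand_as(ocr_final)
        return self.score(ocr_final, q_exp).squeeze(-1)  # (B, num_ocr_tokens)

# Toy usage with random word embeddings for OCR tokens, OD labels, and the question.
model = USTBStyleQA()
logits = model(torch.randn(2, 12, 300), torch.randn(2, 8, 300), torch.randn(2, 9, 300))
print(logits.shape)  # torch.Size([2, 12])
```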
Date | Method | Score
---|---|---
2020-09-09 | ssbaseline | 0.5513
2019-04-30 | VTA | 0.2793
2019-04-29 | USTB-TQA | 0.1730
2019-04-29 | USTB-TVQA | 0.0929
2019-04-29 | Focus: A bottom-up approach for Scene Text VQA | 0.0800