method: ssbaseline (2020-09-09)

Authors: Qi Zhu, Chenyu Gao, Peng Wang, Qi Wu

Affiliation: Northwestern Polytechnical University


Description: We hope this work sets a new baseline for these two OCR-text-related applications and inspires new thinking on multi-modality encoder design.

method: VTA (2019-04-30)

Authors: Fengren Wang (iFLYTEK); Jinshui Hu (iFLYTEK); Jun Du (USTC); Lirong Dai (USTC); Jiajia Wu (iFLYTEK)

Description: An encoder-decoder (ED) model for ST-VQA.
1. We use OCR and object detection models to extract text and objects from images.
2. Then we use BERT to encode the extracted text and QA pairs.
3. Finally, we use a model similar to Bottom-Up and Top-Down attention [1] to process the image and question inputs and produce the answer.
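The steps above can be sketched as a minimal top-down attention pass over bottom-up region features. This is an illustrative assumption, not the authors' implementation: the function names, the bilinear scoring matrix `W`, and the feature dimensions (36 regions x 2048, a 768-d question vector as a stand-in for a BERT embedding) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def top_down_attention(region_feats, question_vec, W):
    # Score each detected region against the question (the top-down signal),
    # then pool the bottom-up region features with the resulting weights.
    scores = region_feats @ W @ question_vec   # (num_regions,)
    alpha = softmax(scores)                    # attention weights, sum to 1
    return alpha @ region_feats                # attended image feature

num_regions, img_dim, q_dim = 36, 2048, 768
regions = rng.normal(size=(num_regions, img_dim))  # bottom-up: detector region features
question = rng.normal(size=q_dim)                  # e.g. a BERT sentence embedding
W = rng.normal(size=(img_dim, q_dim)) * 0.01       # hypothetical bilinear scorer

fused = top_down_attention(regions, question, W)
print(fused.shape)  # (2048,)
```

In the full model this fused feature would be combined with the question encoding and fed to an answer decoder; here it only shows how the question steers attention over detected regions.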

method: USTB-TQA (2019-04-29)

Authors: USTB-PRIR (Zan-Xia Jin, Heran Wu, Lu Zhang, Bei Yin, Jingyan Qin, Xu-Cheng Yin)

Description: This is an NLP-QA based method for ST-VQA. VQA models generally include only shallow NLP processing, so they cannot fully understand the semantic information of the question. In our model, we treat ST-VQA entirely as an NLP QA task. First, we employ pre-trained OCR (Optical Character Recognition) and OD (Object Detection) models to obtain the text information of the ST-VQA datasets. Second, the OCR and OD results are used as the input of our method: each goes through its own sub-network of RNN layers and attention layers, while sharing the same parameters. Then we apply attention from the OD representation to the OCR representation. Finally, we predict the answer from a high-level question representation and the final OCR representations.
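The OD-to-OCR attention step and the final answer selection can be sketched as follows. This is a minimal illustration under assumed shapes and names: the scaled dot-product scorer, the mean-pooled fusion, and the token vocabulary are hypothetical stand-ins for the paper's RNN/attention sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def od_to_ocr_attention(od_repr, ocr_repr):
    # Attention from object-detection states (queries) to OCR token states
    # (keys/values): each detected object gathers a summary of the OCR text.
    scores = od_repr @ ocr_repr.T / np.sqrt(od_repr.shape[-1])
    alpha = softmax(scores, axis=-1)   # (num_objects, num_ocr_tokens)
    return alpha @ ocr_repr            # OCR context per object

def predict_answer(question_vec, ocr_final, ocr_tokens):
    # Score each OCR token against the question representation;
    # the answer is the highest-scoring OCR token.
    scores = ocr_final @ question_vec
    return ocr_tokens[int(np.argmax(scores))]

dim = 64
ocr_tokens = ["stop", "coca", "cola"]                # hypothetical OCR output
ocr_repr = rng.normal(size=(len(ocr_tokens), dim))   # OCR sub-network states
od_repr = rng.normal(size=(5, dim))                  # OD sub-network states
question_vec = rng.normal(size=dim)                  # high-level question repr.

context = od_to_ocr_attention(od_repr, ocr_repr)     # (5, dim)
ocr_final = ocr_repr + context.mean(axis=0)          # fuse OD context into OCR states
answer = predict_answer(question_vec, ocr_final, ocr_tokens)
```

The design choice this illustrates: the answer space is the set of OCR tokens read from the image, so prediction reduces to ranking OCR representations against the question.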

Ranking Table

Date         Method                                           Score
2019-04-29   Focus: A bottom-up approach for Scene Text VQA   0.0800

Ranking Graphic