method: Super_KVer (2023-03-16)

Authors: Lele Xie, Zuming Huang, Boqian Xia, Yu Wang, Yadong Li, Hongbin Wang, Jingdong Chen

Affiliation: Ant Group


Description: An ensemble of discriminative and generative models. The former is a multimodal method that uses text, layout, and image features; we train this model with two different sequence lengths, 2048 and 512. The texts and boxes are produced by independent OCR models. The latter is an end-to-end model that directly generates K-V pairs from an input image.
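The description does not specify how the discriminative and generative predictions are combined; a minimal sketch, assuming a simple majority vote over the (key, value) pairs proposed by each model (the function name, interface, and voting rule are all illustrative assumptions):

```python
from collections import Counter

def ensemble_kv_pairs(predictions):
    """Merge K-V pair predictions from several models by majority vote.

    `predictions` is a list of sets of (key, value) tuples, one set per
    model. A pair is kept if more than half of the models propose it.
    """
    votes = Counter(pair for pred in predictions for pair in pred)
    threshold = len(predictions) / 2
    return {pair for pair, n in votes.items() if n > threshold}

# Toy predictions from the three ensemble members described above:
disc_2048 = {("Name", "Alice"), ("Date", "2023-03-16")}   # discriminative, seq len 2048
disc_512 = {("Name", "Alice"), ("Total", "42")}           # discriminative, seq len 512
generative = {("Name", "Alice"), ("Date", "2023-03-16")}  # end-to-end generative

print(ensemble_kv_pairs([disc_2048, disc_512, generative]))
```

Here only pairs proposed by at least two of the three models survive; a real system would likely also reconcile near-duplicate OCR strings before voting.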

Authors: Huiyan Wu, Pengfei Li, Can Li, Liang Qiao

Affiliation: Davar-Lab

Description: Our method realizes end-to-end information extraction (single-model) through OCR, NER, and RE. Text extracted by OCR and image features are jointly fed to the NER module to identify key and value entities; the RE module then extracts entity-pair relationships via multi-class classification. Both NER and RE are based on LayoutLMv3, and our training dataset is Hust-Cell.
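The OCR → NER → RE pipeline above can be sketched with toy stand-ins for the two learned modules (the heuristics below are placeholders for LayoutLMv3-based models; labels and the pairing rule are illustrative assumptions, not the authors' implementation):

```python
def ner(tokens):
    """Toy NER stand-in: label each OCR token as KEY, VALUE, or OTHER."""
    labels = []
    for t in tokens:
        if t.endswith(":"):
            labels.append("KEY")
        elif t.isdigit() or t[0].isupper():
            labels.append("VALUE")
        else:
            labels.append("OTHER")
    return labels

def re_link(tokens, labels):
    """Toy RE stand-in: where the real module classifies every
    (key, value) entity pair, this sketch simply links each value
    to the nearest preceding key."""
    pairs, current_key = [], None
    for tok, lab in zip(tokens, labels):
        if lab == "KEY":
            current_key = tok.rstrip(":")
        elif lab == "VALUE" and current_key is not None:
            pairs.append((current_key, tok))
    return pairs

tokens = ["Name:", "Alice", "Age:", "30"]  # toy OCR output
labels = ner(tokens)
print(re_link(tokens, labels))  # → [('Name', 'Alice'), ('Age', '30')]
```

The key design point carried over from the description is the two-stage split: entity identification first, relation classification second, so the RE module only reasons over candidate entity pairs rather than raw tokens.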

method: sample-3 (2023-03-16)

Authors: Zhenrong Zhang, Lei Jiang, Youhui Guo, Jianshu Zhang, Jun Du

Affiliation: University of Science and Technology of China (USTC), iFLYTEK AI Research


Description: 1. A table cell detection model [1] splits images into table and non-table regions.
2. We perform key-value-background classification for each OCR bounding box using GraphDoc [2].
3. For table regions, we merge OCR boxes into table cells and then find the left and top keys for each value cell according to manual rules.
4. For non-table regions (including plain text outside table cells in table images), we directly use an MLP to predict all keys for each value box.
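Step 3's manual rule can be sketched as follows, assuming axis-aligned boxes in (x1, y1, x2, y2) form: for each value cell, pick the nearest key cell that ends to its left with vertical overlap, and the nearest key cell that ends above it with horizontal overlap (the overlap test and tie-breaking are assumptions; the authors do not publish their exact rules):

```python
def find_keys(value_box, key_boxes):
    """Return the indices of the left key and top key for a value cell.

    Boxes are (x1, y1, x2, y2) with y increasing downward. A key
    qualifies as the 'left key' if it ends left of the value cell and
    overlaps it vertically; as the 'top key' if it ends above the value
    cell and overlaps it horizontally. The nearest qualifying key wins.
    """
    vx1, vy1, vx2, vy2 = value_box
    left_key = top_key = None
    best_left = best_top = float("inf")
    for i, (kx1, ky1, kx2, ky2) in enumerate(key_boxes):
        # Left-key candidate: ends left of the value, vertical overlap.
        if kx2 <= vx1 and ky1 < vy2 and ky2 > vy1 and vx1 - kx2 < best_left:
            best_left, left_key = vx1 - kx2, i
        # Top-key candidate: ends above the value, horizontal overlap.
        if ky2 <= vy1 and kx1 < vx2 and kx2 > vx1 and vy1 - ky2 < best_top:
            best_top, top_key = vy1 - ky2, i
    return left_key, top_key

# A 2x2 toy grid: row-header key, column-header key, and one value cell.
keys = [(0, 0, 50, 20), (0, 30, 50, 50), (60, 0, 110, 20)]
value = (60, 30, 110, 50)
print(find_keys(value, keys))  # → (1, 2): left key is index 1, top key is index 2
```

This mirrors the common table-reading convention that a cell's meaning is given jointly by its row header (left key) and column header (top key).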

Ranking Table

Date | Description | Paper | Source Code | Score 1 | Score 2 | Average
2023-03-15 | End-to-end document relationship extraction (single-model) | - | - | 43.55% | 57.90% | 50.73%
2023-03-14 | Layoutlm relation extraction | - | - | 10.99% | 19.22% | 15.10%

Ranking Graphic