method: Super_KVer (2023-03-16)

Authors: Lele Xie, Zuming Huang, Boqian Xia, Yu Wang, Yadong Li, Hongbin Wang, Jingdong Chen

Affiliation: Ant Group

Email: yule.xll@antgroup.com

Description: An ensemble of discriminative and generative models. The former is a multimodal method that utilizes text, layout, and image; we train this model with two different sequence lengths, 2048 and 512. The texts and boxes are produced by independent OCR models. The latter is an end-to-end method that directly generates K-V pairs from an input image.
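As a rough illustration of the ensembling step, the sketch below merges the K-V pairs proposed by the member models (the two discriminative sequence lengths plus the generative model) by simple voting. The normalization and voting rule are assumptions for illustration only, not the authors' actual fusion strategy.

```python
from collections import Counter

def normalize(text: str) -> str:
    """Light text normalization so pairs from different models can be matched."""
    return " ".join(text.lower().split())

def ensemble_kv_pairs(predictions_per_model, min_votes=2):
    """predictions_per_model: one list of (key, value) string pairs per member
    model (e.g. the 2048-length model, the 512-length model, the generative
    model). Keep any pair proposed by at least `min_votes` members."""
    votes = Counter()
    for pairs in predictions_per_model:
        for pair in {(normalize(k), normalize(v)) for k, v in pairs}:
            votes[pair] += 1
    return [pair for pair, n in votes.items() if n >= min_votes]

# Toy usage: a pair survives if at least two of the three members propose it.
members = [
    [("Invoice No", "12345"), ("Date", "2023-03-16")],
    [("invoice no", "12345"), ("Total", "99.00")],
    [("INVOICE NO", "12345"), ("date", "2023-03-16")],
]
print(ensemble_kv_pairs(members))
# -> pairs proposed by at least two members (order may vary)
```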

method: End-to-end document relationship extraction (single-model) (2023-03-15)

Authors: Huiyan Wu, Pengfei Li, Can Li, Liang Qiao

Affiliation: Davar-Lab

Description: Our method realizes end-to-end information extraction (single model) through OCR, NER, and RE techniques. Text extracted by OCR and image features are jointly fed into the NER module to identify key and value entities, and the RE module then extracts entity-pair relationships through multi-class classification. Both NER and RE are based on LayoutLMv3, and our training dataset is HUST-CELL.
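A minimal sketch of this NER-then-RE pipeline, assuming the HuggingFace LayoutLMv3 token-classification head for entity tagging and a simple pairwise MLP as the relation classifier; the label set, pairing logic, and relation head are illustrative assumptions, not Davar-Lab's implementation.

```python
import torch
import torch.nn as nn
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

NER_LABELS = ["O", "KEY", "VALUE"]  # assumed tag set

# OCR words/boxes are supplied externally, so turn off the processor's own OCR.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
ner_model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=len(NER_LABELS)
)

class PairwiseRE(nn.Module):
    """Relation head: classify a (key, value) embedding pair as linked or not."""
    def __init__(self, hidden=768, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden * 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))
    def forward(self, key_emb, value_emb):
        return self.mlp(torch.cat([key_emb, value_emb], dim=-1))

re_head = PairwiseRE()

def extract_pairs(image, words, boxes):
    """OCR words + boxes (normalized to 0-1000) and the image go through NER, then RE."""
    enc = processor(image, words, boxes=boxes, return_tensors="pt")
    out = ner_model(**enc, output_hidden_states=True)
    tags = out.logits.argmax(-1)[0]            # per-token NER predictions
    hidden = out.hidden_states[-1][0]          # per-token embeddings (text tokens first)
    keys = [i for i, t in enumerate(tags) if NER_LABELS[int(t)] == "KEY"]
    values = [i for i, t in enumerate(tags) if NER_LABELS[int(t)] == "VALUE"]
    links = []
    for k in keys:                             # score every candidate key/value pair
        for v in values:
            if re_head(hidden[k], hidden[v]).argmax(-1).item() == 1:  # 1 = "linked" (assumed)
                links.append((k, v))
    return links
```

In practice both heads would be fine-tuned jointly on the HUST-CELL annotations rather than used with randomly initialized classifiers as above.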

method: sample-3 (2023-03-16)

Authors: Zhenrong Zhang, Lei Jiang, Youhui Guo, Jianshu Zhang, Jun Du

Affiliation: University of Science and Technology of China (USTC), iFLYTEK AI Research

Email: zzr666@mail.ustc.edu.cn

Description: 1. A table cell detection model [1] is first applied to split images into table and non-table regions.
2. We perform key-value-background classification for each OCR bounding box using GraphDoc [2].
3. For the table regions, we merge OCR boxes into table cells and then find the left and top keys for each value cell according to manual rules (a sketch of such rules follows this list).
4. For non-table regions (including plain text outside table cells in table images), we directly use an MLP to predict all keys for each value box.
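Below is a small sketch of the kind of manual rule used in step 3, assuming a value cell is linked to the nearest key cell to its left in the same row and the nearest key cell directly above it in the same column; the geometry helpers and the overlap threshold are illustrative assumptions, not the authors' exact rules.

```python
def overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of the 1-D overlap between two intervals."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def find_keys_for_value(value, key_cells, min_overlap_ratio=0.5):
    """value, key_cells: dicts with an axis-aligned 'box' = (x1, y1, x2, y2).
    Returns the nearest left key (row-aligned) and nearest top key (column-aligned)."""
    vx1, vy1, vx2, vy2 = value["box"]
    left_key, top_key = None, None
    for key in key_cells:
        kx1, ky1, kx2, ky2 = key["box"]
        # Same row, to the left: enough vertical overlap and the key ends before the value starts.
        if overlap(vy1, vy2, ky1, ky2) >= min_overlap_ratio * (vy2 - vy1) and kx2 <= vx1:
            if left_key is None or kx2 > left_key["box"][2]:
                left_key = key
        # Same column, above: enough horizontal overlap and the key ends above the value.
        if overlap(vx1, vx2, kx1, kx2) >= min_overlap_ratio * (vx2 - vx1) and ky2 <= vy1:
            if top_key is None or ky2 > top_key["box"][3]:
                top_key = key
    return left_key, top_key

# Example: a 2x2 grid where "Name" sits directly above the value "Alice".
keys = [{"text": "Name", "box": (0, 0, 100, 30)}, {"text": "Age", "box": (100, 0, 200, 30)}]
value = {"text": "Alice", "box": (0, 30, 100, 60)}
print(find_keys_for_value(value, keys))  # left key: None, top key: the "Name" cell
```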

Ranking Table

Date | Method | Score1 | Score2 | Score
2023-03-16 | Super_KVer | 49.93% | 62.97% | 56.45%
2023-03-15 | End-to-end document relationship extraction (single-model) | 43.55% | 57.90% | 50.73%
2023-03-16 | sample-3 | 42.52% | 56.68% | 49.60%
2023-03-16 | sample-1 | 42.13% | 56.36% | 49.25%
2023-03-16 | Pre-trained model based fullpipe pair extraction (opti_v3, no inf_aug) | 42.17% | 55.63% | 48.90%
2023-03-16 | Pre-trained model based fullpipe pair extraction (opti_v2, no inf_aug) | 42.10% | 55.56% | 48.83%
2023-03-16 | Pre-trained model based fullpipe pair extraction (opti_v2, inf_aug) | 42.01% | 55.50% | 48.76%
2023-03-15 | Pre-trained model based fullpipe pair extraction (opti_v1) | 41.56% | 55.34% | 48.45%
2023-03-16 | Meituan OCR V4 | 41.10% | 54.55% | 47.83%
2023-03-16 | Meituan OCR V3 | 40.67% | 54.17% | 47.42%
2023-03-15 | Meituan OCR V2 | 40.97% | 53.47% | 47.22%
2023-03-16 | submit-trainall | 40.65% | 52.98% | 46.82%
2023-03-16 | submit-ocrkie-8to2 | 40.15% | 52.97% | 46.56%
2023-03-14 | Meituan OCR | 39.85% | 52.46% | 46.15%
2023-03-16 | f2 | 41.07% | 50.82% | 45.94%
2023-03-16 | final | 41.05% | 50.80% | 45.93%
2023-03-16 | submit-8finetune2 | 39.58% | 51.93% | 45.75%
2023-03-15 | new-model | 39.38% | 48.59% | 43.99%
2023-03-15 | 800-fix2 | 37.06% | 46.46% | 41.76%
2023-03-11 | add-pplssm | 36.45% | 43.83% | 40.14%
2023-03-16 | LayoutLM & STrucText Based Method | 33.09% | 45.92% | 39.51%
2023-03-15 | bug-800 | 34.17% | 43.91% | 39.04%
2023-03-16 | Layoutlmv3 | 29.81% | 41.45% | 35.63%
2023-03-15 | old-500-fix1 | 27.64% | 35.52% | 31.58%
2023-03-15 | 数据之关联2 | 23.26% | 35.07% | 29.16%
2023-03-16 | 处理t | 17.34% | 26.92% | 22.13%
2023-03-16 | refinet | 17.11% | 26.60% | 21.86%
2023-03-12 | FirstResult | 16.51% | 26.12% | 21.32%
2023-03-16 | 不处理t的结果 | 16.39% | 25.56% | 20.97%
2023-03-15 | 表格结构分析+layout的结果_0315 | 16.25% | 25.38% | 20.81%
2023-03-15 | 数据之关联 | 16.75% | 24.48% | 20.61%
2023-03-16 | Ant-FinCV | 14.44% | 22.68% | 18.56%
2023-03-16 | Ant-FinCV | 14.32% | 22.70% | 18.51%
2023-03-16 | Ant-FinCV | 14.38% | 22.62% | 18.50%
2023-03-16 | Ant-FinCV | 14.21% | 22.35% | 18.28%
2023-03-16 | Ant-FinCV | 13.79% | 21.75% | 17.77%
2023-03-15 | layoutxlm-relation and ppstructure box level | 12.86% | 21.56% | 17.21%
2023-03-15 | vocr | 11.71% | 19.13% | 15.42%
2023-03-13 | FIne tuned DONUT | 13.06% | 17.15% | 15.11%
2023-03-14 | Layoutlm relation extraction | 10.99% | 19.22% | 15.10%
2023-03-14 | layoutxlm and ppstructure | 11.63% | 18.43% | 15.03%
2023-03-15 | layoutxlm-relation and ppstructure token level | 11.51% | 18.26% | 14.89%
2023-03-14 | vocr | 10.31% | 17.53% | 13.92%
2023-03-16 | Ant-FinCV | 8.96% | 14.84% | 11.90%
2023-03-14 | e2e | 1.77% | 3.44% | 2.60%
2023-03-13 | first commit | 1.22% | 2.33% | 1.78%
2023-03-14 | e2e | 0.55% | 1.01% | 0.78%
2023-03-10 | test | 0.00% | 0.00% | 0.00%
2023-03-11 | test_t1 | 0.00% | 0.00% | 0.00%
2023-03-13 | intime | 0.00% | 0.00% | 0.00%
2023-03-13 | test2 | 0.00% | 0.00% | 0.00%
2023-09-14 | Graph Attention | 0.00% | 0.00% | 0.00%

Ranking Graphic