Authors: Minhui Wu (伍敏慧), Mei Jiang (姜媚), Chen Li (李琛), Jing Lv (吕静), Qingxiang Lin (林庆祥), Fan Yang (杨帆)
Description: Our method is built on the LayoutLMv3 and StrucTextv1 architectures; all models are fine-tuned from the large pretrained LayoutLM and StrucText checkpoints. During training and testing we apply preprocessing to merge and split badly detected boxes. Since the entity labels of kv-pair boxes are ignored, we use a model trained on the task 1 images to predict kv relations for the text boxes in the task 2 training/testing images. We then add two extra label classes (question/answer) and map the original labels onto the new label set (other -> question/answer) to ease training. Likewise, at test time we use the kv-prediction model to filter out text boxes that have kv relations, and use the model trained on task 2 to predict entity labels for the remaining boxes. Finally, we combine the predictions of the different models based on scores and rules, and apply postprocessing that merges texts with the same entity label to generate the final output.
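The label remapping and two-stage inference described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the box representation, and the toy models are all hypothetical, and the real pipeline would call the fine-tuned LayoutLMv3/StrucText models instead of the lambdas used here.

```python
# Hypothetical sketch of the described pipeline; names and data layout are
# illustrative, not taken from the authors' implementation.

KV_LABELS = {"question", "answer"}

def remap_label(original_label, kv_prediction):
    """Training-time remapping: boxes the kv-pair model tags as
    question/answer take that tag (other -> question/answer); all
    remaining boxes keep their original entity label."""
    if kv_prediction in KV_LABELS:
        return kv_prediction
    return original_label

def two_stage_inference(boxes, kv_model, entity_model):
    """Test-time flow: stage 1 predicts kv relations and sets aside
    question/answer boxes; stage 2 runs the task-2 entity model only
    on the boxes that were not filtered out."""
    results = {}
    remaining = []
    for box in boxes:
        kv = kv_model(box)
        if kv in KV_LABELS:
            results[box["id"]] = kv       # filtered by the kv model
        else:
            remaining.append(box)         # passed on to stage 2
    for box in remaining:
        results[box["id"]] = entity_model(box)
    return results

# Toy stand-ins for the two fine-tuned models.
kv_model = lambda b: {"b1": "question", "b2": "answer"}.get(b["id"], "other")
entity_model = lambda b: "company"

boxes = [{"id": "b1"}, {"id": "b2"}, {"id": "b3"}]
print(remap_label("other", "question"))
print(two_stage_inference(boxes, kv_model, entity_model))
```

The point of the remapping is that the kv-prediction model, trained on task 1, supplies supervision (question/answer) that the task 2 annotations leave as "other", which narrows what the task 2 entity model has to learn.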