method: BERT with Multi-task Confidence Prediction

Date: 2019-04-30

Authors: T.J. Torres, Chang Liu, Homa Foroughi, Tharathorn Joy Rimchala

Description: Our method uses a BERT-Base encoder that is first pre-trained to extract four fields from US-based receipts. We then adapt that model to the new fields in the competition and fine-tune it via token-wise NER tagging through a linear decoder head. In a multi-task setting, a second linear decoder head takes the input text embedding and predicts the binary outcome of whether the resulting extraction is correct. The final results are post-processed to recover missing tokens by matching the extracted outputs back to the original text and applying heuristics based on line completion.
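
The description does not include an implementation, so the following is a minimal sketch of the multi-task architecture, assuming PyTorch and Hugging Face Transformers. The class name, head names, and the two-class confidence output are our assumptions; only the shared BERT encoder, the token-wise NER head, and the binary correctness head come from the description above.

```python
# Sketch of the multi-task model: a shared BERT encoder with a token-wise
# NER head and a binary confidence head. Names and dimensions are assumed.
import torch.nn as nn
from transformers import BertModel

class BertNerWithConfidence(nn.Module):
    def __init__(self, num_ner_labels: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Head 1: token-wise NER tagging via a linear decoder.
        self.ner_head = nn.Linear(hidden, num_ner_labels)
        # Head 2: binary prediction of whether the extraction is correct,
        # computed from the pooled input-text embedding.
        self.confidence_head = nn.Linear(hidden, 2)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        ner_logits = self.ner_head(outputs.last_hidden_state)   # (batch, seq, labels)
        conf_logits = self.confidence_head(outputs.pooler_output)  # (batch, 2)
        return ner_logits, conf_logits
```

In a multi-task setup like this, training would typically sum a cross-entropy loss over the NER logits with a (possibly weighted) cross-entropy loss over the confidence logits; the description does not specify the loss weighting.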
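
The line-completion heuristic in the post-processing step could look like the sketch below: an extracted string is matched back into the raw receipt text and, when found, expanded to the full line that contains it. The exact heuristics are not specified in the description, so this rule and the function name are illustrative assumptions.

```python
# Assumed post-processing heuristic: expand a partial extraction to the
# complete receipt line it was matched on.
def expand_to_full_line(extracted: str, raw_text: str) -> str:
    idx = raw_text.find(extracted)
    if idx == -1:
        return extracted  # no match in the source text; keep as-is
    # Expand the match to the boundaries of the line containing it.
    start = raw_text.rfind("\n", 0, idx) + 1
    end = raw_text.find("\n", idx)
    if end == -1:
        end = len(raw_text)
    return raw_text[start:end].strip()
```

For example, with this rule a truncated extraction such as "Starbucks Cof" matched against the line "Starbucks Coffee #1234" would be expanded to the full line, recovering the tokens the tagger missed.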