method: TEKLIA Kaldi + Flair (2021-02-22)

Authors: Marie-Laurence Bonhomme, Christopher Kermorvant

Affiliation: Teklia

Description: We used existing libraries (Kaldi for the HTR and a number of NER tools; here we submit the results obtained with Flair) on the IEHHR 2017 task (basic track), in order to compare the impact of HTR errors on NER results with what has been observed on other datasets.
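The submission does not include code, but as a minimal sketch of the second stage, a pretrained Flair sequence tagger can be applied to the text produced by the HTR step as shown below. The model name and the example line are illustrative placeholders, not the task-specific models used by the authors.

```python
# Minimal sketch: run a pretrained Flair NER tagger over HTR output text.
# The model id "flair/ner-english" and the example line are placeholders;
# the authors trained taggers specific to the IEHHR records.
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pretrained sequence tagger (placeholder model).
tagger = SequenceTagger.load("flair/ner-english")

# Text produced by the Kaldi HTR stage for one record line (placeholder).
htr_output = "Joan Fuster pages de Vilanova fill de Pere Fuster"
sentence = Sentence(htr_output)

# Predict entity spans and print them.
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span.text, span.tag)
```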

method: InstaDeep-CRNS (2021-01-21)

Authors: Ahmed Cheikh Rouhou, Marwa Dhiaf, Yousri Kessentini and Sinda Ben Salem

Affiliation: InstaDeep, CRNS

Email: a.cheikhrouhou@instadeep.com

Description: Joint text and named entity recognition on paragraph-level images (records) using a Transformer.
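The description is architectural only. As a hedged sketch (not the authors' implementation), joint text and entity recognition can be framed as an encoder-decoder Transformer whose output vocabulary mixes characters with entity tag tokens; the vocabulary, feature dimensions and layer counts below are assumptions.

```python
# Hedged sketch of a joint HTR + NER Transformer: visual features of a
# paragraph image are encoded, and the decoder emits one sequence that
# interleaves characters with entity tag tokens (e.g. "<name>", "</name>").
# Vocabulary, feature sizes and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

CHARS = list("abcdefghijklmnopqrstuvwxyz '")
TAGS = ["<name>", "</name>", "<occupation>", "</occupation>"]
VOCAB = ["<pad>", "<bos>", "<eos>"] + CHARS + TAGS

class JointHTRNER(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # Project precomputed CNN feature columns to the model dimension.
        self.src_proj = nn.Linear(512, d_model)
        self.tgt_embed = nn.Embedding(len(VOCAB), d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, len(VOCAB))

    def forward(self, visual_feats, tgt_tokens):
        # visual_feats: (batch, src_len, 512) image features for one record.
        # tgt_tokens:   (batch, tgt_len) mixed character/tag token ids.
        src = self.src_proj(visual_feats)
        tgt = self.tgt_embed(tgt_tokens)
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(dec)  # trained with cross-entropy over VOCAB

model = JointHTRNER()
logits = model(torch.randn(1, 100, 512), torch.randint(0, len(VOCAB), (1, 20)))
```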

method: Naver Labs (2018-06-25)

Authors: Animesh Prasad, Hervé Déjean, Jean-Luc Meunier, Max Weidemann, Johannes Michael, Gundram Leifert

Description: For this task we use a pipeline approach: the line image is first preprocessed and then passed through a CNN-BLSTM architecture trained with CTC loss (i.e. the HTR stage). In the next step, a BLSTM over a feature layer (computed as all character n-grams of the tokens obtained by best-effort decoding of the HTR output) is trained with cross-entropy loss to maximize tagging accuracy.
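As a hedged illustration of the first stage only (not the submission's exact architecture), a CNN feature extractor followed by a bidirectional LSTM trained with CTC loss can be sketched in PyTorch as follows; layer sizes, the character set and the input geometry are assumptions.

```python
# Hedged sketch of a CNN-BLSTM line recognizer trained with CTC loss.
# Layer sizes, character set size and input geometry are illustrative
# assumptions, not the submission's actual configuration.
import torch
import torch.nn as nn

NUM_CLASSES = 80  # assumed character set size + 1 for the CTC blank

class CRNN(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Convolutional feature extractor over the grayscale line image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        # Bidirectional LSTM over the horizontal feature sequence.
        self.blstm = nn.LSTM(128 * 8, 256, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_classes)

    def forward(self, x):
        # x: (batch, 1, 32, width) normalized line image, height fixed at 32.
        f = self.cnn(x)                       # (batch, 128, 8, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, width/4, 128*8)
        out, _ = self.blstm(f)
        return self.fc(out).log_softmax(2)    # per-frame class log-probs

model = CRNN()
log_probs = model(torch.randn(2, 1, 32, 256)).permute(1, 0, 2)  # (T, N, C) for CTC
targets = torch.randint(1, NUM_CLASSES, (2, 20))
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((2,), log_probs.size(0), dtype=torch.long),
    target_lengths=torch.full((2,), 20, dtype=torch.long),
)
```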

Ranking Table

| Date | Method | Basic Score | Complete Score | Name | Surname | Location | Occupation | State | Input Type |
|------|--------|-------------|----------------|------|---------|----------|------------|-------|------------|
| 2021-02-22 | TEKLIA Kaldi + Flair | 96.96% | 0.00% | 97.70% | 95.18% | 96.26% | 97.53% | 98.22% | LINE |
| 2021-01-21 | InstaDeep-CRNS | 96.25% | 95.54% | 96.58% | 94.60% | 95.81% | 95.92% | 98.19% | REGISTER |
| 2018-06-25 | Naver Labs | 95.46% | 95.03% | 97.01% | 92.73% | 95.03% | 96.43% | 96.41% | LINE |
| 2017-07-09 | CITlab ARGUS (with OOV) | 91.94% | 91.58% | 95.14% | 85.78% | 88.43% | 93.08% | 97.54% | LINE |
| 2017-07-10 | CITlab ARGUS (with OOV, net2) | 91.63% | 91.19% | 95.09% | 85.84% | 87.32% | 92.96% | 97.19% | LINE |
| 2018-10-27 | Joint HTR + NER no postprocessing | 90.59% | 89.40% | 89.94% | 84.07% | 90.71% | 92.10% | 96.59% | LINE |
| 2017-07-09 | CITlab ARGUS (without OOV) | 89.54% | 89.17% | 94.37% | 76.54% | 87.65% | 92.66% | 97.43% | LINE |
| 2017-07-01 | Baseline HMM | 80.28% | 63.11% | 81.06% | 60.15% | 78.90% | 90.23% | 93.79% | LINE |

Ranking Graphic