Conference paper
Active learning for BERT: An empirical study
Liat Ein-Dor, Alon Halfon, et al.
EMNLP 2020
We describe a large, high-quality benchmark for the evaluation of Mention Detection tools. The benchmark contains annotations of both named entities and other entity types, covering a range of text genres, from clean Wikipedia text to noisy transcripts of spoken data. It was built through a tightly controlled crowdsourcing process to ensure its quality. We describe the benchmark, the annotation process, and the guidelines used to build it, then report the results of a state-of-the-art system on the benchmark.

Igor Melnyk, Youssef Mroueh, et al.
NeurIPS 2024

Ming Tan, Yang Yu, et al.
EMNLP-IJCNLP 2019

Arafat Sultan, Shubham Chandel, et al.
ACL 2020