SEMAPRO 2020, The Fourteenth International Conference on Advances in Semantic Processing
Large Scale Legal Text Classification Using Transformer Models
Authors:
Zein Shaheen
Gerhard Wohlgenannt
Erwin Filtz
Keywords: multi-label text classification; legal document datasets; transformer models; EuroVoc
Abstract:
Large-scale multi-label text classification is a challenging Natural Language Processing (NLP) problem concerned with text classification over datasets with thousands of labels. We tackle this problem in the legal domain, where datasets such as JRC-Acquis and EURLEX57K, labeled with the EuroVoc vocabulary, were created within the legal information systems of the European Union. The EuroVoc taxonomy includes around 7,000 concepts. In this work, we study the performance of various recent transformer-based models in combination with strategies such as generative pretraining, gradual unfreezing, and discriminative learning rates to reach competitive classification performance, and we present new state-of-the-art results of 0.661 (F1) for JRC-Acquis and 0.754 (F1) for EURLEX57K. Furthermore, we quantify the impact of individual steps, such as language model fine-tuning and gradual unfreezing, in an ablation study, and provide reference dataset splits created with an iterative stratification algorithm.
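The fine-tuning strategies mentioned in the abstract can be illustrated with a minimal sketch. Gradual unfreezing trains only the top encoder layer first and unfreezes one additional layer per epoch, while discriminative learning rates assign a smaller learning rate to lower layers. The function names, the decay factor, and the base learning rate below are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical sketch of two fine-tuning strategies for a layered encoder:
# discriminative (layer-wise) learning rates and a gradual-unfreezing schedule.
# All parameter values here are illustrative, not the paper's settings.

def discriminative_lrs(num_layers, base_lr=2e-5, decay=0.95):
    """Top layer gets base_lr; each layer below is scaled down by `decay`.
    Returns learning rates ordered from the bottom layer to the top layer."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

def unfreeze_schedule(num_layers, epochs):
    """For each epoch, list the trainable layer indices: start with only the
    top layer and unfreeze one additional (lower) layer every epoch."""
    schedule = []
    for epoch in range(epochs):
        first_trainable = max(0, num_layers - 1 - epoch)
        schedule.append(list(range(first_trainable, num_layers)))
    return schedule
```

In practice these per-layer learning rates would be passed as per-parameter-group options to the optimizer, and the schedule would toggle `requires_grad` on the corresponding layer parameters before each epoch.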
Pages: 7 to 17
Copyright: Copyright (c) IARIA, 2020
Publication date: October 25, 2020
Published in: conference
ISSN: 2308-4510
ISBN: 978-1-61208-813-6
Location: Nice, France
Dates: from October 25, 2020 to October 29, 2020