Assessing linguistic generalisation in language models: a dataset for Brazilian Portuguese

Wilkens R, Zilio L, Villavicencio A (2024)


Publication Language: English

Publication Type: Journal article

Publication year: 2024

Journal: Language Resources and Evaluation

Journal Volume: 58

Pages Range: 175-201

Journal Issue: 1

DOI: 10.1007/s10579-023-09664-1

Abstract

Much recent effort has been devoted to creating large-scale language models. Nowadays, the most prominent approaches are based on deep neural networks, such as BERT. However, they lack transparency and interpretability, and are often seen as black boxes. This affects not only their applicability in downstream tasks but also the comparability of different architectures or even of the same model trained using different corpora or hyperparameters. In this paper, we propose a set of intrinsic evaluation tasks that inspect the linguistic information encoded in models developed for Brazilian Portuguese. These tasks are designed to evaluate how different language models generalise information related to grammatical structures and multiword expressions (MWEs), thus allowing for an assessment of whether the model has learned different linguistic phenomena. The dataset that was developed for these tasks is composed of a series of sentences with a single masked word and a cue phrase that helps in narrowing down the context. This dataset is divided into MWEs and grammatical structures, and the latter is subdivided into six tasks: impersonal verbs, subject agreement, verb agreement, nominal agreement, passive and connectors. The subset for MWEs was used to test BERTimbau Large, BERTimbau Base and mBERT. For the grammatical structures, we used only BERTimbau Large, because it yielded the best results in the MWE task. In both cases, we evaluated the results considering the best candidates and the top ten candidates. The evaluation was done both automatically (for MWEs) and manually (for grammatical structures). The results obtained for MWEs show that BERTimbau Large surpassed the other two models in predicting the correct masked element. However, the average accuracy of the best model was only 52% when only the best candidates were considered for each sentence, going up to 66% when the top ten candidates were taken into account. As for the grammatical tasks, the results showed better prediction performance, but also varied depending on the type of morphosyntactic agreement. On the one hand, cases such as connectors and impersonal verbs, which do not require any agreement in the produced candidates, reached precisions of 100% and 98.78% among the best candidates. On the other hand, tasks that require morphosyntactic agreement had results consistently below 90% overall precision, with the lowest scores being reported for nominal agreement and verb agreement, both having scores below 80% in overall precision among the best candidates. Therefore, we identified that a critical and widely adopted resource for Brazilian Portuguese NLP presents issues concerning MWE vocabulary and morphosyntactic agreement, even though it performs well in most cases. These models are a core component in many NLP systems, and our findings demonstrate the need for additional improvements in these models and the importance of widely evaluating computational representations of language.
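
The evaluation setup described in the abstract (predicting a single masked word and checking the best candidate and the top ten candidates) can be approximated with the Hugging Face fill-mask pipeline. The sketch below is not the authors' released code or dataset: it uses the publicly available BERTimbau Large checkpoint (neuralmind/bert-large-portuguese-cased) and a hypothetical test sentence and gold word, purely to illustrate the top-1 / top-10 candidate check.

# Minimal sketch (Python), assuming the Hugging Face transformers library and the
# public BERTimbau Large checkpoint; the sentence and gold word are illustrative,
# not items from the paper's dataset.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="neuralmind/bert-large-portuguese-cased")

# A sentence with a single masked word; in the paper's dataset each item also
# carries a cue phrase that narrows down the intended context.
sentence = "Ela tomou uma [MASK] dificil depois da reuniao."
gold = "decisão"  # hypothetical gold target for this illustrative sentence

# Retrieve the ten most probable fillers for the masked position.
predictions = fill_mask(sentence, top_k=10)
candidates = [p["token_str"].strip() for p in predictions]

top1_correct = candidates[0] == gold   # accuracy over the best candidates
top10_correct = gold in candidates     # accuracy over the top ten candidates
print(candidates[0], top1_correct, top10_correct)

Note that the fill-mask pipeline only proposes single-token candidates, which mirrors the single masked word per sentence in the dataset; swapping in BERTimbau Base or mBERT only requires changing the model identifier.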

How to cite

APA:

Wilkens, R., Zilio, L., & Villavicencio, A. (2024). Assessing linguistic generalisation in language models: a dataset for Brazilian Portuguese. Language Resources and Evaluation, 58(1), 175-201. https://doi.org/10.1007/s10579-023-09664-1

MLA:

Wilkens, Rodrigo, Leonardo Zilio, and Aline Villavicencio. "Assessing linguistic generalisation in language models: a dataset for Brazilian Portuguese." Language Resources and Evaluation 58.1 (2024): 175-201.