Is it Possible to Re-educate RoBERTa? Expert-driven Machine Learning for Punctuation Correction

Note

This publication falls under the Faculty of Arts, not the Faculty of Education. The official page of the publication is on the muni.cz website.
Authors

MACHURA Jakub, ŽIŽKOVÁ Hana, FRÉMUND Adam, ŠVEC Jan

Year of publication: 2023
Type: Article in a peer-reviewed journal
Journal / Source: Jazykovedný časopis
Faculty / MU workplace

Faculty of Arts

WWW: https://www.juls.savba.sk/ediela/jc/2023/1/jc23-01.pdf
DOI: http://dx.doi.org/10.2478/jazcas-2023-0052
Keywords: comma; Czech; vocative; machine learning; RoBERTa
Description: Although Czech rule-based tools for automatic punctuation insertion rely on extensive grammar and achieve respectable precision, pre-trained Transformers outperform rule-based systems in both precision and recall [hidden reference]. The Czech pre-trained RoBERTa model achieves excellent results, yet certain phenomena are ignored and the model still makes occasional errors. This paper investigates whether the RoBERTa language model can be retrained to increase the number of sentence commas it correctly detects. We have chosen a very specific and narrow type of sentence comma, namely the comma delimiting vocative phrases, which is clearly defined in the grammar and is very often omitted by writers. The chosen approaches were further tested and evaluated on different types of texts.
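The article does not include code, but a common way to frame comma restoration for a Transformer such as RoBERTa is per-token classification: strip the commas from punctuated text and label each remaining token with whether a comma should follow it. The sketch below shows only that label-extraction step; the helper name and the example sentence (a Czech vocative, "Jakube") are illustrative assumptions, not taken from the paper.

```python
def comma_labels(punctuated: str):
    """Turn a punctuated sentence into (tokens, labels),
    where label 1 means a comma should follow the token."""
    tokens, labels = [], []
    for raw in punctuated.split():
        if raw.endswith(","):
            tokens.append(raw[:-1])  # drop the comma from the surface form
            labels.append(1)
        else:
            tokens.append(raw)
            labels.append(0)
    return tokens, labels

# Vocative phrase "Jakube" delimited by commas on both sides
tokens, labels = comma_labels("Dobrý den, Jakube, jak se máte?")
# tokens: ['Dobrý', 'den', 'Jakube', 'jak', 'se', 'máte?']
# labels: [0, 1, 1, 0, 0, 0]
```

Pairs like these would then serve as training targets for a token-classification head on top of the pre-trained model.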
Related projects:
