dc.contributor.advisor | Straka, Milan | |
dc.creator | Kondratyuk, Daniel | |
dc.date.accessioned | 2019-07-04T10:08:52Z | |
dc.date.available | 2019-07-04T10:08:52Z | |
dc.date.issued | 2019 | |
dc.identifier.uri | http://hdl.handle.net/20.500.11956/107286 | |
dc.description.abstract | Recent research has shown promise in multilingual modeling, demonstrating how a single model is capable of learning tasks across several languages. However, typical recurrent neural models fail to scale beyond a small number of related languages, and grouping multiple distant languages together for training can be quite detrimental. This thesis introduces a simple method that does not have this scaling problem, producing a single multi-task model that predicts universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for 124 Universal Dependencies treebanks across 75 languages. By leveraging the multilingual BERT model pretrained on 104 languages, we apply several modifications and fine-tune it on all available Universal Dependencies training data. The resulting model, which we call UDify, can closely match or exceed state-of-the-art UPOS, UFeats, Lemmas, and especially UAS and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate UDify for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT... | en_US |
dc.language | English | cs_CZ |
dc.language.iso | en_US | |
dc.publisher | Univerzita Karlova, Matematicko-fyzikální fakulta | cs_CZ |
dc.title | Multilingual Learning using Syntactic Multi-Task Training | en_US |
dc.type | diplomová práce | cs_CZ |
dcterms.created | 2019 | |
dcterms.dateAccepted | 2019-06-11 | |
dc.description.department | Ústav formální a aplikované lingvistiky | cs_CZ |
dc.description.department | Institute of Formal and Applied Linguistics | en_US |
dc.description.faculty | Faculty of Mathematics and Physics | en_US |
dc.description.faculty | Matematicko-fyzikální fakulta | cs_CZ |
dc.identifier.repId | 211732 | |
dc.title.translated | Vícejazyčné učení pomocí víceúlohového trénování syntaxe | cs_CZ |
dc.contributor.referee | Mareček, David | |
thesis.degree.name | Mgr. | |
thesis.degree.level | navazující magisterské | cs_CZ |
thesis.degree.discipline | Matematická lingvistika | cs_CZ |
thesis.degree.discipline | Computational Linguistics | en_US |
thesis.degree.program | Informatika | cs_CZ |
thesis.degree.program | Computer Science | en_US |
uk.thesis.type | diplomová práce | cs_CZ |
uk.taxonomy.organization-cs | Matematicko-fyzikální fakulta::Ústav formální a aplikované lingvistiky | cs_CZ |
uk.taxonomy.organization-en | Faculty of Mathematics and Physics::Institute of Formal and Applied Linguistics | en_US |
uk.faculty-name.cs | Matematicko-fyzikální fakulta | cs_CZ |
uk.faculty-name.en | Faculty of Mathematics and Physics | en_US |
uk.faculty-abbr.cs | MFF | cs_CZ |
uk.degree-discipline.cs | Matematická lingvistika | cs_CZ |
uk.degree-discipline.en | Computational Linguistics | en_US |
uk.degree-program.cs | Informatika | cs_CZ |
uk.degree-program.en | Computer Science | en_US |
thesis.grade.cs | Výborně | cs_CZ |
thesis.grade.en | Excellent | en_US |
uk.abstract.en | Recent research has shown promise in multilingual modeling, demonstrating how a single model is capable of learning tasks across several languages. However, typical recurrent neural models fail to scale beyond a small number of related languages, and grouping multiple distant languages together for training can be quite detrimental. This thesis introduces a simple method that does not have this scaling problem, producing a single multi-task model that predicts universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for 124 Universal Dependencies treebanks across 75 languages. By leveraging the multilingual BERT model pretrained on 104 languages, we apply several modifications and fine-tune it on all available Universal Dependencies training data. The resulting model, which we call UDify, can closely match or exceed state-of-the-art UPOS, UFeats, Lemmas, and especially UAS and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate UDify for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT... | en_US |
uk.file-availability | V | |
uk.publication.place | Praha | cs_CZ |
uk.grantor | Univerzita Karlova, Matematicko-fyzikální fakulta, Ústav formální a aplikované lingvistiky | cs_CZ |
thesis.grade.code | 1 | |
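The abstract above describes multi-task fine-tuning of multilingual BERT on all Universal Dependencies treebanks at once. The following is only a minimal sketch of that general idea, not the thesis's actual UDify implementation (which is built with AllenNLP and uses layer attention, a biaffine dependency parser, and edit-script lemmatization); the label inventory sizes, head layout, and class name below are illustrative assumptions.

    # Minimal sketch: one shared multilingual BERT encoder with a small
    # classification head per UD task, fine-tuned jointly. Assumed, not
    # the thesis's implementation; label sizes are placeholders.
    import torch
    from torch import nn
    from transformers import AutoModel, AutoTokenizer

    class MultiTaskUDSketch(nn.Module):
        def __init__(self, n_upos=17, n_feats=300, n_deprels=50):
            super().__init__()
            # Shared encoder pretrained on 104 languages.
            self.encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
            hidden = self.encoder.config.hidden_size
            # One lightweight head per task (token-level classification).
            self.upos_head = nn.Linear(hidden, n_upos)       # universal POS tags
            self.feats_head = nn.Linear(hidden, n_feats)     # morphological feature bundles
            self.deprel_head = nn.Linear(hidden, n_deprels)  # dependency relation labels

        def forward(self, input_ids, attention_mask):
            states = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            return {
                "upos": self.upos_head(states),
                "feats": self.feats_head(states),
                "deprel": self.deprel_head(states),
            }

    # Usage: tokenize a sentence and sum per-task losses so the shared
    # encoder is fine-tuned on all tasks (and treebanks) simultaneously.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = MultiTaskUDSketch()
    batch = tokenizer(["Je to jednoduchá věta ."], return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    # loss = sum(nn.functional.cross_entropy(logits[t].transpose(1, 2), gold[t])
    #            for t in logits)  # gold labels per task, aligned to subwords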