
Multilingual Learning using Syntactic Multi-Task Training (Vícejazyčné učení pomocí víceúlohového trénování syntaxe)
dc.contributor.advisor: Straka, Milan
dc.creator: Kondratyuk, Daniel
dc.date.accessioned: 2019-07-04T10:08:52Z
dc.date.available: 2019-07-04T10:08:52Z
dc.date.issued: 2019
dc.identifier.uri: http://hdl.handle.net/20.500.11956/107286
dc.description.abstract: Recent research has shown promise in multilingual modeling, demonstrating how a single model is capable of learning tasks across several languages. However, typical recurrent neural models fail to scale beyond a small number of related languages and can be quite detrimental if multiple distant languages are grouped together for training. This thesis introduces a simple method that does not have this scaling problem, producing a single multi-task model that predicts universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for 124 Universal Dependencies treebanks across 75 languages. By leveraging the multilingual BERT model pretrained on 104 languages, we apply several modifications and fine-tune it on all available Universal Dependencies training data. The resulting model, which we call UDify, can closely match or exceed state-of-the-art UPOS, UFeats, Lemmas, and (especially) UAS and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate UDify for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT...
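The abstract describes a single shared encoder feeding one classifier per UD task (UPOS, UFeats, lemmas, dependencies). The core idea can be sketched as below; all dimensions, names, and the use of plain linear heads over random vectors are illustrative assumptions, not the thesis's actual configuration (UDify fine-tunes multilingual BERT itself).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the thesis): mBERT's real hidden size is 768.
HIDDEN = 32    # shared encoder hidden size
N_UPOS = 17    # universal POS tagset size
N_FEATS = 50   # placeholder label-set size for UFeats strings
SEQ_LEN = 6    # tokens in the example sentence

# Stand-in for the shared multilingual encoder output (e.g. fine-tuned
# mBERT): one hidden vector per token, reused by every task head.
shared_encoding = rng.standard_normal((SEQ_LEN, HIDDEN))

def linear_head(n_labels):
    """One task-specific classifier: a single weight matrix plus bias."""
    W = rng.standard_normal((HIDDEN, n_labels)) * 0.1
    b = np.zeros(n_labels)
    return lambda h: h @ W + b

# Each task gets its own head, but all heads read the same representation,
# which is what lets one model predict every task for every language.
heads = {"upos": linear_head(N_UPOS), "ufeats": linear_head(N_FEATS)}

predictions = {task: head(shared_encoding).argmax(axis=-1)
               for task, head in heads.items()}
```

During training, each head's loss would be computed against its own gold labels and the losses summed, so gradients from all tasks update the shared encoder jointly.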
dc.language: English
dc.language.iso: en_US
dc.publisher: Univerzita Karlova, Matematicko-fyzikální fakulta
dc.title: Multilingual Learning using Syntactic Multi-Task Training
dc.type: master's thesis (diplomová práce)
dcterms.created: 2019
dcterms.dateAccepted: 2019-06-11
dc.description.department: Ústav formální a aplikované lingvistiky
dc.description.department: Institute of Formal and Applied Linguistics
dc.description.faculty: Faculty of Mathematics and Physics
dc.description.faculty: Matematicko-fyzikální fakulta
dc.identifier.repId: 211732
dc.title.translated: Vícejazyčné učení pomocí víceúlohového trénování syntaxe
dc.contributor.referee: Mareček, David
thesis.degree.name: Mgr.
thesis.degree.level: navazující magisterské (follow-up master's)
thesis.degree.discipline: Matematická lingvistika
thesis.degree.discipline: Computational Linguistics
thesis.degree.program: Informatika
thesis.degree.program: Computer Science
uk.thesis.type: master's thesis (diplomová práce)
uk.taxonomy.organization-cs: Matematicko-fyzikální fakulta::Ústav formální a aplikované lingvistiky
uk.taxonomy.organization-en: Faculty of Mathematics and Physics::Institute of Formal and Applied Linguistics
uk.faculty-name.cs: Matematicko-fyzikální fakulta
uk.faculty-name.en: Faculty of Mathematics and Physics
uk.faculty-abbr.cs: MFF
uk.degree-discipline.cs: Matematická lingvistika
uk.degree-discipline.en: Computational Linguistics
uk.degree-program.cs: Informatika
uk.degree-program.en: Computer Science
thesis.grade.cs: Výborně
thesis.grade.en: Excellent
uk.file-availability: V
uk.publication.place: Praha
uk.grantor: Univerzita Karlova, Matematicko-fyzikální fakulta, Ústav formální a aplikované lingvistiky
thesis.grade.code: 1



