
Smoothness of Functions Learned by Neural Networks
dc.contributor.advisor: Musil, Tomáš
dc.creator: Volhejn, Václav
dc.date.accessioned: 2020-07-28T10:03:54Z
dc.date.available: 2020-07-28T10:03:54Z
dc.date.issued: 2020
dc.identifier.uri: http://hdl.handle.net/20.500.11956/119446
dc.description.abstract [en_US]: Modern neural networks can easily fit their training set perfectly. Surprisingly, they generalize well despite being "overfit" in this way, defying the bias-variance trade-off. A prevalent explanation is that stochastic gradient descent has an implicit bias which leads it to learn functions that are simple, and these simple functions generalize well. However, the specifics of this implicit bias are not well understood. In this work, we explore the hypothesis that SGD is implicitly biased towards learning functions that are smooth. We propose several measures to formalize the intuitive notion of smoothness, and conduct experiments to determine whether these measures are implicitly being optimized for. We exclude the possibility that smoothness measures based on first derivatives (the gradient) are being implicitly optimized for. Measures based on second derivatives (the Hessian), on the other hand, show promising results.
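The abstract does not spell out how a first-derivative smoothness measure might be defined; the thesis itself should be consulted for the actual definitions. Purely as a hypothetical illustration, one simple gradient-based measure is the average Euclidean norm of a function's input gradient over sample points, estimated here with central finite differences (the function `gradient_norm_measure` and its parameters are assumptions for this sketch, not taken from the thesis):

```python
import numpy as np

def gradient_norm_measure(f, xs, eps=1e-5):
    """Average Euclidean norm of the numerical gradient of f over points xs.

    A first-derivative ("gradient-based") smoothness measure:
    lower values correspond to flatter, smoother functions.
    """
    norms = []
    for x in xs:
        x = np.asarray(x, dtype=float)
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            # Central finite difference along coordinate i.
            grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
        norms.append(np.linalg.norm(grad))
    return float(np.mean(norms))

# Example: a linear function f(x) = 2 * sum(x) has gradient (2, ..., 2),
# so the measure equals 2 * sqrt(d) at every point.
print(gradient_norm_measure(lambda x: 2 * x.sum(), [np.zeros(3), np.ones(3)]))
```

In practice, `f` would be a trained network's output and `xs` a sample of inputs; the thesis's measures may differ in normalization and in how sample points are drawn.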
dc.language [cs_CZ]: English
dc.language.iso: en_US
dc.publisher [cs_CZ]: Univerzita Karlova, Matematicko-fyzikální fakulta
dc.subject [en_US]: machine learning
dc.subject [en_US]: neural networks
dc.subject [en_US]: smoothness
dc.subject [en_US]: generalization
dc.subject [cs_CZ]: strojové učení
dc.subject [cs_CZ]: neuronové sítě
dc.subject [cs_CZ]: hladkost
dc.subject [cs_CZ]: zobecňování
dc.title [en_US]: Smoothness of Functions Learned by Neural Networks
dc.type [cs_CZ]: bakalářská práce
dcterms.created: 2020
dcterms.dateAccepted: 2020-07-07
dc.description.department [en_US]: Institute of Formal and Applied Linguistics
dc.description.department [cs_CZ]: Ústav formální a aplikované lingvistiky
dc.description.faculty [cs_CZ]: Matematicko-fyzikální fakulta
dc.description.faculty [en_US]: Faculty of Mathematics and Physics
dc.identifier.repId: 224645
dc.title.translated [cs_CZ]: Hladkost funkcí naučených neuronovými sítěmi
dc.contributor.referee: Straka, Milan
thesis.degree.name: Bc.
thesis.degree.level [cs_CZ]: bakalářské
thesis.degree.discipline [cs_CZ]: Obecná informatika
thesis.degree.discipline [en_US]: General Computer Science
thesis.degree.program [en_US]: Computer Science
thesis.degree.program [cs_CZ]: Informatika
uk.thesis.type [cs_CZ]: bakalářská práce
uk.taxonomy.organization-cs [cs_CZ]: Matematicko-fyzikální fakulta::Ústav formální a aplikované lingvistiky
uk.taxonomy.organization-en [en_US]: Faculty of Mathematics and Physics::Institute of Formal and Applied Linguistics
uk.faculty-name.cs [cs_CZ]: Matematicko-fyzikální fakulta
uk.faculty-name.en [en_US]: Faculty of Mathematics and Physics
uk.faculty-abbr.cs [cs_CZ]: MFF
uk.degree-discipline.cs [cs_CZ]: Obecná informatika
uk.degree-discipline.en [en_US]: General Computer Science
uk.degree-program.cs [cs_CZ]: Informatika
uk.degree-program.en [en_US]: Computer Science
thesis.grade.cs [cs_CZ]: Výborně
thesis.grade.en [en_US]: Excellent
uk.file-availability: V
uk.grantor [cs_CZ]: Univerzita Karlova, Matematicko-fyzikální fakulta, Ústav formální a aplikované lingvistiky
thesis.grade.code: 1
uk.publication-place [cs_CZ]: Praha




