
Performance in Software Development Cycle: Regression Benchmarking
dc.contributor.advisor: Tůma, Petr
dc.creator: Kalibera, Tomáš
dc.date.accessioned: 2018-11-30T11:11:17Z
dc.date.available: 2018-11-30T11:11:17Z
dc.date.issued: 2006
dc.identifier.uri: http://hdl.handle.net/20.500.11956/7495
dc.description.abstract: The development cycle of large software is necessarily prone to introducing software errors that are hard to find and fix. Automated regular testing (regression testing) is a popular method for reducing the cost of finding and fixing functionality errors, but it neglects software performance. The thesis focuses on performance errors, enabling automated detection of performance changes during software development (regression benchmarking). The key problem investigated is non-determinism in computer systems, which causes performance fluctuations. The problem is addressed by a novel benchmarking methodology based on statistical methods. The methodology is evaluated on the large open-source project Mono, where it has detected daily performance changes since August 2004, and on the open-source CORBA implementations omniORB and TAO. Benchmark automation is a complex task in itself. As suggested by experience with compiling the weather forecast model Arpege/Aladin and implementing the SOFA component model, large systems place distinctive demands on tasks such as automated compilation or execution. Complemented by experience from benchmarking Mono, the thesis proposes an architecture for a generic environment for automated regression benchmarking. The environment is being implemented by master students under the supervision of... [en_US]
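
The abstract describes detecting performance changes between builds by applying statistical methods to noisy benchmark results. As a minimal sketch of that general idea (not the methodology from the thesis itself), the following Python fragment compares repeated benchmark runs from two builds using Welch's t-test; the sample data and the decision threshold are illustrative assumptions.

# Minimal sketch of statistical regression detection between two builds.
# The benchmark samples, the threshold, and the use of Welch's t-test
# are illustrative assumptions, not the methodology from the thesis.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    # Welch's t statistic for two independent samples with unequal variances.
    va = stdev(a) ** 2 / len(a)
    vb = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# Hypothetical response times (ms) from repeated benchmark runs.
old_build = [102.1, 101.8, 103.0, 102.4, 101.9, 102.7]
new_build = [105.3, 104.9, 105.8, 105.1, 104.6, 105.5]

t = welch_t(new_build, old_build)
# Flag a performance change when the statistic exceeds an assumed
# critical value (roughly the two-sided 99% level for ~10 d.o.f.).
if abs(t) > 3.17:
    print("performance change detected (t = %.2f)" % t)

A real regression benchmarking setup would collect many more samples across independent executions and would have to account for the non-determinism the abstract mentions, which a single naive two-sample test does not capture.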
dc.language: English [cs_CZ]
dc.language.iso: en_US
dc.publisher: Univerzita Karlova, Matematicko-fyzikální fakulta [cs_CZ]
dc.title: Performance in Software Development Cycle: Regression Benchmarking [en_US]
dc.type: dizertační práce (doctoral dissertation) [cs_CZ]
dcterms.created: 2006
dcterms.dateAccepted: 2006-09-19
dc.description.department: Katedra softwarového inženýrství [cs_CZ]
dc.description.department: Department of Software Engineering [en_US]
dc.description.faculty: Faculty of Mathematics and Physics [en_US]
dc.description.faculty: Matematicko-fyzikální fakulta [cs_CZ]
dc.identifier.repId: 40879
dc.title.translated: Performance in Software Development Cycle: Regression Benchmarking [cs_CZ]
dc.contributor.referee: Hauswirth, Matthias
dc.contributor.referee: Eeckhout, Lieven
dc.identifier.aleph: 000851914
thesis.degree.name: Ph.D.
thesis.degree.level: doktorské (doctoral) [cs_CZ]
thesis.degree.discipline: Softwarové systémy [cs_CZ]
thesis.degree.discipline: Software Systems [en_US]
thesis.degree.program: Informatics [en_US]
thesis.degree.program: Informatika [cs_CZ]
uk.thesis.type: dizertační práce (doctoral dissertation) [cs_CZ]
uk.taxonomy.organization-cs: Matematicko-fyzikální fakulta::Katedra softwarového inženýrství [cs_CZ]
uk.taxonomy.organization-en: Faculty of Mathematics and Physics::Department of Software Engineering [en_US]
uk.faculty-name.cs: Matematicko-fyzikální fakulta [cs_CZ]
uk.faculty-name.en: Faculty of Mathematics and Physics [en_US]
uk.faculty-abbr.cs: MFF [cs_CZ]
uk.degree-discipline.cs: Softwarové systémy [cs_CZ]
uk.degree-discipline.en: Software Systems [en_US]
uk.degree-program.cs: Informatika [cs_CZ]
uk.degree-program.en: Informatics [en_US]
thesis.grade.cs: Prospěl/a (Pass) [cs_CZ]
thesis.grade.en: Pass [en_US]
uk.file-availability: V
uk.publication.place: Praha [cs_CZ]
uk.grantor: Univerzita Karlova, Matematicko-fyzikální fakulta, Katedra softwarového inženýrství [cs_CZ]
thesis.grade.code: P
dc.identifier.lisID: 990008519140106986


Files in this item


This item appears in the following collections
