
dc.contributor.advisor: Lee, Yugyung, 1960-
dc.contributor.author: Gharibi, Gharib
dc.date.issued: 2020
dc.date.submitted: 2020 Fall
dc.description: Title from PDF of title page viewed March 1, 2021
dc.description: Dissertation advisor: Yugyung Lee
dc.description: Vita
dc.description: Includes bibliographical references (pages 115-125)
dc.description: Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2020
dc.description.abstract: Deep learning has improved state-of-the-art results in an ever-growing number of domains. This success relies heavily on the development of deep learning models--an experimental, iterative process that produces tens to hundreds of models before arriving at a satisfactory result. While there has been a surge in the number of tools and frameworks that aim to facilitate deep learning, the process of managing models and their artifacts is still surprisingly challenging and time-consuming. Existing model-management solutions are either tailored to commercial platforms or require significant code changes. Moreover, most existing solutions address a single phase of the modeling lifecycle, such as experiment monitoring, while ignoring other essential tasks, such as model sharing and deployment. In this dissertation, we present ModelKB, a software system that facilitates and accelerates the deep learning lifecycle. ModelKB can automatically manage the modeling lifecycle end-to-end, including (1) monitoring and tracking experiments; (2) visualizing, searching for, and comparing models and experiments; (3) deploying models locally and on the cloud; and (4) sharing and publishing trained models. Our system also provides a stepping-stone toward enhanced reproducibility. ModelKB currently supports TensorFlow 2.0, Keras, and PyTorch, and it can easily be extended to other deep learning frameworks. A video demo is available at https://youtu.be/XWiJpSM_jvA. Moreover, we study static call graphs as a stepping-stone toward comprehension of the overall lifecycle implementation (i.e., the source code). Specifically, we introduce Code2Graph, a tool that constructs and visualizes the call graph of a software codebase, facilitating the exploration and tracking of the implementation and its changes over time. We evaluate its functionality by analyzing real software systems throughout their entire lifespans. The tool, evaluation results, and a video demo are available at https://goo.gl/8edZ64. Finally, we demonstrate Medl.AI, a software system that brings the above contributions together into a robust, open-collaborative platform for deep learning applications in the health domain. Medl.AI enables researchers and healthcare professionals to easily and efficiently explore, share, reuse, and discuss deep learning models specific to the medical domain. We present six illustrative medical deep learning applications built with Medl.AI and conduct an online survey to assess its feasibility and benefits. The user study suggests that Medl.AI provides a promising platform for open, collaborative research and applications. Our live website is available at http://medl.ai.
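
The two systems named in the abstract lend themselves to brief illustrations. First, a minimal sketch of ModelKB-style experiment tracking: ModelKB hooks into supported frameworks automatically, whereas this sketch approximates the idea with an explicit Keras callback. The class name ExperimentTracker and its parameters are illustrative assumptions, not ModelKB's actual API.

```python
# Illustrative sketch of automatic experiment tracking; NOT ModelKB's code.
import json
import time

from tensorflow import keras  # assumes TensorFlow 2.x is installed


class ExperimentTracker(keras.callbacks.Callback):
    """Append per-epoch metrics to a JSON record after every epoch."""

    def __init__(self, run_name: str, out_path: str):
        super().__init__()
        self.out_path = out_path
        self.record = {"run": run_name, "started": time.time(), "epochs": []}

    def on_epoch_end(self, epoch, logs=None):
        # `logs` carries metrics such as loss and accuracy; cast values to
        # plain floats so the record stays JSON-serializable.
        metrics = {k: float(v) for k, v in (logs or {}).items()}
        self.record["epochs"].append({"epoch": epoch, **metrics})
        with open(self.out_path, "w") as f:
            json.dump(self.record, f, indent=2)


# Usage: model.fit(x, y, epochs=5,
#                  callbacks=[ExperimentTracker("run-1", "run-1.json")])
```

Second, a sketch of the kind of static call-graph extraction Code2Graph performs, using only Python's standard-library ast module. It covers only functions and direct name(...) calls (no attribute resolution or cross-module linking) and is not the Code2Graph implementation itself.

```python
# Illustrative sketch of static call-graph extraction; NOT Code2Graph itself.
import ast


def build_call_graph(source: str) -> dict:
    """Map each function defined in `source` to the names it calls directly."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = sorted({
                call.func.id
                for call in ast.walk(node)
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            })
    return graph


sample = '''
def load(path):
    return open(path).read()

def main():
    print(load("models.txt"))
'''

for caller, callees in build_call_graph(sample).items():
    print(caller, "->", callees)
# Prints:
# load -> ['open']
# main -> ['load', 'print']
```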
dc.description.tableofcontents: Introduction -- Background and Challenges -- Automated Management of the Modeling Lifecycle -- Facilitating Program Comprehension -- Medl.AI: An Application of ModelKB -- Conclusions
dc.format.extent: xiii, 125 pages
dc.identifier.uri: https://hdl.handle.net/10355/80790
dc.subject.lcsh: Machine learning
dc.subject.other: Dissertation -- University of Missouri--Kansas City -- Computer science
dc.title: Automated End-to-End Management of the Deep Learning Lifecycle
thesis.degree.discipline: Computer Science (UMKC)
thesis.degree.grantor: University of Missouri--Kansas City
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D.

