
dc.contributor.advisor  He, Zhihai
dc.contributor.author  Li, Yang
dc.date.issued  2021
dc.date.submitted  2021 Fall
dc.description.abstract  Deep neural networks have achieved remarkable performance in many computer vision applications such as image classification, object detection, instance segmentation, image retrieval, and person re-identification. However, to achieve the desired performance, deep neural networks often need a tremendously large set of labeled training samples to learn their huge network models. Labeling a large dataset is labor-intensive, time-consuming, and sometimes requires expert knowledge. In this research, we study the following important question: how to train deep neural networks with very few or even no labeled samples? This leads to our research tasks in the following two major areas: semi-supervised and unsupervised learning. Specifically, for semi-supervised learning, we developed two major approaches. The first is the Snowball approach, which learns a deep neural network from very few samples based on iterative model evolution and confident sample discovery. The second is the learned model composition approach, which composes more efficient master networks from the student models of past iterations through a network learning process. Critical sample discovery is developed to find new critical unlabeled samples near the model decision boundary and provide the master model with look-ahead access to these samples to enhance its guidance capability. For unsupervised learning, we have explored two major ideas. The first is transformed attention consistency, where the network is learned from self-supervision information across images instead of within a single image. The second is spatial assembly networks for image representation learning: we introduce a new learnable module, called the spatial assembly network (SAN), which performs a learned re-organization and assembly of feature points and improves the network's ability to handle spatial variations and structural changes of the image scene. Our experimental results on benchmark datasets demonstrate that the proposed methods significantly improve the state of the art in semi-supervised and unsupervised learning, outperforming existing methods by large margins.
dc.description.bibref  Includes bibliographical references.
dc.format.extent  xvii, 132 pages : illustrations (color)
dc.identifier.uri  https://hdl.handle.net/10355/93236
dc.identifier.uri  https://doi.org/10.32469/10355/93236
dc.language  English
dc.publisher  University of Missouri--Columbia
dc.title  Deep learning with very few and no labels
dc.type  Thesis
thesis.degree.discipline  Electrical and computer engineering (MU)
thesis.degree.level  Doctoral
thesis.degree.name  Ph. D.
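The Snowball approach summarized in the abstract rests on the general self-training pattern: train a model on a small labeled pool, discover unlabeled samples the model is confident about, pseudo-label them, and iterate. The sketch below is a minimal, generic illustration of that pattern only; the dataset, classifier, confidence threshold, and iteration count are illustrative assumptions and do not reflect the dissertation's actual networks or sample-discovery criteria.

    # Generic self-training sketch: iteratively grow the labeled pool with
    # confidently pseudo-labeled samples. All settings here are assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = rng.choice(len(X), size=40, replace=False)   # "very few" labels
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

    X_lab, y_lab = X[labeled], y[labeled]
    X_unl = X[unlabeled]

    for it in range(5):                                     # iterative model evolution
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        proba = model.predict_proba(X_unl)
        conf = proba.max(axis=1)
        pick = conf >= 0.95                                 # confident sample discovery
        if not pick.any():
            break
        pseudo = model.classes_[proba[pick].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unl[pick]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unl = X_unl[~pick]
        print(f"iter {it}: added {int(pick.sum())} pseudo-labeled samples, "
              f"labeled pool = {len(y_lab)}")

In practice the confidence threshold and the schedule for admitting new samples control how quickly the labeled pool grows and how much pseudo-label noise is introduced; the dissertation's confident and critical sample discovery address exactly this trade-off.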

