    Deep learning with very few and no labels

    Li, Yang
    View/Open
    [PDF] LiYangResearch.pdf (22.73 MB)
    Date
    2021
    Format
    Thesis
    Abstract
    Deep neural networks have achieved remarkable performance in many computer vision applications, such as image classification, object detection, instance segmentation, image retrieval, and person re-identification. However, to achieve the desired performance, deep neural networks often need a tremendously large set of labeled training samples to learn their huge network models. Labeling a large dataset is labor-intensive, time-consuming, and sometimes requires expert knowledge. In this research, we study the following important question: how can deep neural networks be trained with very few or even no labeled samples? This leads to research tasks in two major areas: semi-supervised and unsupervised learning.

    For semi-supervised learning, we developed two major approaches. The first is the Snowball approach, which learns a deep neural network from very few samples through iterative model evolution and confident sample discovery. The second is the learned model composition approach, which composes more efficient master networks from student models of past iterations through a network learning process. Critical sample discovery is developed to find new critical unlabeled samples near the model decision boundary and give the master model lookahead access to these samples to enhance its guidance capability.

    For unsupervised learning, we explored two major ideas. The first is transformed attention consistency, where the network is learned from self-supervision information across images instead of within one single image. The second is spatial assembly networks for image representation learning. We introduce a new learnable module, the spatial assembly network (SAN), which performs a learned re-organization and assembly of feature points and improves the network's ability to handle spatial variations and structural changes of the image scene.

    Our experimental results on benchmark datasets demonstrate that the proposed methods significantly improve the state of the art in semi-supervised and unsupervised learning, outperforming existing methods by large margins.
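    The Snowball approach described above is, at its heart, an iterative self-training loop: train on the small labeled pool, promote the most confident predictions on unlabeled data to pseudo-labels, and retrain on the grown pool. The following is a minimal PyTorch sketch of that loop under stated assumptions; SmallNet, the confidence threshold, and the training schedule are illustrative placeholders, not the dissertation's actual architecture or procedure.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SmallNet(nn.Module):
            """Toy classifier standing in for the deep network."""
            def __init__(self, dim=32, classes=10):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, classes))

            def forward(self, x):
                return self.fc(x)

        def train(model, x, y, epochs=20, lr=1e-2):
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()

        def snowball(x_labeled, y_labeled, x_unlabeled, rounds=5, thresh=0.95):
            model = SmallNet(dim=x_labeled.shape[1])
            for _ in range(rounds):
                train(model, x_labeled, y_labeled)
                with torch.no_grad():
                    probs = F.softmax(model(x_unlabeled), dim=1)
                    conf, pseudo = probs.max(dim=1)
                keep = conf >= thresh              # confident sample discovery
                if not keep.any():
                    break
                # Grow ("snowball") the labeled pool with confident pseudo-labels.
                x_labeled = torch.cat([x_labeled, x_unlabeled[keep]])
                y_labeled = torch.cat([y_labeled, pseudo[keep]])
                x_unlabeled = x_unlabeled[~keep]
            return model

    The unsupervised ideas rest on a consistency principle: representations or predictions for transformed views of the same content should agree. The abstract's transformed attention consistency operates across images; the simplified single-image variant below only illustrates the general consistency-loss pattern, with the augmentation argument and KL-based agreement term chosen as assumptions for the sketch.

        def consistency_loss(model, x, augment):
            """Push predictions for two random views of x to agree."""
            p1 = F.log_softmax(model(augment(x)), dim=1)
            p2 = F.softmax(model(augment(x)), dim=1).detach()  # stop-gradient branch
            return F.kl_div(p1, p2, reduction="batchmean")

    Minimizing an agreement term of this kind, alongside or instead of a supervised loss, is what lets a network learn from unlabeled images; the dissertation's methods replace this generic form with attention-level consistency across images and the SAN module.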
    URI
    https://hdl.handle.net/10355/93236
    Degree
    Ph.D.
    Thesis Department
    Electrical and computer engineering (MU)
    Collections
    • Electrical Engineering and Computer Science electronic theses and dissertations (MU)
    • 2021 MU Dissertations - Freely available online

    hosted by University of Missouri Library Systems