
dc.contributor.advisor: Lee, Yugyung, 1960-
dc.contributor.author: Chandrashekar, Mayanka
dc.date.issued: 2020
dc.description: Title from PDF of title page viewed November 5, 2020
dc.description: Dissertation advisor: Yugyung Lee
dc.description: Vita
dc.description: Includes bibliographical references (pages 257-289)
dc.description: Thesis (Ph.D.)--School of Computing and Engineering. University of Missouri--Kansas City, 2020
dc.description.abstract: An essential goal of artificial intelligence is to support the knowledge discovery process, from raw data to knowledge that is useful in decision making. The challenges in this process typically arise for the following reasons. First, real-world data are often noisy, sparse, or derived from heterogeneous sources. Second, it is difficult both to build robust predictive models and to validate them with such real-world data. Third, the `black-box' nature of deep learning models makes their outputs hard to interpret. It is essential to bridge the gap between the models and the decisions they support with something understandable and interpretable. To address this gap, we focus on designing critical representatives of the discovery process, from data to knowledge that can be used to perform reasoning. This dissertation proposes a novel model named Class Representative Learning (CRL), a class-based classifier with the following unique contributions to machine learning, specifically for image and text classification: i) the design of a latent feature vector, the class representative (CR), which represents a class in an abstract embedding space using features extracted by a deep neural network trained on either images or text; ii) parallel zero-shot learning (ZSL) algorithms built on class representative learning; iii) a novel projection-based inferencing method that uses the vector space model to reconcile the dominant differences between seen and unseen classes; iv) a CR-Graph capturing the relationships between CRs, in which a node represents a CR and an edge represents the similarity between two CRs. Furthermore, we designed the CR-Graph model to make the models explainable, which is crucial for decision making.
Although the CR-Graph does not have full reasoning capability, it is equipped with the class representatives and the interdependent network formed through similar neighboring classes. Additionally, semantic and external information are added to the CR-Graph to make its decisions more capable of dealing with real-world data. The automated addition of semantic information to the graph is illustrated with a case study in biomedical research, through ontology generation from text and ontology-to-ontology mapping.
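The inference scheme the abstract describes — a class representative per class in a shared embedding space, with classification by similarity to the nearest CR — can be sketched roughly as follows. This is an illustrative reading, not the dissertation's implementation: the aggregation (mean pooling of deep features) and the similarity measure (cosine) are assumptions, and the function names are hypothetical.

```python
import numpy as np

def class_representatives(features, labels):
    """Compute one class representative (CR) per class as the mean of
    that class's feature vectors (one common formulation; the
    dissertation's exact aggregation may differ)."""
    labels = np.array(labels)
    return {c: features[labels == c].mean(axis=0) for c in set(labels)}

def predict(x, crs):
    """Projection-based inference sketch: assign x to the class whose
    CR has the highest cosine similarity with x."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(crs, key=lambda c: cos(x, crs[c]))
```

In a zero-shot setting, CRs for unseen classes would be obtained without labeled instances (for example, from auxiliary embeddings projected into the same space), after which the same nearest-CR rule applies.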
dc.description.tableofcontents: Introduction -- CRL: Class Representative Learning for Image Classification -- Class Representatives for Zero-shot Learning using Purely Visual Data -- MCDD: Multi-class Distribution Model for Large Scale Classification -- Zero Shot Learning for Text Classification using Class Representative Learning -- Visual Context Learning with Big Data Analytics -- Transformation from Publications to Ontology using Topic-based Assertion Discovery -- Ontology Mapping Framework with Feature Extraction and Semantic Embeddings -- Conclusion -- Appendix A. A Comparative Evaluation with Different Similarity Measures
dc.format.extent: xix, 290 pages
dc.identifier.uri: https://hdl.handle.net/10355/77963
dc.subject.lcsh: Machine learning
dc.subject.lcsh: Natural language processing
dc.subject.lcsh: Data mining
dc.subject.other: Dissertation -- University of Missouri--Kansas City -- Computer science
dc.title: Deep Open Representative Learning for Image and Text Classification
thesis.degree.discipline: Computer Science (UMKC)
thesis.degree.discipline: Telecommunications and Computer Networking (UMKC)
thesis.degree.grantor: University of Missouri--Kansas City
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D. (Doctor of Philosophy)

