
dc.contributor.advisor	Chen, ZhiQiang
dc.contributor.author	Tang, Shimin
dc.date.issued	2021
dc.date.submitted	2021 Spring
dc.description	Title from PDF of title page viewed July 19, 2021
dc.description	Dissertation advisor: ZhiQiang Chen
dc.description	Vita
dc.description	Includes bibliographical references (pages 153-164)
dc.description	Thesis (Ph.D.)--School of Computing and Engineering and Department of Mathematics and Statistics. University of Missouri--Kansas City, 2021
dc.description.abstract	Urban scenes are visually complex: they contain physical objects and their spatial-temporal dynamic processes, including natural disasters and human-centered socio-economic activities. Image-based understanding of urban scenes has been actively pursued in the computer and machine vision communities, as witnessed by the abundance of high-level vision methods that have emerged in recent years. However, limited effort has been directed at the specific complex scenes of structural damage in the civil structures and infrastructure systems that form the backbone of our socio-economic activities. These damage-related complex scenes can be attributed to natural and technological hazards, and many are found in the aftermath of natural disasters. Many research challenges remain, not only in understanding such scenes (e.g., classifying them and assigning categorical labels) but also in quantifying them in engineering-meaningful terms. In this dissertation, I tackled two sets of research problems, enabled by the latest advances in optical imaging and mobile electronics: (1) mobile-imaging-based structural damage detection and disaster-scene understanding; and (2) unmanned aerial vehicle (UAV)-enabled real-time hyperspectral imaging and learning of structural damage.

Two specific situations are considered for mobile images. First, in engineering practice, even with abundant imaging capability, semantically labeled engineering-damage datasets are either too ad hoc or lack scene complexity. By exploiting the fact that in structural surface scenes the backgrounds are often highly complex while the cracks themselves are morphologically simple, I proposed a novel framework for learning from very small semantic datasets (e.g., only 10 to 20 images). Within this framework, a scale-space-theoretic data augmentation technique is proposed, and deep transfer learning is adopted to cope with scene complexity. Several convolutional neural network (CNN) models based on the seminal Faster R-CNN architecture are tested and achieve high accuracy. In the second situation, I explored the possibility of learning from crowd-sourced mobile images to understand complex disaster scenes resulting from different natural hazards. A novel semantic disaster-scene dataset is created in this work, partially from Internet-searched images. A set of bounding-box CNN deep learning models is tested to learn two engineering-meaningful disaster-mechanics attributes: hazard type, the causal attribute, and damage level, the consequential attribute. Several vital insights are revealed, including that hazard types are more readily classified and localized in urban disaster scenes, whereas damage levels are much harder to classify and localize in images.

The second arena of this dissertation explores the untapped potential of real-time hyperspectral imaging. Traditional hyperspectral remote sensing involves costly orbital or space-borne campaigns. Nonetheless, the spectral dimension of the pixels in a hyperspectral cube provides signature information for identifying material types, which is instrumental in understanding the types of structural damage in images. To resolve this dilemma, I developed a first-of-its-kind real-time ('snapshot') hyperspectral remote sensing platform based on a low-cost unmanned aerial vehicle. A new challenge then arises as hyperspectral imaging becomes inexpensive to operate in the field: the traditional norm of labeling by expert eyes is disrupted, because human perception cannot interpret the high spectral dimensionality of hyperspectral pixels, which ultimately leads to an extreme imbalance of labeled against unlabeled data. To address this challenge, I developed a unique semi-supervised deep learning framework, and I tested and verified its performance with models learned from datasets with different ratios of labeled to unlabeled data. Empirically optimal ratios are suggested, and the resulting framework can facilitate the application of UAV-based real-time hyperspectral imaging and machine-learning-based detection in many engineering fields.
dc.description.tableofcontents	Chapters 1-9
dc.format.extent	xiii, 165 pages
dc.identifier.uri	https://hdl.handle.net/10355/85285
dc.subject.lcsh	Hyperspectral imaging
dc.subject.lcsh	Drone aircraft
dc.subject.lcsh	Machine learning
dc.subject.lcsh	Natural disasters -- Remote sensing
dc.subject.lcsh	Buildings -- Remote sensing
dc.subject.other	Dissertation -- University of Missouri--Kansas City -- Engineering
dc.subject.other	Dissertation -- University of Missouri--Kansas City -- Mathematics
dc.title	Disaster and infrastructure scene understanding
thesis.degree.discipline	Electrical and Computer Engineering (UMKC)
thesis.degree.discipline	Mathematics (UMKC)
thesis.degree.grantor	University of Missouri--Kansas City
thesis.degree.level	Doctoral
thesis.degree.name	Ph.D. (Doctor of Philosophy)

