Target concept learning from ambiguously labeled data
The multiple instance learning problem addresses the case where training data comes with label ambiguity, i.e., the learner has access only to inaccurately labeled data. For example, in target detection from remotely sensed hyperspectral imagery, targets are usually sub-pixel, and the ground-truth target locations recorded from GPS coordinates can drift by several meters. The target locations mapped onto the hyperspectral image are therefore inaccurate, and training a supervised algorithm or extracting target signatures from such labels is intractable. This dissertation comprehensively investigates target concept learning from ambiguously labeled data, reviewing and proposing several methods that learn a set of either representative or discriminative target concepts. The multiple instance hybrid estimator (MI-HE) maximizes the response of a hybrid detector under a generalized mean framework and estimates a set of discriminative target concepts. MI-HE adopts a linear mixture model and alternates between estimating a set of discriminative target and non-target signatures and solving a sparse unmixing problem. MI-HE preserves bag-level label information for each positive bag and is able to estimate a target concept that is shared across the positive bags. Furthermore, MI-HE can learn multiple signatures to address signature variability. After the target concept is learned, a signature-based detector can be applied for target detection. The presented algorithms were tested in many applications, including simulated and real hyperspectral target detection, heartbeat characterization from ballistocardiogram signals, and tree species classification from remotely sensed data. They proved effective in learning high-quality target signatures and consistently outperformed state-of-the-art comparison algorithms.
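The hybrid detector response and its generalized-mean aggregation over a bag can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: plain least squares stands in for MI-HE's sparsity-constrained unmixing step, and the names `t` (candidate target signature), `B` (non-target signatures), and `p` (generalized-mean exponent) are illustrative.

```python
import numpy as np

def hybrid_response(x, t, B, eps=1e-6):
    # Ratio of the background-only reconstruction error to the
    # target-plus-background reconstruction error: large when the
    # instance cannot be explained without the target signature.
    # Plain least squares stands in for the sparse unmixing step.
    full = np.column_stack([t, B])
    a_full, *_ = np.linalg.lstsq(full, x, rcond=None)
    a_bg, *_ = np.linalg.lstsq(B, x, rcond=None)
    e_full = np.linalg.norm(x - full @ a_full)
    e_bg = np.linalg.norm(x - B @ a_bg)
    return e_bg / (e_full + eps)

def bag_response(bag, t, B, p=5.0):
    # Generalized (power) mean over instance responses: as p grows it
    # approaches the max, so one target-like instance is enough to
    # give a positive bag a high response.
    d = np.array([hybrid_response(x, t, B) for x in bag])
    return np.mean(d ** p) ** (1.0 / p)
```

In this sketch, maximizing the bag-level response over `t` (while also updating the non-target signatures in `B`) would drive the alternating optimization that the abstract describes, rewarding a signature that explains at least one instance in each positive bag but none in the negative bags.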
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.