dc.contributor.advisor | Lee, Yugyung, 1960- | |
dc.contributor.author | Mannava, Guru Teja | |
dc.date.issued | 2016 | |
dc.date.submitted | 2016 Spring | |
dc.description | Title from PDF of title page, viewed on October 24, 2016 | |
dc.description | Thesis advisor: Yugyung Lee | |
dc.description | Vita | |
dc.description | Includes bibliographical references (pages 89-91) | |
dc.description | Thesis (M.S.)--Department of Computing and Engineering. University of Missouri--Kansas City, 2016 | |
dc.description.abstract | With the increasing popularity and affordability of smartphones, there is high demand for adding
machine-learning engines to smartphones. However, machine learning on smartphones is typically
not feasible because of the heavy computation required to process large-scale data with
machine learning. Conventional machine-learning systems also do not naturally or efficiently support
several features that are essential for large-scale streaming data. To overcome these limitations, we propose the iHear engine, which aims to support lightweight machine learning through collaboration between the cloud and smartphones. The contributions of this
thesis are summarized as follows:
1) The iHear system architecture, which achieves high performance with parallel and distributed
learning by separating cloud-based learning from smartphone-based recognition;
2) A context-aware model that improves the accuracy and efficiency of audio recognition
and sound enhancement;
3) Real-time audio recognition that preserves data consistency;
4) An intelligent hearing app for iOS devices, developed for effective and dynamic audio
recognition and enhancement that adapts to the user's context to provide a better hearing
experience.
The efficiency and effectiveness of the iHear engine, in terms of its continuous-learning
capability, were evaluated on Apache Spark (MLlib) with audio recognition and filtering of streaming
data. We conducted experiments in multiple contexts of household traffic, offices, emergencies,
and nature, with real data collected from smartphones. Our experimental results show that the
proposed framework for lightweight machine learning with the context-aware model is effective
and efficient in real-time processing, achieving a high accuracy rate of 90%, which is 20% higher than
traditional approaches. | eng |
dc.description.tableofcontents | Introduction -- Background and related work -- Proposed framework -- Implementation and experiment setup -- Evaluations -- Conclusion and future work | |
dc.format.extent | xii, 92 pages | |
dc.identifier.uri | https://hdl.handle.net/10355/53383 | |
dc.publisher | University of Missouri–Kansas City | eng |
dc.subject.lcsh | Machine learning | |
dc.subject.lcsh | Smartphones | |
dc.subject.lcsh | Cloud computing | |
dc.subject.lcsh | Hearing aids | |
dc.subject.other | Thesis -- University of Missouri--Kansas City -- Computer science | |
dc.title | iHear – Lightweight Machine Learning Engine with Context Aware Audio Recognition Model | eng |
dc.type | Thesis | eng |
thesis.degree.discipline | Computer Science (UMKC) | |
thesis.degree.grantor | University of Missouri--Kansas City | |
thesis.degree.level | Masters | |
thesis.degree.name | M.S. | |