Multi-modal emotion detection using deep learning for interpersonal communication analytics

Abstract

In recent years, deep learning technologies have been increasingly applied to generate meaningful data for advanced research in the humanities and sciences. Interpersonal communication skills are crucial to success in science, and such skills, whether in a small-group learning environment or a large-group setting, remain valuable in any future workplace. In this study, we aim to analyze mutual communication and interaction between speakers and audiences from a broad perspective, including emotional and cognitive interactions, in TED talk and classroom settings. We are mainly interested in recognizing facial and gestural emotions captured in such contexts. More specifically, we propose a multi-modal emotion detection approach covering facial attributes, e.g., facial sentiment, gender, age, ethnicity, and hairstyle, as well as gesture expressions, e.g., sitting, standing, raised hands, folded hands, and crossed legs. The real-time feedback that the proposed system provides on individual or group communication can be used effectively to improve speakers' communication skills.
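
The abstract does not include implementation details, but the two-stream design it describes can be illustrated with a short sketch. The following Python/PyTorch example is a hypothetical illustration, not the thesis code: the `TinyBranch` module, the label lists, and the fusion rule in `analyze_frame` are assumptions chosen only to show how per-frame facial and gesture predictions might be combined into a real-time feedback record.

```python
# Minimal sketch (not the thesis implementation) of a two-stream
# multi-modal pipeline: one branch predicts facial sentiment from a
# face crop, the other predicts gesture classes from a body crop, and
# the per-frame outputs are merged into a single feedback record.
# Class lists, model shapes, and the fusion rule are illustrative
# assumptions, not the author's actual design.
import torch
import torch.nn as nn

FACE_SENTIMENTS = ["negative", "neutral", "positive"]   # assumed labels
GESTURES = ["sitting", "standing", "raised hands",
            "folded hands", "crossed legs"]              # from the abstract

class TinyBranch(nn.Module):
    """Stand-in CNN branch; a real system would use a pretrained backbone."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

face_branch = TinyBranch(len(FACE_SENTIMENTS))
gesture_branch = TinyBranch(len(GESTURES))

def analyze_frame(face_crop: torch.Tensor, body_crop: torch.Tensor) -> dict:
    """Run both branches on one frame and fuse their predictions."""
    with torch.no_grad():
        face_probs = face_branch(face_crop).softmax(dim=-1)
        gest_probs = gesture_branch(body_crop).softmax(dim=-1)
    return {
        "sentiment": FACE_SENTIMENTS[int(face_probs.argmax())],
        "gesture": GESTURES[int(gest_probs.argmax())],
        # Toy fused cue: positive sentiment plus an active posture.
        "engaged": bool(face_probs[0, 2] > 0.5 and gest_probs[0, 2] > 0.5),
    }

# Example: random 64x64 crops stand in for detected face/body regions.
print(analyze_frame(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)))
```

In a deployed system, each branch would presumably be a pretrained network (a face-attribute CNN and a pose-based gesture classifier) fed by a person detector, with the fusion rule tuned to the communication cues of interest.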

Table of Contents

Introduction -- Background and related work -- Proposed framework -- Conclusion and future work

Degree

M.S. (Master of Science)
