Automated agent systems are becoming increasingly human-like. A major component of human-like agents is emotional intelligence, in which communication plays a central role. Emotion-aware communication means understanding the speaker's emotion and, when appropriate, responding to the user emotionally. Since most automated agents use voice as their primary communication medium, speech emotion classification can provide the understanding and emotional speech synthesis the response. In this project, advanced machine learning techniques such as deep learning and reinforcement learning will be used to improve the performance and adaptability of speech emotion classification, and generative models will be used to synthesize emotional speech as a response.
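To make the classification side of the pipeline concrete, the sketch below shows the final stage of a speech emotion classifier: mapping an already-extracted acoustic feature vector to a probability distribution over emotion labels via a softmax layer. Everything here is illustrative rather than the project's actual method: the emotion label set, the assumption of 13 MFCC-style features, and the randomly initialised weights are all placeholders standing in for a trained deep network.

```python
import numpy as np

# Hypothetical emotion label set (a real project would define its own).
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(features, weights, bias):
    """Map an utterance-level feature vector to a distribution over EMOTIONS.

    In a full system, `features` would come from a front-end such as MFCC
    extraction, and `weights`/`bias` from training a deep network; here they
    are placeholders for illustration.
    """
    probs = softmax(features @ weights + bias)
    return EMOTIONS[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
n_features = 13  # assumed feature dimension (e.g. 13 MFCC means)

# Random stand-ins for a trained model's parameters and an input utterance.
W = rng.normal(size=(n_features, len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))
x = rng.normal(size=n_features)

label, probs = classify(x, W, b)
print(label, probs.round(3))
```

With trained parameters in place of the random ones, the same interface would return the predicted emotion for each utterance, which downstream components could use to condition an emotional speech response.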