School of Computer Science

Automatic depression and facial expression recognition

Date(s)
Wednesday 17th July 2013 (13:00-14:00)
Description

Talk 1 (1 pm): 

Title: A multimodal approach to automatic depression analysis

Abstract: Depression is a severe mental health disorder with high societal costs. Current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. The long-term goal of this research is to develop assistive technologies that support clinicians and sufferers in diagnosis and in monitoring treatment progress in a timely and easily accessible format. In the first phase, the aim is to develop a diagnostic aid using affective sensing approaches. Starting from the proposition, well established in auditory-visual speech processing, that the auditory and visual channels of human communication complement each other, this hypothesis is investigated for depression analysis. For the video data, body movements (including head/face and shoulders) and intra-facial muscle movements are analysed. In addition, the contributions of head movements and body gestures to depression analysis are evaluated and compared with the face-only case. Various audio features are also computed. A bag of visual features and a bag of audio features are generated separately, and different fusion methods are compared at feature level, score level and decision level. Current results on the Black Dog Institute pilot study dataset show significant agreement between the proposed multimodal affective sensing approach and clinical opinion.
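
For readers unfamiliar with the fusion terminology, the sketch below contrasts the three strategies on synthetic data. The features, classifier and combination weights are placeholders chosen for illustration, not the speaker's actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a bag of visual features and a bag of audio
# features: one fixed-length histogram per recording (values are synthetic).
n, d_vis, d_aud = 200, 50, 30
labels = rng.integers(0, 2, n)  # depressed vs. control (toy labels)
X_vis = rng.random((n, d_vis)) + 0.3 * labels[:, None]
X_aud = rng.random((n, d_aud)) + 0.3 * labels[:, None]

# Feature-level fusion: concatenate both bags, train a single classifier.
clf_feat = LogisticRegression(max_iter=1000).fit(np.hstack([X_vis, X_aud]), labels)

# Score-level fusion: one classifier per modality, average the probabilities
# (the equal 0.5/0.5 weights are an assumption for this sketch).
clf_v = LogisticRegression(max_iter=1000).fit(X_vis, labels)
clf_a = LogisticRegression(max_iter=1000).fit(X_aud, labels)
fused_scores = 0.5 * clf_v.predict_proba(X_vis)[:, 1] + 0.5 * clf_a.predict_proba(X_aud)[:, 1]

# Decision-level fusion: combine hard decisions, here with a simple OR rule.
fused_decision = (clf_v.predict(X_vis) | clf_a.predict(X_aud)).astype(int)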

Bio: Jyoti Joshi is a second-year PhD student at the University of Canberra, Australia. Her research interests are in pattern recognition, computer vision and machine learning, with a strong focus on designing better affective sensing techniques.

Talk 2 (1:30 pm): 

Title: Emotion Recognition In the Wild: From Single to Groups

Abstract: This project explores methods for emotion analysis in practical environments. The talk is divided into two parts. The hypothesis of Part A is that close-to-real-world data can be extracted from movies using a semi-automatic framework; emotion analysis in movies is then a stepping stone towards analysis in the real world. Part B of the talk focuses on formulating a framework for mood estimation of groups based on social context information. The main contributions are: a) automatic frameworks for group mood; b) social features, which compute weights on expression intensities; c) an automatic face occlusion intensity detection method; and d) an 'in the wild' labelled database containing images with multiple subjects from different scenarios. The experiments show that global and local context provide useful information for theme expression analysis, with results similar to human perception.
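
As a rough illustration of the group-mood idea, the sketch below estimates a group's mood as a context-weighted average of per-face expression intensities. The weighting by face size and distance from the image centre is an assumption made for this example, not the speaker's published social features.

import numpy as np

def group_mood(intensities, face_sizes, centre_dists):
    # Assumed social-context weighting: larger, more central faces
    # contribute more to the group estimate (illustrative choice only).
    w = face_sizes / (1.0 + centre_dists)
    w = w / w.sum()
    return float(np.dot(w, intensities))

# Toy group of four faces: per-face smile intensities in [0, 1], relative
# face sizes, and normalised distances from the image centre.
intensities = np.array([0.9, 0.2, 0.6, 0.4])
sizes = np.array([1.5, 0.5, 1.0, 0.8])
dists = np.array([0.1, 0.9, 0.4, 0.6])
print(group_mood(intensities, sizes, dists))  # weighted group mood estimate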

Bio: Abhinav Dhall is a final-year PhD student at the Australian National University. His research interests include affective computing, computer vision and machine learning. His PhD project is supported by AusAID's Australian Leadership Award. His homepage is http://users.cecs.anu.edu.au/~adhall
