Researchers at the University of Nottingham have shown how facial expression and head motion data can help clinicians with the difficult task of diagnosing Autism Spectrum Disorder (ASD), Attention Deficit Hyperactivity Disorder (ADHD), both conditions, or neither.
A team of researchers from the School of Computer Science (Shashank Jaiswal and Dr Michel Valstar) and the Institute of Mental Health (Prof David Daley) created an algorithm that can distinguish between people from each of these four groups. The team conducted recordings at the Asperger Clinic in Nottingham, led by Dr Alinda Gillot. Video and depth recordings were collected using a Microsoft Kinect device while participants were shown a number of 'strange stories' and answered questions about them. The hypothesis was that people with ASD would not pick up on some of the social or emotional oddities presented in the strange stories, and that people presenting ADHD symptoms would have difficulty concentrating on all 12 stories.
Using a novel face tracker and facial muscle action detector based on dynamic deep learning, a number of facial actions were detected. In addition, head speed and aggregate head displacement were calculated from the tracked facial points. After feature selection, these features allowed the four groups to be separated with accuracies of up to 96%.
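The head-motion descriptors mentioned above can be illustrated with a minimal sketch. The function below is hypothetical (the paper's exact feature definitions are not given here): it assumes 2D point tracks from a face tracker, takes the per-frame centroid of the tracked points as the head position, and derives a mean speed and an aggregate displacement from frame-to-frame movement.

```python
import numpy as np

def head_motion_features(landmarks, fps=30.0):
    """Simple head-motion descriptors from tracked facial points.

    landmarks: array of shape (n_frames, n_points, 2) -- 2D point tracks.
    Returns (mean head speed in units/second, aggregate displacement).
    Hypothetical helper; the study's actual features may differ.
    """
    # Head position per frame: centroid of the tracked facial points.
    centroids = landmarks.mean(axis=1)                      # (n_frames, 2)
    # Frame-to-frame displacement of the head centroid.
    steps = np.linalg.norm(np.diff(centroids, axis=0), axis=1)
    aggregate_displacement = steps.sum()                    # total path length
    mean_speed = steps.mean() * fps                         # units per second
    return mean_speed, aggregate_displacement

# Synthetic example: 4 tracked points drifting 1 unit right per frame.
n_frames, n_points = 5, 4
base = np.random.RandomState(0).rand(n_points, 2)
tracks = np.stack([base + [t, 0.0] for t in range(n_frames)])
speed, total = head_motion_features(tracks, fps=30.0)
print(total, speed)  # 4 unit-steps total; 1 unit/frame at 30 fps
```

In the study, descriptors like these were combined with detected facial actions, then reduced by feature selection before classification.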
This research won’t take over from doctors any time soon, says Valstar in an article that appeared in New Scientist. “We are creating diagnostic tools that will speed up the diagnosis in an existing practice, but we do not believe we can remove humans. Humans add ethics and moral values to the process.”
Posted on Friday 6th January 2017