The Use and Potential of Artificial Intelligence for Supporting Clinical Observation of Child Behaviour

Professor Helen Minnis, Institute of Health and Wellbeing at University of Glasgow, and Professor Alessandro Vinciarelli, School of Computing Science at University of Glasgow, deliver a video abstract on their co-authored CAMH journal Original Article ‘The use and potential of artificial intelligence for supporting clinical observation of child behaviour’.

Authors: Helen Minnis, Alessandro Vinciarelli, Huda Alsofyani

First published: 09 May 2024

Paper: https://doi.org/10.1111/camh.12714

ACAMH Members can read the full paper.

If you are not an ACAMH Member, now is a great time to join from just £5! Take a look at the different levels of membership on offer. Don’t forget, as a charity, any surplus made is reinvested as we work towards our vision of ‘Sharing best evidence, improving practice’, and our mission to ‘Improve the mental health and wellbeing of young people aged 0-25’.

Professor Helen Minnis

Helen Minnis is Professor of Child and Adolescent Psychiatry at the University of Glasgow. She has had a longstanding clinical and research focus on the psychiatric problems of abused and neglected children. Currently her focus is on intervention research, including a randomised controlled trial of an infant mental health service for young children in foster care and a randomised controlled trial of Dyadic Developmental Psychotherapy for primary school-aged children in adoptive or foster placements. She is also conducting behavioural genetic research focussed on the role of abuse and neglect and its overlap with neurodevelopment across the life-course. She has collaborations with colleagues at the Institute of Psychiatry, Psychology and Neuroscience at King’s College London, the Universities of Aalborg and Aarhus, Denmark and with the Gillberg Neuropsychiatry Centre, Gothenburg, Sweden. (Bio from University of Glasgow)

Professor Alessandro Vinciarelli

I am a Full Professor at the School of Computing Science and an Associate Academic of the Institute of Neuroscience and Psychology.

My main research interest is Social Signal Processing, the computing domain aimed at the modelling, analysis and synthesis of nonverbal communication in human-human and human-machine interactions. In particular, my work aims at developing computational models capable of inferring social and psychological phenomena (e.g., personality or conflict) from nonverbal behavioural cues (e.g., facial expressions and tone of voice) automatically detected in recordings of human behaviour (e.g., videos) captured with multiple sensors (e.g., cameras and accelerometers). In simple terms, I help machines to understand the social landscape in the same way as humans do. The goal is to make machines socially intelligent, i.e., capable of participating seamlessly in social interactions.

Before joining the University of Glasgow in 2010, I was a PhD student and Senior Researcher at the Idiap Research Institute in Switzerland (1999-2009) and a System Developer for Accenture. I have published more than 150 scientific works (see my Google Scholar profile) and have been PI and co-PI of 15 national and international projects funded by the European Commission, the Swiss National Science Foundation, the Engineering and Physical Sciences Research Council, The Data Lab, the UK Research and Innovation agency and the Swiss Commission for Innovation and Technology. Furthermore, I am the co-founder of Klewel, a knowledge management company recognised with several national and international awards, and scientific advisor of Neurodata Lab, a top Emotion AI company. Last, but not least, I have chaired and co-chaired over 25 international scientific events; in particular, I was General Chair of the IEEE International Conference on Social Computing in 2012 and of the ACM International Conference on Multimodal Interaction in 2017. (Bio and Image from University of Glasgow)

Transcript

[00:00:14.826] Professor Helen Minnis: Hi, I’m Helen Minnis.

[00:00:16.612] Professor Alessandro Vinciarelli: Hello, I am Alessandro Vinciarelli.

[00:00:19.967] Professor Helen Minnis: And we’re here to talk about our new paper in Child and Adolescent Mental Health, called “The Use and Potential of Artificial Intelligence for Supporting Clinical Observation of Child Behaviour.” And Alessandro introduced me to the whole concept of artificial intelligence. I’m a Child and Adolescent Psychiatrist and I was attracted by the idea that – you know, if I remember rightly, Alessandro, you always said that it was “a way for the computer to take over the boring, repetitive tasks from the Clinician,” which sounded good to me, and “freeing the Clinician up to make the clinical decisions that only a human really can make.” So, how does artificial intelligence work in this context?

[00:01:08.589] Professor Alessandro Vinciarelli: Oh, artificial intelligence lends itself very well to psychiatry, because a lot of psychiatric work is about observing the behaviour of the patient, and behaviour is something that we can see and hear, right? It’s accessible to our senses, which means that it is, potentially, accessible to ordinary sensors, like microphones and cameras.

And once you have that data, then you can analyse it, extract information from it, and artificial intelligence can automatically map it into the judgment of Clinicians.

So, as a matter of fact, artificial intelligence can be used as a kind of statistical bridge between the observations that we can make about behaviour (facial expressions, tone of voice, gestures, movements, etc.) and the judgment of Clinicians that can say something about the condition of a child, right? And that’s how artificial intelligence can really work very well in this context.
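To make the ‘statistical bridge’ idea above concrete, here is a minimal, hypothetical Python sketch (not the authors’ actual pipeline): simulated behavioural features for a set of sessions are mapped to simulated clinician judgments by a standard supervised classifier. All feature names, labels and numbers are invented for illustration.

```python
# Hypothetical sketch: behavioural measurements -> clinician judgments.
# Everything here is simulated; it is not the model described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row stands in for one child's session: invented features such as
# mean doll proximity, pitch variability and gesture rate.
X = rng.normal(size=(60, 3))

# Clinician judgments used as training labels (1 = secure, 0 = insecure),
# simulated here so that they loosely depend on the first feature.
y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)

# The "statistical bridge": a supervised model learns the mapping from
# measurable behaviour to the clinicians' judgments.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", round(float(scores.mean()), 2))
```

In a real system the features would be extracted from the recorded sessions and the labels would come from trained raters; the sketch only shows the shape of the mapping.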

[00:02:17.106] Professor Helen Minnis: Yeah, I mean, it was fascinating for me, because we used it to create a rating system for an attachment measure, didn’t we? I had worked for years with the Manchester Child Attachment Story Task, which is a well-validated measure of attachment in children aged four to about eight or nine. But the problem is that it’s really laborious to train in and to rate, so what we did was turn it into a simple computer game, where the computer told the beginning of an attachment-related story with a stress in it. So, for example, something like, “Alessandro doll is out playing in the garden with mummy doll, playing football, and suddenly, he falls and he hurts his knee.” So, you create a bit of stress and then you hand the dolls to the child and say, “Show me and tell me what happens next.”

Our colleague, Stephen Brewster, developed a lovely system with tangible dolls containing movement sensors and, if I remember rightly, the algorithm that you developed to rate each child’s task was based on hand movements. And what was fascinating for me was that it captured proximity seeking between mummy doll and child doll. Have I got that right, Alessandro?

[00:03:51.886] Professor Alessandro Vinciarelli: Yes, that is exactly the point of artificial intelligence. We know that the inner conditions of children, in this case secure attachment, leave physically detectable traces in the behaviour of the child. In this particular case it was the attempt to get physical proximity between mummy doll and child doll, and that can be detected, that can be measured, right? It’s something that corresponds to a physically measurable trace in the data, and that is exactly what artificial intelligence is about. If there is a physical trace that accounts for the condition you are interested in detecting, then it is possible to map the measurements of that trace into the judgment of the Clinicians.

And we started with gestures, because there was this very high-level semantic information of physical proximity. But then we transferred the same approach to a lot of other traces that are maybe less semantic but still very important, like the tone of voice, the lexical choices, etc. Combining all of that together allowed us to detect potentially insecure children with good performance, and to do it quickly, without the manual intervention of operators, Medical Doctors, etc.

So, the interesting thing is that you can reduce the budget for Doctors. You cannot replace Doctors, because the decision, ultimately, has to be made by a human. But at least the easy cases can be detected and processed automatically, right?

And it would be quite a step forward because, at the moment, the task takes a lot of time, so only a limited number of children actually get analysed and observed in terms of attachment, right? Automation would be a major step forward, because it would allow large-scale screening of the population.
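As a purely illustrative example of the ‘physically measurable trace’ described above, the hypothetical Python function below summarises how close the child doll stays to the mummy doll, given movement-sensor positions. The doll names, units and nearness threshold are assumptions for the sketch, not the features used in the study.

```python
# Hypothetical proximity-seeking features from movement-sensor traces (illustrative only).
import numpy as np

def proximity_features(mummy_xy: np.ndarray, child_xy: np.ndarray, near_cm: float = 10.0) -> dict:
    """Summarise how close the child doll is kept to the mummy doll during a session.

    mummy_xy, child_xy: arrays of shape (T, 2) with sensor positions in cm per time step.
    """
    dist = np.linalg.norm(mummy_xy - child_xy, axis=1)  # distance at each time step
    return {
        "mean_distance_cm": float(dist.mean()),
        "min_distance_cm": float(dist.min()),
        "proportion_within_threshold": float((dist < near_cm).mean()),
    }

# Toy example: the child doll drifts towards the mummy doll over the session.
t = np.linspace(0.0, 1.0, 100)
mummy = np.zeros((100, 2))
child = np.stack([30.0 * (1.0 - t), np.zeros(100)], axis=1)
print(proximity_features(mummy, child))
```

Features of this kind, together with voice and lexical cues, would then feed a classifier like the one sketched earlier.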

[00:06:07.136] Professor Helen Minnis: I mean, I think for me, that’s the big thing, because, you know, attachment theory has been so important in research, but it just hasn’t quite translated into clinical practice. So, I think it does give us some real opportunities. But I think one of the problems is that, often as Clinicians, we’re quite concerned about the ethics of AI. So, I don’t know if you have any comments on that.

[00:06:29.601] Professor Alessandro Vinciarelli: Absolutely. So, it is very important to assess the potential harm that can come from applications like this, and in particular to remember that AI in itself doesn’t have intelligence, it doesn’t have ethics. The point is to monitor the application and make sure, first of all, that there is no bias in the human judgments that were used to train the artificial intelligence, because artificial intelligence learns from examples that are provided; in this case, it learns from Clinicians that provide judgments about a certain number of children. So, it’s important to ensure, for example, that there is no bias, and this is something that we measured on our data to make sure that there was no bias related to age or to gender in particular, right? So, this is something that can be measured.
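As a hedged illustration of the kind of bias check described here, the sketch below compares the accuracy of a simulated model across gender groups; a large gap between groups would flag a potential bias. The groups, labels and error rate are all invented for illustration.

```python
# Hypothetical per-group bias check (all data simulated for illustration).
import numpy as np

rng = np.random.default_rng(1)
n = 200
group = rng.choice(["girls", "boys"], size=n)                 # demographic attribute
y_true = rng.integers(0, 2, size=n)                           # simulated clinician labels
y_pred = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)   # simulated model output

# Compare accuracy across groups; similar values suggest no obvious gender bias.
for g in ("girls", "boys"):
    mask = group == g
    accuracy = float((y_true[mask] == y_pred[mask]).mean())
    print(f"{g}: accuracy = {accuracy:.2f} (n = {int(mask.sum())})")
```

The same comparison could be repeated over age bands, mirroring the age check mentioned above.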

And, very importantly, it is important that the Clinicians who use these technologies do not over-rely on them and do not take their outcomes without critical thinking. So, in a sense, it will be important that Clinicians, and users of artificial intelligence in general, develop a kind of theory of mind about artificial intelligence, right? They try to make sense of the decision.

[00:07:46.590] Professor Helen Minnis: Yes.

[00:07:47.666] Professor Alessandro Vinciarelli: And in most cases, actually, it is just to confirm and say yes, the decision makes sense, but always keep a little bit of attention and care, and do not over-rely on, or uncritically accept, whatever the outcome is, right? Because these technologies make mistakes, right?

[00:08:07.665] Professor Helen Minnis: Yes, and I guess that’s the same as any tool that we use in child mental health, and in fact, any tool that we use in health.

[00:08:14.638] Professor Alessandro Vinciarelli: Yeah.

[00:08:16.238] Professor Helen Minnis: You know, that we just – you know, we don’t allow it to take away from our own critical thinking. And we realise that it’s not a human and that it has its limits. And it has its biases.

[00:08:28.486] Professor Alessandro Vinciarelli: Absolutely.

[00:08:29.053] Professor Helen Minnis: So, you know, from that point of view, I, personally, see huge potential, and particularly in child mental health, with younger children. I’m really interested in infant mental health and of course, babies and very young children, they can’t tell us exactly how they’re feeling, and their behaviour is often symptomatic of how they’re feeling. So, anything that can actually help us to be objective about that, I think has huge potential, as long as we keep that critical thinking.

[00:09:00.726] Professor Alessandro Vinciarelli: Yes, absolutely, yes, yes.

[00:09:03.376] Professor Helen Minnis: And I think that’s probably…

[00:09:04.086] Professor Alessandro Vinciarelli: And that…

[00:09:04.086] Professor Helen Minnis: …a good way to end it, yeah.

[00:09:04.680] Professor Alessandro Vinciarelli: Yeah, AI is something that can help to increase efficiency, but it is certainly not something that can help to increase effectiveness.

So, it’s important not to develop the illusion that it can work better than a Doctor, right? It can make the Doctors faster. It can make the Doctors more efficient. It can reduce the time dedicated to the more tedious and repetitive parts of the job, but ultimately, it is always the Doctor that makes the diagnosis, right? So, the…

[00:09:33.622] Professor Helen Minnis: Or the Psychologist or the Speech and Language Therapist. Absolutely, it’s up to us humans, with our professional expertise, to make the decisions in the end, but this could be a really important tool. Thank you so much.

[00:09:44.572] Professor Alessandro Vinciarelli: Thanks to you.
