The Hierarchy of Evidence: Single-Case Experimental Designs and CBT Interventions for Anxiety

You can listen to this podcast directly on our website or on the following platforms: SoundCloud, iTunes, Spotify, CastBox, Deezer, Google Podcasts, Podcast Addict, JioSaavn, Listen Notes, Radio Public, and Radio.com (not available in the EU).


In this Papers Podcast, Dr. Tom Cawthorne and Professor Roz Shafran discuss their JCPP Advances paper ‘Do single-case experimental designs lead to randomised controlled trials of cognitive behavioural therapy interventions for adolescent anxiety and related disorders recommended in the National Institute of Clinical Excellence guidelines? A systematic review’ (https://doi.org/10.1002/jcv2.12181).

There is an overview of the paper, methodology, key findings, and implications for practice.

Discussion points include:

  • How the single-case experimental design (SCED) approach works and insight into the construct of the hierarchy of evidence.
  • How the review was conducted and why they focused on adolescent anxiety.
  • Adolescents as an under-researched population and the practical challenges around the SCED design.
  • The evidence that the SCED design can be a helpful approach and can provide high-quality research evidence.
  • The implications for researchers and research policymakers as well as CAMH professionals.
  • Could using SCEDs more effectively lead to future NICE guidelines better representing the adolescent population?
  • The recommendations that emerge from the paper.

In this series, we speak to authors of papers published in one of ACAMH’s three journals. These are The Journal of Child Psychology and Psychiatry (JCPP); The Child and Adolescent Mental Health (CAMH) journal; and JCPP Advances.

#ListenLearnLike


Dr. Tom Cawthorne

Tom Cawthorne is the Senior Clinical Psychologist in the National Conduct Adoption and Fostering Team within the National & Specialist CAMHS at the Maudsley. Prior to this he completed his Doctorate in Clinical Psychology at Royal Holloway University on the Development and Preliminary Evaluation of CBT for Chronic Loneliness in Young People. This was supervised by Professor Roz Shafran, Dr Sophie Bennett and Dr Anton Käll and included a Single-Case Experimental Design (SCED) in addition to this systematic review. Tom’s clinical and research interests include the development and implementation of evidence-based interventions for children and adolescents, with a particular focus on groups of young people who are often unable to access effective support, including those who experience chronic loneliness, are adopted or fostered, or present with complex behavioural difficulties.

Professor Roz Shafran

Roz Shafran is the Professor of Translational Psychology at the UCL Great Ormond Street Institute of Child Health and Honorary Consultant Clinical Psychologist at Great Ormond Street Hospital. Her clinical research interests include the development, evaluation, dissemination and implementation of evidence-based psychological treatments across the age range. She founded the Charlie Waller Institute of Evidence-Based Psychological Treatment in 2007 and co-founded Bespoke Mental Health. In addition to publishing over 350 academic clinical research articles, she has co-authored and co-edited four self-help books, the most recent being ‘How to Cope When Your Child Can’t: Comfort, help and hope for parents’. She is the recipient of a number of awards, including the Positive Practice ‘Making a Difference’ Award, the British Psychological Society Award for Distinguished Contributions to Psychology in Practice, and the Marsh Award for Mental Health for research that has made a difference to clinical practice.

Transcript

[00:00:01.164] Jo Carlowe: Hello, welcome to the Papers Podcast series for the Association for Child and Adolescent Mental Health, or ACAMH for short.  I’m Jo Carlowe, a Freelance Journalist with a specialism in psychology.  In this series, we speak to authors of the papers published in one of ACAMH’s three journals.  These are the Journal of Child Psychology and Psychiatry, commonly known as JCPP, the Child and Adolescent Mental Health, known as CAMH, and JCPP Advances.

Today, I’m interviewing Senior Clinical Psychologist, Tom Cawthorne, of the National Conduct Adoption and Fostering Team at the Maudsley Hospital.  Tom and Roz are the lead authors of the paper, “Do Single-Case Experimental Designs Lead to Randomised Controlled Trials of Cognitive Behavioural Therapy Interventions for Adolescent Anxiety and Related Disorders Recommended in the National Institute for Health and Care Excellence Guidelines?  A systematic review,” recently published in JCPP Advances.  This paper will be the focus of today’s podcast.

If you’re a fan of our Papers Podcast series, please subscribe on your preferred streaming platform, let us know how we did, with a rating or review, and do share with friends and colleagues.

Tom and Roz, welcome, thank you for joining me.  Can you each start with an introduction about who you are and what you do?

[00:01:24.701] Dr. Tom Cawthorne: Yes, thank you very much for the introductions.  I’m Tom, as you said, I’m currently the Senior Clinical Psychologist in the National Conduct Adoption and Fostering Team at the Maudsley.  And before that, as part of my doctorate in clinical psychology, I completed my thesis on the development of CBT for chronic loneliness in young people, where Roz was my Lead Supervisor, along with Sophie Bennett and Anton Käll, as well.  And, as well as developing that intervention, we then evaluated it with a single-case experimental design, which is where my interest in this area came from, and then we also completed a systematic review, as part of that project.

[00:01:56.588] Jo Carlowe: Brilliant, thank you, and Roz?

[00:01:57.780] Professor Roz Shafran: Hi, I’m Roz Shafran.  I’m a Professor of Translational Psychology, as you described, at the UCL Great Ormond Street Institute of Child Health, and I’m an Honorary Consultant Clinical Psychologist at Great Ormond Street, as well.

[00:02:09.148] Jo Carlowe: Great, thank you both very much.  So, today, we are looking at your JCPP Advances paper.  For clarity, Tom, can you give us a quick description of how the single-case experimental design, SCED, approach works?

[00:02:23.901] Dr. Tom Cawthorne: Certainly, so maybe to start with, a little bit of background.  So, currently, most interventions are evaluated in RCTs.  As most of your listeners will know, RCT stands for randomised controlled trial, and in an RCT, people are randomised to one of two, kind of, intervention arms, and then we’ll measure, kind of, before and afterwards, to look to see whether there’s a difference between groups, or between, say, an intervention arm and treatment as usual.

However, the problem with the RCT approach is it’s very expensive, it’s very complex and, therefore, it takes a huge amount of money and, also, several years from conception for research evidence to then reach clinical practice.  Whereas, in comparison, the SCED is a much more straightforward, but also a very high quality approach.  So, a SCED, or a single-case experimental design, looks at participants and measures change over multiple phases.

So, there’s lots of different types of SCEDs, and I’d really recommend that people look at Kazdin’s book if they want more information, kind of, on this.  But, for example, one type of SCED, which is the one that we used in our study, for evaluating CBT for chronic loneliness, is a multiple baseline, randomised, single-case experimental design, where, within that design, participants are randomised to one of – we used, kind of, four baseline lengths, in which they would complete the primary outcome measure, kind of, each day over a period of time.

And there, for example, you would have everyone starting at the same time, and then maybe one person would complete it for a week, one for two weeks, one for three weeks, one for four weeks, for example, and then you’d measure change during that time.  Whereas, therefore, some participants would start the intervention after a week, whereas others would start it after four weeks.  So, therefore, you’d really be able to see at what point does change occur, does it change after the intervention is introduced?  Or is it that there’s something else that goes on during that period, that leads to change for participants?  And as you can see, in that way, that’d be a much higher quality approach than a case series, where, for example, we wouldn’t know whether it was really the introduction of the intervention that led to that change, or some other factor.

And then you’d have the intervention phase, during which people would then complete the primary outcome measure again, for example, at each, kind of, intervention session.  And then that would be followed by a post-intervention phase, so that would allow you to see whether that, kind of, improvement, or that change, is maintained during that period of time.  And, again, because participants would enter the intervention phase at different periods, you would be able to, kind of, compare between them, and see whether it is the introduction of the intervention that leads to things changing, or whether, again, there are other factors going on.
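The multiple-baseline allocation Tom describes can be sketched in a few lines. This is only an illustrative sketch, not code from the study: the participant labels, weekly baseline lengths and daily-measurement framing are assumptions for the example.

```python
import random

def assign_baselines(participants, baseline_weeks=(1, 2, 3, 4), seed=None):
    """Randomly allocate one baseline length (in weeks) to each participant.

    In a multiple-baseline SCED everyone starts daily measurement at the
    same time, but the intervention begins after a randomly allocated
    delay. Improvement that coincides with the start of treatment, rather
    than with the mere passage of time, can then be attributed to the
    intervention rather than to other factors.
    """
    rng = random.Random(seed)          # seeded for a reproducible allocation
    lengths = list(baseline_weeks)
    rng.shuffle(lengths)               # random order of the delays
    return dict(zip(participants, lengths))

# Four hypothetical participants, each given a distinct baseline length.
schedule = assign_baselines(["P1", "P2", "P3", "P4"], seed=42)
```

Because the delays are a shuffled copy of `baseline_weeks`, every participant receives a different, randomly ordered baseline length, which is the staggering that lets the design separate intervention effects from time effects.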

At the moment, we’ve got lots of different types of ways of evaluating interventions.  So we’ve got SCEDs, we’ve got case series, and we’ve got pilot studies, as well as RCTs, but this provides a really simple, high quality approach, that is based very much on, kind of, key, core scientific principles of manipulating things and, therefore, seeing how they change.

[00:05:11.788] Jo Carlowe: Thank you, that’s really helpful.  Can you give us an overview of the paper to set the scene?  So, what did you look at and why?

[00:05:20.261] Dr. Tom Cawthorne: Yeah, certainly.  So, we know from research that CBT is the strongest evidence-based intervention for anxiety, and what we really wanted to look at is whether the current interventions, that are named in the NICE guidelines, under the evidence section for each of the anxiety disorders, were preceded by SCEDs before the randomised controlled trials were conducted.  This was based on the, kind of, premise really that that would be a really sensible thing to do, because it would lead to more effective prioritisation of treatment funding, as well as provide a high quality, initial evidence base for different interventions before the RCTs had been completed.

So, we initially did a primary search of the literature, looking at all the single-case experimental designs that were CBT interventions for adolescent anxiety disorders.  And then we were looking at whether any of them resulted in subsequent RCTs, and, if so, whether those RCTs were named in the NICE guidelines.  And then we also did a backwards search, to make sure that we could catch all the papers that were out there, where we looked at the RCTs that were named in the NICE guidelines, and then we went backwards, and looked at whether any of them had a single-case experimental design before them.

Roz, would you like to add anything there?

[00:06:32.097] Professor Roz Shafran: Yeah, I think that’s a great description, and maybe the only thing to add is this construct of hierarchy of evidence.  So, the idea being that in research, different methodologies have different strength in terms of their robustness.  And when you’re thinking about a research design, then randomised controlled trial is very robust, but it has the disadvantages that Tom mentioned, in terms of the time and so on.

So, really, before you invest all the time and the money, you want to do a smaller scale study that is less robust in terms of its methodology and the conclusions, but is an important stepping stone, I think.  Because if you did, for example, a SCED, and really there was not much impact, you’d want to think about that before you went onto doing a randomised controlled trial.  You may want to think about a pilot study as an alternative design, or proof of concept, or the other ones that Tom mentioned.

And within that, sort of, hierarchy of evidence, they’re not mutually exclusive.  So, in some ways, you know, the clinical opinion is what comes first, clinical observation is what should start it all, and from that, you think, okay, this is what I think might be going on, how do I test that empirically?  And moving up the hierarchy, culminating in the strongest evidence, that suits the research question, ‘cause, of course, also, not all research questions can be answered by randomised controlled trials for ethical reasons, and so SCEDs would have a particular role to play there as well.

[00:07:57.108] Jo Carlowe: That’s really helpful, thank you.  You’ve given us a sense of how the review was conducted, but do you want to say a little more about the methodology used?

[00:08:06.341] Dr. Tom Cawthorne: Definitely.  So, I think, essentially, that was a really helpful summary by Roz, and I think we were really looking at then, is this happening now?  So, are people doing single-case experimental designs prior to RCTs named in the NICE guidelines?  But, also, whether this approach could be helpful, so even if this wasn’t happening at the moment, what do the single-case experimental designs look like that are in the literature and, therefore, would this be a helpful approach?

And, therefore, as well as doing the searches, we also used something called the RoBiNT Scale, which is a quality measure of risk of bias for SCEDs, and we included the results of this within our systematic review.

[00:08:41.868] Jo Carlowe: Going back to my earlier question with, sort of, setting the scene really, can I also ask about the why?

[00:08:47.861] Dr. Tom Cawthorne: So, in terms of why we looked at this specific population, so we do know that there is a strong evidence base for CBT interventions for anxiety disorders across childhood.  However, there is some evidence that this may be less of the case for adolescents.  There’s lots of research around this and I would really recommend that listeners look at Cathy Creswell’s recent review paper, as well as other review papers on this.

And some people have hypothesised that it could be around, you know, the specific changes that occur during adolescence, the lack of engagement, the high levels of co-occurring conditions, such as mood disorders, but therefore, this seems like a population where, whilst CBT interventions are really effective, they certainly could be more effective, and, therefore, it could be really helpful to look at whether SCEDs could be used as one way of improving the efficacy of interventions, by helping us develop new approaches, or adapt current approaches, and then evaluate them, within single-case experimental designs prior to RCTs.  So, that could really expedite the process of new interventions being out there.

[00:09:45.708] Jo Carlowe: So, your review reveals that single-case experimental designs were not followed by randomised controlled trials of cognitive behavioural therapy interventions named in the NICE guidelines for adolescent anxiety and related disorders.  Can you elaborate on this finding, why do you think this is?

[00:10:02.935] Dr. Tom Cawthorne: Well, I think that’s a really interesting question, as to why, and, actually, at this stage, it’s quite unclear exactly why that is, and I think we need to be quite cautious therefore in drawing any firm conclusions.  I think one of the reasons is that adolescents are actually a really under-researched population, in terms of interventions for anxiety disorders.  And currently, therefore, there aren’t that many RCTs out there for this population specifically, so I think that’s one of the things.

I think there are also some more practical challenges related to the design.  So, for example, with the single-case experimental design, we use a baseline period, which allows us to therefore compare across the different phases of the intervention and see whether it is effective.  But that can actually lead to some ethical implications within services, ‘cause essentially we’re looking at young people that do need support, and we’re saying, “Well, we need to wait for this baseline period, as part of the research study, before they can access it.”  And whilst, of course, that can be done, when we think about young people that are on waiting lists anyway, of course, it is slightly more challenging to roll out, and that could be one thing.

I think whilst there are statistical analyses we can use for the SCED, so, for example, the Tau-U approach, they’re much less well known, and therefore I think maybe people have less confidence in using SCED designs for that reason.  I think there’s a lot of variability at the moment in terms of how to do SCEDs, so the guidelines are slightly inconsistent.  So, for example, when we were trying to do our study, I spent a good day or two really trying to look into the power analysis research around SCEDs, to work out exactly how many assessment points we needed during the baseline period.  And it was really hard to get a clear answer on that, and I think often people do like certainty, and I think in more developed approaches, there’s a lot more certainty than with SCEDs.

But, actually, I think there’s a lot of work that can be done as well around raising awareness of the SCED approach.  Like, I think it is a really excellent design, you know, it’s really simple, but it’s also really high quality, as well.  However, I think if you asked most Researchers, or certainly most Clinicians, no-one’s really heard of it.  I also think it’s very hard to access training at the moment on a SCED.  You know, for example, if you Google, kind of, SCED training, sort of, things like that, nothing really comes up.  Whereas, of course, there are loads of different trainings you could access for RCTs, both online and at different universities, as well.

And I think a final challenge, that’s maybe less around issues with SCEDs, but also around current NICE guidelines, is that there actually aren’t NICE guidelines for several anxiety disorders in children and young people.  And there also aren’t specific NICE guidelines or recommendations around how we can approach adolescents, or adapt interventions for adolescents, despite that being something that’s clearly quite necessary, as well.

[00:12:36.240] Jo Carlowe: We’ll want to return to that in a moment, talking about research policy.  But I just want to pick out something else that I thought was interesting in the review, which is that your paper highlights the fact that while CBT is effective for 60% of adolescents with anxiety disorders, only 36% are in remission post-intervention.  What are the implications of this finding?

[00:12:58.701] Dr. Tom Cawthorne: Yeah, so I think, firstly, obviously, that wasn’t a finding of our review, that was from I think Cathy Creswell’s review, and I’d really recommend people to, kind of, read that for more information on it.  I think one of the challenges in this area is, of course, anxiety disorders are comprised of lots of different types of difficulty.  And, obviously, when we talk about anxiety disorders and adolescents, it’s almost oversimplifying it slightly, ‘cause, of course, there are lots of different anxiety disorders, there are lots of different groups of adolescents.  And, therefore, I think it’s quite hard to draw any conclusions firmly on why that is, and I’m wondering if you want to add something on that, Roz?

[00:13:30.817] Professor Roz Shafran: Thanks, Tom.  I think there’s also just varying definitions of what remission is, what effectiveness is, what improvement is, and inconsistencies across studies, that make it hard to draw any firm conclusions.

[00:13:42.160] Jo Carlowe: Roz and Tom, what other findings would you like to highlight from the review?

[00:13:47.541] Dr. Tom Cawthorne: I think one finding that we saw is actually that there’s a lot of evidence that SCEDs can be really helpful.  So, if we look at the SCEDs that we did find in our review, often they were done with groups of young people that either haven’t had RCTs, or in reality are not going to have RCTs, because they’re either quite specific samples with lots of different comorbidities, where you’re not going to find a big group of young people all with those exact same comorbidities.  Or, equally, with populations of young people, like hoarders, who are maybe less prevalent and, therefore, are less likely to have an RCT.

And, actually, what the SCED design gives us is a really high quality research evidence about what could be helpful for these groups of young people, which could really therefore help them to be able to access better treatment.  And, equally, because it’s a really rich design, so it’s not just giving us that data, it’s also combined with this qualitative, more, kind of, case study, case series, style information, I think it can be a really helpful teaching tool, as well.  So, Clinicians can read it and think quite practically about how they can work with these groups of young people, or adapt interventions, as well as it just giving a summary.

I think another thing is it provides a really good model really for improving practice-based evidence.  So, at the moment, within clinical services, you know, we’re doing ROMs, but, actually, it’s not practical to do RCTs.  You know, we’ve got routine outcome measurement, we’re looking at how things are before and after intervention; to take that jump from doing that to RCTs is not really going to happen, in the current economic climate, where resources are so stretched, and there’s not very many people working within services.

And, actually, single-case experimental designs can be run by one person, or maybe two people, if there’s someone else doing the assessment before and afterwards and, actually, you really don’t need that much resource.  And it means that, say, Clinicians are working within a service where they’ve developed a new intervention, and they’re wanting to evaluate it in a really high quality way, I think that this gives a really good, kind of, model for how that can be done.

[00:15:34.857] Professor Roz Shafran: I would just reinforce that and add to it, and it wouldn’t even be Clinicians necessarily having a new intervention, but applying an intervention in a new population, where, actually, it’s quite hard to get funding for a big randomised controlled trial, and maybe you don’t need to have a big randomised controlled trial, but you just want to know, actually, how does this apply?  Or, if we do an adaptation, how does it apply?

And I come across a lot of Clinicians who want to do research, but it’s not built into their job, it’s difficult to do it from the resources, from the time perspective, so having something that is publishable, in the way that single-case experimental designs are, that shares knowledge, and yet isn’t all-consuming, can be done within a routine clinical service, I think is a real asset and a real bonus, and worth thinking about.

But it’s one of these methodologies that seems to be slightly overlooked, and we’re not quite sure why it’s overlooked.  I’m not sure if it’s overlooked in the States to the same degree.  But certainly I think that there are a few advocates and proponents of the single-case experimental design, and I think we would just want to add our voice to that.  For Clinicians to really consider it when they’re considering doing research, as something that is practical and methodologically robust and appropriate for the stage and research question that they may be considering.

[00:16:51.788] Jo Carlowe: So, there clearly are implications for Researchers.  I mean, what should Researchers and research policymakers take from your review?

[00:17:01.680] Dr. Tom Cawthorne: I think one of the key findings from our review is, I think it shows that using single-case experimental designs prior to RCTs could be really helpful.  And I think, if we’re thinking about how we can better prioritise research funding, when RCTs cost, you know, a huge amount of money, how we can really expedite the prioritisation of this funding and identifying new interventions, or interventions for different groups of young people, I think what we’ve shown is that SCEDs could be used for that, and could be used really effectively to support with that.

And I think what we’ve shown, as well, is that SCEDs are a really valuable alternative to the pilot studies, case series and feasibility studies that are being done at the moment, and SCEDs are really, kind of, straightforward and just informed by basic scientific principles really.  I think we also have shown though that there needs to be much more awareness around SCEDs, and I think far more dissemination and far more teaching, both so that there are more SCEDs, but, equally, because we need there to be more higher quality SCEDs.  Because I think that was another one of the findings of our systematic review, is that many of the SCEDs at the moment aren’t particularly high quality and, therefore, we need better teaching and better dissemination opportunities to support with that.

[00:18:08.860] Jo Carlowe: What about the implications for CAMH professionals of your paper?

[00:18:13.341] Dr. Tom Cawthorne: You know, one of the things that the systematic review also found is that really SCEDs are the ideal model for practice-based evidence.  They are already being used within different clinical services to evaluate either novel interventions with different groups of young people, or, equally, to evaluate existing interventions with groups of young people that they’ve not been done with before.  And, as we said, they are a design that is really quite straightforward and could definitely be done within clinical services.

[00:18:39.320] Jo Carlowe: So, do you envisage it that if SCEDs are more often applied that future NICE guidelines will better represent the adolescent population?

[00:18:50.701] Dr. Tom Cawthorne: Definitely, because I think if we were able to use SCEDs more effectively, a) we’re then going to be identifying interventions where there’s preliminary evidence of efficacy before there’s an RCT.  And, actually, SCEDs provide really quite a high quality evaluation, so even then, when we don’t have the RCT, we can say, “Well, we think that this intervention will work really well for this group of young people,” which therefore will improve access and improve outcomes.

But, equally, it will then mean that we can more rapidly conduct RCTs, because we can go, “Well, we’ve got this intervention here, but we’ve already got really good evidence from the SCED that it’s really effective, let’s, therefore, prioritise this for funding.”  Rather than trying to share out this funding across lots of different interventions, some of which we may not actually have evidence of efficacy for.  And then that will speed up the research process and, therefore, lead to better NICE guidelines, and, equally, more specific NICE guidelines for adolescents, as well.

[00:19:44.097] Professor Roz Shafran: And maybe, just to add to that, that the things that are important when you’re thinking about conducting an RCT in future, you know, you would want to make sure an intervention is feasible.  You would want to make sure that it’s acceptable, all of those things are included in SCEDs, as well, as measures, so it isn’t that you forego some of those other things that are in other designs, they are included in SCEDs.

And I don’t know if you want to say anything about the statistics needed, Tom?  ‘Cause not all SCEDs have statistics at all, and I think, for many Clinicians, it’s the idea of statistics that puts them off doing research, and just being able to do a SCED and do visual analysis.

[00:20:18.781] Dr. Tom Cawthorne: I think that’s a really good point.  Because I think that not only can you do visual analysis, either instead of the statistical approach, or alongside it, but, actually, the statistics are really easy for SCEDs.  You know, you literally put in your numbers, you press a button, and then it tells you whether or not it meets, kind of, statistical significance at the p < .05 level.  Therefore, I think it’s really something that’s quite straightforward and could be done by many Clinicians in clinical practice, even if they don’t feel that confident in statistics.
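To give a sense of how little machinery the Tau-U family of statistics mentioned earlier actually needs, here is a minimal sketch of its core A-versus-B nonoverlap comparison. This is an assumption-laden illustration, not the study’s analysis: the full Tau-U adds a baseline-trend correction not shown here, and the example scores are invented.

```python
def tau_ab(baseline, intervention):
    """Basic A-vs-B Tau: compare every baseline score with every
    intervention score, then return (improvements - deteriorations)
    divided by the number of pairs, giving a value from -1 to 1.
    Lower scores are treated as improvement, as on an anxiety measure.
    """
    pos = neg = 0
    for a in baseline:
        for b in intervention:
            if b < a:
                pos += 1   # intervention score better (lower) than baseline
            elif b > a:
                neg += 1   # intervention score worse (higher) than baseline
    return (pos - neg) / (len(baseline) * len(intervention))

# Invented daily anxiety ratings: a stable baseline, then clear improvement.
print(tau_ab([8, 7, 8, 7], [4, 3, 3, 2]))  # prints 1.0 -- complete nonoverlap
```

A value of 1.0 means every intervention score improved on every baseline score, 0 means no systematic difference, and -1.0 means consistent deterioration, which is the kind of output that can be read off almost as easily as a visual-analysis plot.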

[00:20:46.668] Jo Carlowe: What recommendations emerged from your paper?

[00:20:49.741] Dr. Tom Cawthorne: There are three main areas of recommendation really.  I think, firstly, as we’ve said, that SCEDs provide this really amazing model for practice-based evidence.  And, secondly, related to this, we need to find more ways of disseminating knowledge around the SCED and improving the quality of SCEDs that are being conducted, for example, by making more training opportunities available for Clinicians, as well.

And then I think a third point is we need more specific NICE guidelines for children and young people and, specifically, we also need more consideration of how things can be adapted for adolescents.  Because I think, as anyone who’s met an adolescent knows, it’s not just a child that’s slightly bigger, they are very different, and their brains work in very different ways, and I think we need a lot more, kind of, clear recommendations around how Clinicians can, kind of, work with this in practice.

[00:21:33.744] Jo Carlowe: And a question for both of you, so are you planning any follow-up research, or is there anything else in the pipeline that you would like to share with us?

[00:21:41.581] Dr. Tom Cawthorne: So, in my current service where I work now, which is the National Conduct Adoption and Fostering Team, similarly, this is a group of young people, so care-experienced young people, for which there is a real lack of evidence-based interventions and a real lack of research about what kind of treatments actually work.  And so, within this role, we’re looking at how we can use the SCED design to evaluate existing evidence-based treatments, but applying it specifically to this group, to help generate, kind of, much needed evidence.

[00:22:09.144] Jo Carlowe: And Roz?

[00:22:09.617] Professor Roz Shafran: I would just say that I think that the work that Tom did, and we did, with developing the intervention for loneliness, CBT for loneliness, was a very good example I think of where we wouldn’t want to have done a randomised controlled trial.  There was an internet intervention that we were basing it on, in adults, in Sweden, and we wanted to, sort of, think about it from the young person’s perspective.

And then lots of different questions came up about the population, about comorbidities, and so on, and really, just reflecting on the experience of doing a SCED, it was such a useful exercise for treatment development, for understanding personalisation of interventions, before producing a manual that would go on to be piloted, either in an internal or external pilot, and a randomised controlled trial.

I think learning from the CBT for loneliness SCED really has been inspirational in thinking, this is such a strong experimental design, and it is an experimental design, but it’s got such clinical applicability.  And it’s such an efficient research design, as well, because it means that you can make some changes, that you can think about optionality and personalisation, modularity, transdiagnostic interventions, all of those sorts of things, within a single protocol.

[00:23:27.148] Jo Carlowe: Thank you.  So, finally, Roz and Tom, what are your take home messages for our listeners?

[00:23:32.960] Dr. Tom Cawthorne: Well, I suppose a really key take home message is really, before thinking about, you know, diving into the deep end of large-scale RCTs, just really think about and consider the power of a single-case experimental design.

[00:23:44.788] Jo Carlowe: Roz?

[00:23:45.297] Professor Roz Shafran: Uniform message, yeah, absolutely, from me, as well.

[00:23:48.588] Jo Carlowe: Brilliant, thank you both so much.  For more details on Tom Cawthorne and Professor Roz Shafran, please visit the ACAMH website, www.acamh.org, and Twitter @ACAMH.  ACAMH is spelt A-C-A-M-H, and don’t forget to follow us on your preferred streaming platform, let us know if you enjoy the podcast, with a rating or review, and do share with friends and colleagues.
