The Politics of Psychotherapy Research: Survival of the Measurable


How do you deliver ‘placebo therapy’ to a client? By being deliberately hostile? Quantitative scientific methods may not be the best way to measure psychotherapy outcomes. But those who have the power and the money are increasingly using research results to legitimise a few techniques which are easy to measure, to the detriment of clients and therapists.

I recently attended “Integration in Psychotherapy: How to Make It Work?”, a conference held by the Polish Association For Psychotherapy Integration in Warsaw. My mind is jumping with all kinds of questions and thoughts. One of the liveliest concerns psychotherapy research, about which I learned more from two leaders in the field, Michael Lambert and John Norcross.

Research seems to have become integral to the field of psychotherapy on both philosophical and practical levels. The practical level concerns not only the pragmatic and essential question “what works the best for the client?” but also the question of who gets jobs, funding, or their work accepted as valid by insurance companies. In a sense, behind and beyond the arguments about integrating approaches and practices, the field is fighting for professional status, or even survival. The scent of excitement — but also the scent of fear — is in the air.

Psychotherapy research can be characterised largely as effectiveness research: it aims to isolate particular factors that lead to ‘success’ in therapy. This obviously limits studies to factors which can be clearly defined, isolated and measured. Where psychotherapy is concerned, there are problems already at the level of definition. At the conference, Paul Wachtel (an eminent psychoanalyst who integrates cognitive behavioural therapy into his work) mentioned a piece of research he and Schimek carried out (1970) which involved finding an operational definition of anger. It proved impossible to specify criteria rigorously enough for a definition to stand up, yet everyone involved in the experiment did in fact broadly agree on when anger was present: there was a high degree of inter-rater reliability, which led to statistically significant correlations. This is surely an argument for qualitative research, research “from the inside out”, of which there is no lack in the psychotherapeutic community.
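
The idea of inter-rater reliability can be made concrete with a little arithmetic. Below is a minimal sketch in Python of Cohen's kappa, one common way of quantifying how far two observers agree beyond chance; the ratings are invented for illustration and are not data from the Wachtel and Schimek study.

```python
# Minimal sketch: Cohen's kappa for two raters judging whether anger is
# present. The ratings below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random.
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical judgements of whether anger was present in ten excerpts.
rater_a = ["anger", "anger", "none", "anger", "none",
           "none", "anger", "anger", "none", "anger"]
rater_b = ["anger", "anger", "none", "anger", "anger",
           "none", "anger", "anger", "none", "anger"]

print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # prints about 0.78
```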

However, the studies which ‘matter’ as far as funding and prestige are concerned, those which are taken into account in influential meta-analyses of psychotherapy research, are of a quantitative nature, with the RCT, or randomised controlled trial, held up as the gold standard.

RCTs certainly seem an effective way of measuring the efficacy of drugs. They involve giving one group of people the active drug and another group a placebo. In a double-blind randomised trial, neither the participants nor the doctors administering the drugs know who is receiving the placebo. However, as Wachtel also pointed out, how could this method possibly apply to psychotherapy? How do you deliver a “placebo” therapy? By being deliberately unhelpful to the client? And how could a therapist not know whether or not they are delivering the real thing? Obviously the ethical ramifications are significant, and no one actually does this. So the studies are compromised from the start. In order to make any of the variables measurable, in fact, you have to put a lid on the therapist’s ability to respond to the client, and the client has to present one particular, discrete problem. If the therapy is for generalised anxiety, woe betide the validity of the research should a past trauma emerge. What has this to do with the real world? Why are psychotherapists pretending that they are part of the medical model? (I wonder whether the medical model is really the only way to engage with our health, too, but I’ll leave that to medical doctors to argue.)
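
For contrast, here is a minimal sketch in Python of the drug-trial logic an RCT assumes: random allocation to an active arm and a placebo arm, followed by a test of the difference in outcomes between the groups. All the numbers are invented; the point is simply that the design depends on a credible placebo arm, which is exactly what therapy cannot honestly provide.

```python
# Minimal sketch of drug-trial RCT logic, with wholly invented numbers.
import random
from statistics import mean
from scipy.stats import ttest_ind  # two-sample t-test

random.seed(0)

drug, placebo = [], []
for _ in range(100):
    arm = random.choice(["drug", "placebo"])            # random allocation
    # Assumed (invented) improvement scores for each arm.
    score = random.gauss(6.0 if arm == "drug" else 4.0, 3.0)
    (drug if arm == "drug" else placebo).append(score)

print(f"drug:    n={len(drug)}, mean improvement={mean(drug):.1f}")
print(f"placebo: n={len(placebo)}, mean improvement={mean(placebo):.1f}")

t_stat, p_value = ttest_ind(drug, placebo)
print(f"two-sample t-test: t={t_stat:.2f}, p={p_value:.4f}")
```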

The answer is probably something to do with politics, something embedded in culture and fundamentally about economic survival. Comparative research, flawed as it may be, points to the remarkably consistent conclusion that the effectiveness of therapy is independent of any particular school of therapy or technique (with the exception of panic disorders, which do show a specific, measurable response to exposure techniques). In fact, success has more to do with the client and “other factors” than with any particular therapist technique. (More on this in an upcoming post.) Yet the emphasis put by those with the power and the cash — say, the British Government — on Evidence Based Practice leads some schools of therapy to gain the respect of the establishment as Evidence Based, and the others to be seen as Not Evidence Based — which sounds a bit, to the untrained ear, like Therapy That Doesn’t Work.

This happens because schools such as Cognitive Behavioural Therapy are more explicitly designed than, say, psychoanalysis, to separate out variables — to deal with specifically defined therapist methods and single client problems. Progress is easier to quantify, so more quantitative research is done, so more is funded, and the circle is complete. Some schools are legit and some are not. A lot of people rush to train in a modality which may not fit their skills as practitioners, not out of a ‘desire to integrate’ but out of fear of losing their livelihood. They do this not because of what they know from their own experience to work (and I would guess that there is ‘high inter-rater reliability’ here), but because of how it looks from the outside, and how people believe it to be.

This looks very much like the heart of the problems so many people bring to therapy: not doing what they know inside is right for them, but doing what they think they should be doing because of how it looks to everyone else and what might happen if they do differently (the fear is real: you might be excluded, or made to disappear), or suffering the painful effects of doing what they must do in order to survive.

I am not arguing that “research is bad” or irrelevant to the deeply complex world of psychotherapy. The deep complexities are part of everyday life, for all of us. What psychotherapists need, for themselves, and for their clients, is the confidence to stake out their own positions, which reflect this complexity, by measuring experience on its own terms.

References

Wachtel, P.L. and Schimek, J.G. (1970) ‘An exploratory study of the effects of emotionally toned incidental stimuli’, Journal of Personality 38(4): 467-81.
