Psychological Science?
October 26, 2009 10:14 PM   Subscribe

"Research has shown that numerous psychological interventions are efficacious, effective, and cost-effective. However, these interventions are used infrequently with patients who would benefit from them, in part because clinical psychologists have not made a convincing case for the use of these interventions ... and because clinical psychologists do not themselves use these interventions even when given the opportunity to do so." In Psychological Science in the Public Interest, psychologists Timothy Baker, Richard McFall, and Varda Shoham argue that clinical psychology needs to embrace its status as a science in order to save itself as a profession. If that's too long, Walter Mischel -- yes, the marshmallow guy -- writes an accompanying editorial: "The disconnect between much of clinical practice and the advances in psychological science is an unconscionable embarrassment..."
posted by escabeche (16 comments total) 19 users marked this as a favorite

 
I've long believed that there should be psychological ambulances to rescue the chronically irrational in the middle of their tantrums and episodes.
posted by Brian B. at 10:59 PM on October 26, 2009


And if that's too long, there's this.
posted by evidenceofabsence at 11:08 PM on October 26, 2009


If people wanted outcome evaluations they wouldn't go to clinical psychologists or to church.
posted by srboisvert at 11:21 PM on October 26, 2009 [3 favorites]


Why psychotherapy sucks in '09.
posted by koeselitz at 11:51 PM on October 26, 2009


There is definitely a disconnect between some aspects of counseling psychology and psychological research, but most institutions that grant PhDs in clinical psychology use the 'Scientist-Practitioner Model,' in which clinical training is accompanied by heavy research requirements. Granted, there are plenty of counseling psychology programs (often offered through departments other than the psychology department), PsyD programs, and other programs that do not require students to conduct research, but clinical psychology PhDs are usually well versed in contemporary scientific findings. There are some programs out there that only pay lip service to the Scientist-Practitioner Model, and they are definitely part of the problem.

I say this as a research psychologist who, until recently, harbored a number of misconceptions about the rigor of clinical psych in general. However, as I've gotten to know a number of clinical PhD students, I've realized that, at least at my institution, they work hard to bridge the gap between research and practice.
posted by solipsophistocracy at 12:24 AM on October 27, 2009 [1 favorite]


I find it interesting how CBT is often portrayed as more empirically validated than other psychotherapy treatments. Interesting, because of the reactions of the non-CBT practitioners (integrative, gestalt, humanistic, etc.). They are put in the position of having to justify their own approaches when an alternative approach is getting much more empirical support. Publicly, they tend to say that different approaches work for different people (which is true), but I get the impression there's a lot of bad feeling towards CBT from the other traditions.

And I think it's good that these treatments are being scientifically tested, rather than just trumpeted based on case studies. However, there are a lot of challenges to be met when scientifically testing a psychotherapy treatment. For example, what counts as a placebo? And how do you measure success vs. failure?

Some argue that when you take all the evidence on the effectiveness of psychotherapy treatments together, you reach the Dodo Bird Verdict: all the different types of psychotherapy provide some benefit, but they are all equally effective. Which leads to the conclusion that (more or less) it is mostly the time spent with an interested person that helps, rather than the details of the therapy model used.

Bearing all that in mind, I see the linked article as a kind of land grab. I only skimmed it, but they seem to be saying, "Look. CBT people, we have all these studies we can point at, we need to be making our case more because at the moment all the money is getting spent on pharmaceutical treatments."

In summary: therapies should be tested scientifically, but so far the results are muddled. The linked paper could well just be CBT cheerleading, rather than news of some great advance. Overall, it's very hard to tell.
posted by memebake at 12:27 AM on October 27, 2009 [3 favorites]


Well, normal generic CBT isn't going to sell very well. However Merck-Pfizer-GSK CBT+ is sure to be a hit.
posted by sien at 12:41 AM on October 27, 2009 [2 favorites]


I've long believed that there should be psychological ambulances to rescue the chronically irrational in the middle of their tantrums and episodes.


Oh, but there is.

Enter the Whambulance, dude.
posted by fourcheesemac at 2:25 AM on October 27, 2009 [1 favorite]


Habit patterns are correlated with PSYCHOLOGY FOR LIVING.
posted by twoleftfeet at 2:39 AM on October 27, 2009 [2 favorites]


The need to make a science out of being human stems from the medical model of psychotherapy. Imagine if we insisted that marriages had to be scientifically validated before we allowed them to happen? A couple could be given a battery of tests and declared compatible or not. The tests could be refined by seeing how past scores on individual tests correlate with divorce rates.

Of course, a certain amount of rationality has to go into deciding whom to marry, but it's not the whole story, any more than the individual "techniques" used are the whole story for psychotherapy. CBT has some good techniques, but they're only techniques. And not the only ones.
posted by Obscure Reference at 4:32 AM on October 27, 2009


Not the marshmallow man I was thinking of.
posted by shothotbot at 5:53 AM on October 27, 2009


The need to make science out of being human stems from the medical model of psychotherapy.

If by 'medical model of psychotherapy', you mean the model that argues that psychotherapists will produce certain outcomes for their $x an hour, then I agree. As long as people are going to continue to claim that a certain practice, provided for pay, will make people happier/less dysfunctional, then it seems perfectly reasonable that people should attempt to rigorously evaluate those claims.

Otherwise, you might as well just talk to the barman or the priest.
posted by PeterMcDermott at 6:00 AM on October 27, 2009 [7 favorites]


If by 'medical model of psychotherapy', you mean the model that argues that psychotherapists will produce certain outcomes for their $x an hour, then I agree. As long as people are going to continue to claim that a certain practice, provided for pay, will make people happier/less dysfunctional, then it seems perfectly reasonable that people should attempt to rigorously evaluate those claims.

The medical model of psychotherapy is actually usually used to refer to the notion that specific interventions (as opposed to certain practices) will have specific effects, as medicine does. There is scant evidence for this. There is, however, a lot of evidence that some factors common to all good psychotherapies (a good empathetic relationship between the provider and the client, the expectation of positive change, and a plan for how to effect that change) work well to get people better. The problem with a medical model is that it can't see the forest (the common factors) for the specific trees, and so the research ends up getting skewed.

It isn't that, for instance, CBT does not work; it's that CBT works for reasons that exceed its being CBT. When the research is understood incorrectly, what happens is that perfectly good modalities get rejected based on specious claims of specific efficacy. In addition, and this is something hard to get across to general readers (although I know you will understand it, PMcD), the research ends up being written so narrowly in quest of the specific that the usefulness of "evidence-based treatments" is quite low. The studies are frequently done, for instance, using patients with a single diagnosis, which is fairly uncommon in actual practice. So the clinician is left in the position of seeing a patient who is both dysthymic and anxious, but the available research only validates CBT for use with one or the other, not both diagnoses together. It can't both be necessary to use empirically validated treatments only to cast that necessity aside when it's inconvenient.

This is certainly not a plea for less evaluation of outcomes. To my mind it's much better to devise studies that look at outcomes in a way that does not presuppose that the specific factors of any given modality are responsible for change. There has been quite a bit of meta-analysis work on this, including work by Bruce Wampold and Michael Lambert, that has found that not only is the effect size of psychotherapy quite high (~0.80), but that specific constituents of therapy are much much less important than the general common factors that all psychotherapies share. This is good news, and it seems to me a mistake to retreat from it back into specific modalities, about which people have strong opinions that exceed science.

(I do think that talking to one's priest in many cases is as efficacious as therapy, for some people. I think the barman might be a stretch.)
posted by OmieWise at 7:20 AM on October 27, 2009 [10 favorites]
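[For readers unfamiliar with effect sizes: the ~0.80 figure above is Cohen's d, the standardized difference between group means. A minimal sketch of the calculation, using invented scores rather than data from any of the cited meta-analyses:]

```python
# Rough illustration of Cohen's d, the effect-size measure behind the ~0.80
# figure. The outcome scores below are invented for the example only.
import statistics

treated = [19, 24, 29, 24, 19, 29, 24, 24]  # hypothetical therapy group
control = [16, 21, 26, 21, 16, 26, 21, 21]  # hypothetical control group

mean_t, mean_c = statistics.mean(treated), statistics.mean(control)

# Pooled standard deviation (equal group sizes, sample variances)
sd_pooled = ((statistics.variance(treated) + statistics.variance(control)) / 2) ** 0.5

d = (mean_t - mean_c) / sd_pooled
print(round(d, 2))  # → 0.79: the treated mean sits about 0.8 SDs above the control mean
```

[A d of about 0.8 is conventionally read as a "large" effect, which is why the meta-analytic result is notable.]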


Guess it depends on the barman
posted by jtron at 10:02 AM on October 27, 2009


I think there are multiple reasons why the practice of psychology as a science has been held back.

One reason is undoubtedly public perception of the field of psychology in general and psychotherapy in particular. Your average individual, even a well-educated person, has no idea of the difference between a psychiatrist and a psychologist, much less between a clinical psychologist and someone with a Master's-level degree in counseling, a Licensed Marriage and Family Therapist, a Licensed Master in Social Work, etc.

I have been trying to help my brother get into some therapy, and my (semi-)professional opinion is that he needs some version of CBT to address some of his problems. Trying to find A) a clinical psychologist and B) someone who practices CBT or some version of CBT instead of some "holistic" other Meaningless Adjective Therapy has been nearly impossible. Finding a well-trained therapist is practically a crapshoot, and it's something that most people are completely unprepared to do. As a result, everyone who practices therapy is put on the same level, and someone with a two-year Master's degree and a state license is seen as exactly the same as someone with a seven-year PhD in Clinical Psych.

One problem with the research/practitioner model is that it draws psychologists into academia, where research is rewarded more than practice. I think there are plenty of people who want to study in both research and practice, but the rewards aren't worth the work. Master's programs are far more popular because they are cheaper and quicker and allow one to theoretically do most of the same things as a PhD clinical psychologist. So only those people interested in working in academia go into clinical psych. And that's why the field is dying, IMO.

I am not a psychologist. I have just spent many years deciding not to pursue graduate level psych education.
posted by threeturtles at 11:22 AM on October 27, 2009


I think there are multiple reasons why the practice of psychology as a science has been held back.

Even at the purely scientific level, psychological research can be held back by the influence of those who aren't really up to date with the literature.

Right now, I'm working on an NSF GRFP application. I spent dozens of hours trying to craft a tight research proposal, one that is informed by contemporary findings and seeks to build upon recent studies that have had a significant impact on my particular subfield.

I sent my proposal out to some leaders in the field for advice, and they suggested that I might actually want to tone down some of the more technical aspects. This is because the people who will be evaluating my application are likely to be unfamiliar with the most recent findings, and I will have a stronger application if I target it to a more general audience.

The 'general audience' in question is a panel of supposed experts at the NSF. It's sort of a kick in the pants to have to gloss over much of the hard work I put into linking a number of recent studies, but I realize that in order to do the kind of science I want to, I'll have to make as convincing an argument as possible to those who hold the purse strings.

I suppose that it's just the nature of science that the people who allocate research funds are necessarily less intimately involved in the research than the investigators themselves are, and there ain't no sense in pining for some sort of scientific utopia in which everyone is equally informed about all facets of a discipline.

Plus, it's not like this is a high-profile R01 grant or something like that. I believe (at least I hope) that when you start talking about serious funding, the people who make the decisions are more closely tied to the work in question.
posted by solipsophistocracy at 11:59 AM on October 27, 2009




This thread has been archived and is closed to new comments