Monday, September 10, 2007

Week 3: EST Reactions

I think the article by Chambless and Hollon is an extremely important contribution to the EST literature. The authors did a wonderful job of spelling out exactly what must be accomplished for a treatment to be considered efficacious. However, one aspect of their criteria seems to require more detailed explanation. I am very interested to know why the authors decided that only two studies showing significant results are required for a treatment to be considered empirically supported. It is obvious why one study alone should not be viewed as sufficient empirical evidence: one study's results could be the product of a particular setting or therapist, they could result from experimenter bias, or they could simply be a random fluke. I understand that these studies are expensive and time-consuming, but I still believe more than two studies should be conducted before a treatment is viewed as an EST. Even with all the other criteria, such as reanalyzing data, judging study design, and so forth, two strikes me as an arbitrarily chosen (and very lenient) number. Perhaps Chambless and Hollon could have elaborated on why two, rather than three or four, was set as the cutoff. I also think it is extremely lenient for a treatment to be labeled possibly efficacious on the basis of a single study (or research conducted by only one team) finding the treatment successful. Maybe I am just too strict, but if I am going to use a treatment as a clinician (or receive it as a client), I want to be quite sure that the treatment will work, and one or two supporting studies is not going to cut it for me.

I am also very interested in the debate about whether therapist experience matters for treatment outcome. I found the way Chambless and Hollon addressed this suggestion fascinating: they immediately noted that they expect training to matter for specific interventions (i.e., the ones that have not yet been tested). They back this up with much less empirical evidence than I would expect, which is odd considering the whole article is about providing empirical evidence for treatments. They are also quick to attack research that has shown experience to be unrelated to treatment outcome (e.g., Christensen & Jacobson, 1994; Strupp & Hadley, 1979). I worry that this section of the article may reflect the authors' own bias (or perhaps denial) rather than empirical support.

2 comments:

jcoan said...

I think your point about experience is a good one. In fact, one of my friends (Patrick McKnight) addressed this issue in a review paper. He found that the critical factor determining the effect of experience was, in fact, the degree to which one was a "generalist" versus a "specialist." Specialists appear to benefit from experience in ways that generalists do not. (By generalist, I pretty much mean folks who take on more or less anything that walks through the door.)

Anonymous said...

I agree that more than 2 studies should probably be required; 2 seems too few. Your points are well articulated.
-J