Monday, September 24, 2007

Week 5: The Relationship

I was disappointed by the two articles we read this week. I didn't find them nearly as interesting as the past few weeks' readings. I usually LOVE to talk about relationships. I guess the therapeutic one just doesn't excite me the way gossiping about my friends' relationships excites me.

This brings me to the first thing I want to talk about: friendship. The Kirschenbaum and Jourdan (2005) article repeatedly pointed out that "much of the latest research on psychotherapy outcomes has demonstrated that, rather than particular approaches, it is certain 'common factors' in the therapy relationship that account for therapeutic change" (p. 44). These common factors include things such as "warmth, respect, empathy, acceptance and genuineness, positive relationship, and trust" (p. 44). Correct me if I'm wrong, but aren't these characteristics the same things you would expect out of a close friend? I know they are things I expect from my closest friends. So if it is true that different psychological approaches (such as CBT, IPT, etc...) don't matter nearly as much as these "common factors," why go to a therapist? Just go talk to your best friend for a couple of hours. At least you won't have to pay him/her.

Ok, I obviously don't believe that friends should replace therapists. Yet, I wonder if it is possible to teach somebody to have all of these "common factors". Is it possible to teach warmth? Or is it just a trait people either have or don't have? Have there been studies on this? I assume there must be...I just don't know of them.

I'm very surprised that this article repeatedly emphasized that psychology's theoretical schools "seem no better than one another" when common factors are considered, as opposed to stressing the importance of finding out how these common factors can interact with different treatments. I think that would be much more interesting (& important to our field) than what we read in this article.

Research has shown that CBT is better than other treatments for certain disorders (i.e., specific phobias). If Kirschenbaum and Jourdan are right, this would imply that CBT specialists just happen to be better at creating a therapeutic relationship than other types of therapists. I don't buy this. Why would CBT therapists be better at this than other therapists?

Alright...that is enough complaining out of me for this week. See you all on Wednesday.

Oh, on a side note, one of my favorite WashU professors (Richard Kurtz) was cited in this article (on p. 47)! If anybody is at all interested in research on hypnosis, I'd really recommend checking out his work. He does some pretty interesting stuff. And he is one of the most interesting people I've ever met.

Monday, September 17, 2007

Week 4: EST 2 Reactions

I read the Meehl article and promptly decided to add Meehl to my list of heroes. This is a big deal considering the fact that most of my heroes come from television. So Meehl is the first psychologist to officially be in Shari's list of heroes. I would love to present this article in class tomorrow, but watch...tomorrow will be the one day I'm not actually picked to present...Ok, onto a more mature reaction....

I absolutely loved that Meehl opened his article by comparing clinical experience to the "science" of diagnosing witches. It completely set the caustic tone for the rest of the piece. I believe a caustic, sarcastic tone is one of the best ways one can discuss the lack of science in the so-called science our class is entering. The sarcasm highlights how laughable the state of the field is and it emphasizes that changes need to be made if we (clinical psychologists) want to be taken seriously.

Meehl mentioned his "clinical laziness" when discussing his observations of males dreaming of fire (p. 93). From what I have read and heard about thus far, this laziness seems much too common. Imagine how much more SCIENCE (such as controlled experiments, hypothesis testing, statistical modeling) could be done if clinicians took more time to lay the groundwork by systematically organizing their observations and theories and finding a way to get them to the public! This way, those of us who work in labs wouldn't have to guess what was going on behind the closed doors of clinics and private practices. If there were more open communication between the psychologists who run experiments and the psychologists who "used" the results from these experiments, I think the field could advance exponentially faster. (I put "used" in quotes because it seems like some psychologists ignore experimental results to continue using what they know based on "experience". And I put "experience" in quotes because Meehl's article pointed out that this experience typically is garbage. Or at least "unavoidably a mixture of truths, half-truths, and falsehoods". And I put THAT in quotes because I took it word-for-word from the abstract on p. 91.)

I believe there is no excuse for using therapeutic methods that have no empirical support UNLESS these methods are being used to gain insight or make observations about a method that will be experimentally tested in the very near future. Psychologists are paid a LOT of money to help people, and these people put their psychological well-being in the hands of these "professionals". If they aren't using methods that are supported by cold, hard data, they don't deserve to be practicing. Perhaps I'll change my mind once I begin practicing myself...but I hope not. Because then I would be no better than the people I am currently badmouthing.

Wow, so I guess this was not my most mature writing ever. But it is definitely something I am passionate about and I am really looking forward to discussing this piece in class on Wednesday.

Monday, September 10, 2007

Week 3: EST Reactions

I think the article by Chambless and Hollon is an extremely important contribution to the EST literature. I thought the authors did a wonderful job specifically explaining what needs to be accomplished for a treatment to be seen as efficacious. However, one aspect of their specifications seems to require more detailed explanation. I am very interested to know more about why the authors decided that only two studies showing significant results are required for a treatment to be seen as empirically supported. It is obvious why one study alone should not be viewed as enough empirical evidence: one study’s results could have been the product of a certain setting or therapist, they could result from experimenter bias, or they could just be a random fluke. I understand that these studies are expensive and time-consuming, but I still believe more than two studies should be done before a treatment is viewed as an EST. Even with all the other specifications, such as reanalyzing data, judging study design, and so forth, two seems like an arbitrarily picked (and very leniently low) number. Perhaps Chambless and Hollon could have elaborated on why it was determined that two, rather than three or four, was the cut-off number. I also think it is extremely lenient for a treatment to be labeled possibly efficacious on the basis of one study alone (or research conducted by only one team) finding the treatment to be successful. Maybe I am just too strict, but if I am going to use a treatment as a clinician (or receive it as a client) I want to be pretty sure that the treatment will work, and one or two studies backing it up is not going to cut it for me.

I am also very interested in the debate about whether therapist experience is important with regards to treatment outcome. I found the ways in which Chambless and Hollon refuted this suggestion to be fascinating: they immediately noted that they expect that training matters in specific interventions (in other words, it matters in the ones that have not been tested yet). They back this up with much less empirical evidence than I would expect, which is odd considering the whole article deals with providing empirical evidence for treatments. They are also quick to attack research that has shown experience to be unrelated to treatment outcome (i.e., Christensen & Jacobson, 1994; Strupp & Hadley, 1979). I worry that this section of the article may be more related to the authors' bias (or perhaps denial) than to empirical support.

Monday, September 3, 2007

Week 2: DSM Reactions

I want to discuss two issues in this blog. The first is whether researchers should focus on psychological phenomena or psychiatric diagnoses and the second is whether clinicians should use categorical or dimensional diagnoses. For both issues, I believe BOTH aspects are equally important and BOTH need to be researched/used.

When few empirical studies have been done on a specific psychological phenomenon, I agree with Persons’ (1986) view that researchers should attempt to focus their research on the phenomenon rather than psychiatric diagnoses. However, I believe that this should only be the first step in a two-step process. Once a solid foundation of empirical studies has accumulated on the phenomenon and multiple theories have been formulated and tested, the logical next step would be to study the resulting theories WITHIN specific psychiatric disorders. As Persons noted, it is much easier to develop theories about the psychological processes behind specific symptoms than to develop theories explaining the psychological processes that lead to psychiatric disorders. As a result, it makes sense to study the symptom specifically as a first step. However, is it valid to generalize theories focusing on only one symptom to people with diagnosed disorders? For instance, if a researcher is looking at loosening of associations, the researcher may have subjects who would fall in many different diagnostic categories. Although the researcher will be more likely to learn about possible etiological aspects of the loosening of associations, there will be no evidence to show that what is found will generalize to subjects who suffer from loosening of associations AND have been diagnosed with schizophrenia. It is possible that the loosening of associations common to schizophrenics might actually be very different from the loosening of associations common to other disorders. It is essential for researchers to look at the symptoms within specific diagnostic categories once theories have been tested on the symptoms alone.

Although I planned to only discuss the Persons' article, I decided I wanted to mention a quick thought on categorical vs. dimensional diagnoses. Both Widiger and Clark's (2000) and Allen's (1998) articles proposed using dimensional diagnoses instead of categorical diagnoses. If this proposal were taken seriously, I think it would be a ridiculous loss (or perhaps waste) of solid past research on categorical diagnoses. I think the idea of using dimensional diagnoses has a lot to offer (it lacks arbitrary cutoffs and it has the possibility of giving more information than categorical diagnoses), but I do not think it should completely replace the use of categorical diagnoses. In my opinion, psychologists should use both dimensional and categorical diagnoses. Otto Kernberg proposed a dimensional model in which people are rated on a scale of Range I to Range V (I is normal, II is neurotic, III is upper-level borderline, IV is lower-level borderline, and V is psychotic). This scale takes a more universal approach to diagnosis (similar to Axis V of the multiaxial diagnoses) and it provides useful information about people that cannot be obtained from a categorical diagnosis alone. A Range IV anorexic patient is qualitatively different from a Range V anorexic patient: a Range V patient would have poorer reality testing and worse social reality testing, among other issues. However, if only the categorical diagnosis were used, the two anorexics would seem deceptively similar. I believe it is important to integrate new methods of diagnosis into current methods, rather than simply choosing one or the other.