Monday, November 26, 2007

Everything I need to know, I learned in Intro to Psych

Ok, the heading is a bit of an exaggeration. I didn't learn everything I need to know in my freshman-year Intro to Psychology course. However, I learned many important things in that course that many M.S.-, Ph.D.-, and Psy.D.-wielding psychologists seem to have forgotten.

1) To be able to make inferences about cause and effect, you need to perform a controlled experiment with a representative sample of the population.
In their article, Dawes, Faust, and Meehl (1989) discuss the superiority of the actuarial method of judgment. Apparently, clinicians who use the clinical method as opposed to the actuarial method believe that their experience gives them special intuitive powers that allow them to predict outcomes for their patients. However, clinicians base their predictions on the "skewed sample of humanity" that they see, which is not a "truly representative sample" (p. 1671). As a result, "it may be difficult, if not impossible, to determine relations among variables" (p. 1671). To make accurate predictions, clinicians need to look at both people with a disorder AND people without the disorder. If clinicians want to make better predictions without relying on actuarial methods, they should perform studies (or refer to published studies) that use a truly representative sample of the population. A toy example of why the comparison group matters is sketched just below.
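Here is a rough sketch of that point in Python. The numbers are made up (they are not from Dawes et al.); the idea is just that a "sign" can look diagnostic in a clinic sample while carrying no information at all in a representative one.

```python
# Made-up numbers: in the full population, a clinical "sign" appears at the
# same rate whether or not the disorder is present, so it predicts nothing.
population = {
    ("disorder", "sign present"): 80,
    ("disorder", "sign absent"): 20,
    ("no disorder", "sign present"): 800,
    ("no disorder", "sign absent"): 200,
}

def sign_rate(group):
    present = population[(group, "sign present")]
    absent = population[(group, "sign absent")]
    return present / (present + absent)

print(sign_rate("disorder"))     # 0.8
print(sign_rate("no disorder"))  # 0.8 -> same rate, so no relation

# A clinician who sees only the 100 people with the disorder observes the
# sign in 80% of them and may conclude it is diagnostic -- but without the
# comparison group, that inference simply cannot be made.
```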

2) Self-fulfilling Prophecies
As I learned in my first psychology course, a self-fulfilling prophecy occurs when someone (Person A) expects someone else (Person B) to act in a certain way. That expectation leads to subtle changes in Person A's behavior, which elicit the expected behavior in Person B. In other words, because Person A expects Person B to act in a certain way, Person B acts in that way!
In some cases, it seems that clinical judgments result in self-fulfilling prophecies (Dawes et al., 1989). When that happens, clinical judgments cause outcomes instead of predicting them. This can be a huge problem, especially when the predicted outcome carries negative repercussions, such as predicting that a person will act violently.

3) Hindsight Bias
Hindsight bias refers to the fact that outcomes seem more predictable (and obvious) once they are known than when they are being predicted. It also means that past predictions are often remembered as being in line with the actual outcomes, regardless of what the person originally predicted. Clinicians often recall their original judgments as being in line with outcomes (Dawes et al., 1989). As a result, their original predictions lose their value.

4) Confirmation Bias
Confirmation bias refers to the human tendency to look for evidence that supports one's hypotheses and to reject information that is not in line with them. Clinical judgments often fall prey to confirmation bias, so clinicians often believe they are correct in their predictions when they are not (Dawes et al., 1989)! This results in overconfidence in clinical judgment. Dawes et al. point out a study "demonstrating the upper range of misappraisal, [in which] most clinicians were quite confident in their diagnosis although not one was correct" (p. 1672)!
Confirmation biases not only lead to incorrect predictions and unmerited confidence in clinicians; they also lead to the perseverance of potentially harmful treatments (PHTs). In his 2007 article about PHTs, Lilienfeld states that "persistent beliefs concerning the efficacy of PHTs may, in turn, be mediated by attributions regarding the causes of client deterioration" (p. 64). As a result, therapies that can result in symptom worsening and sometimes even death continue to be available to an uninformed public.
Garb (1999) pointed out 8 years ago that clinicians should stop using the Rorschach inkblot test unless solid evidence can be found that it has good reliability, validity, and utility. Otherwise, it is likely to result in a lot of wasted time on the part of clinicians and money on the part of clients, and it is likely to lead to misdiagnoses. Garb mentioned that "in many instances, the addition of the Rorschach led to a decrease in validity" (p. 315)! Confirmation bias probably plays a huge role in the continuing use of such a useless tool: clinicians believe that the Rorschach works, so they point to evidence that supports their claims and disregard evidence that shows the contrary.

I could keep going...the articles we read this week also showed evidence of the fundamental attribution error, such as when psychologists assume client deterioration is due to specific characteristics of an individual rather than to the fact that a treatment is not working (Lilienfeld, 2007). They show that many clinicians do not understand base rates (Dawes et al., 1989). They also show that psychologists use tests, treatments, and predictive methods that lack empirical support (Garb, 1999; Dawes et al., 1989; Lilienfeld, 2007). The base-rate problem is easy to see with a quick calculation, sketched below.
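To make the base-rate point concrete, here is a worked example with hypothetical numbers (mine, not the authors'): even a fairly accurate prediction method produces mostly false alarms when the behavior it predicts is rare.

```python
# Hypothetical numbers: a method that correctly flags 90% of people who will
# act violently, and wrongly flags only 10% of those who won't, still yields
# mostly false alarms when only 2% of people will actually act violently.
base_rate = 0.02         # P(violent)
sensitivity = 0.90       # P(flagged | violent)
false_alarm_rate = 0.10  # P(flagged | not violent)

p_flagged = sensitivity * base_rate + false_alarm_rate * (1 - base_rate)
p_violent_given_flag = sensitivity * base_rate / p_flagged  # Bayes' theorem

print(round(p_violent_given_flag, 2))  # ~0.16: most flagged people are not violent
```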

Perhaps all practicing clinicians should be required to sit in on an Introduction to Psychology course every couple of years or so. It seems like they have forgotten a bunch of key principles that many freshmen could recite by heart. If clinicians remembered these principles, a lot of undue suffering could probably be avoided.

4 comments:

Anonymous said...

Spot on critical analysis as usual Steinman. Good work, always a great read.
-J

Joanna said...

I like your title!

mlerner said...

Dang! You're on fire!

jcoan said...

Excellent, Shari. You've reminded me again that we all need to be well trained in basic research and critical thinking. Great post!