Ok, the heading is a bit of an exaggeration. I didn't learn everything I need to know in my freshman-year Intro to Psychology course. However, I learned many important things in that course that many M.S.-, Ph.D.-, and Psy.D.-wielding psychologists seem to have forgotten.
1) To be able to make inferences about cause and effect, you need to perform a controlled experiment with a representative sample of the population.
In their article, Dawes, Faust, and Meehl (1989) discuss the superiority of the actuarial method of judgment. Apparently, clinicians who use the clinical method as opposed to the actuarial method believe that their experience gives them special intuitive powers that allow them to predict their patients' outcomes. However, clinicians base their predictions on the "skewed sample of humanity" that they see, which is not a "truly representative sample" (p. 1671). As a result, "it may be difficult, if not impossible, to determine relations among variables" (p. 1671). To make accurate predictions, clinicians need to look at both people with a disorder AND people without the disorder. If clinicians want to make better predictions without relying on actuarial methods, they should perform studies (or refer to published studies) that examine a truly representative sample of the population.
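Just to convince myself of this point, I sketched a toy example (the numbers are completely made up by me, not from the article) of why seeing only the "disorder" column of a 2x2 table tells you nothing about relations among variables:

```python
# Toy numbers, invented by me (not from Dawes, Faust, & Meehl): a full 2x2
# table of a hypothetical risk factor vs. a disorder.
counts = {
    ("risk", "disorder"): 30, ("risk", "healthy"): 70,
    ("no risk", "disorder"): 10, ("no risk", "healthy"): 90,
}

# With BOTH columns we can compare rates and see a real relation:
p_d_given_risk = counts[("risk", "disorder")] / (30 + 70)        # 0.30
p_d_given_norisk = counts[("no risk", "disorder")] / (10 + 90)   # 0.10
print(p_d_given_risk / p_d_given_norisk)  # 3.0 -- the risk factor triples the rate

# A clinician's "skewed sample" is only the disorder column (30 vs. 10).
# Those two numbers alone are equally consistent with NO relation at all --
# e.g., if 75% of healthy people also carried the risk factor -- so nothing
# about the risk factor's effect can be inferred from patients alone.
```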
2) Self-fulfilling Prophecies
As I learned in my first psychology course, a self-fulfilling prophecy occurs when someone (Person A) expects someone else (Person B) to act in a certain way. These expectations lead to subtle changes in Person A's behavior, which elicit the expected behavior in Person B. In other words, because Person A expects Person B to act in a certain way, Person B acts in that way!
In some cases, it seems that clinical judgments result in self-fulfilling prophecies (Dawes et al., 1989). When that happens, clinical judgments cause outcomes instead of predicting them. This can be a huge problem, especially when the outcomes carry negative repercussions, such as predicting that a person will act violently.
3) Hindsight Bias
Hindsight bias refers to the fact that outcomes seem more predictable (and obvious) when they are known vs. when they are being predicted, and that past predictions are often remembered as being in line with the actual outcomes, regardless of what the person originally predicted. Clinicians often recall their original judgments as being in line with outcomes (Dawes et al., 1989). As a result, their original predictions lose their value.
4) Confirmation Bias
Confirmation bias refers to the human tendency to look for evidence that supports one's hypotheses and to reject information that is not in line with them. Clinical judgments often fall prey to confirmation bias, so clinicians often believe they are correct in their predictions when they are not (Dawes et al., 1989)! This results in overconfidence in clinical judgment. Dawes et al. point out a study "demonstrating the upper range of misappraisal, [in which] most clinicians were quite confident in their diagnosis although not one was correct" (p. 1672)!
Confirmation bias not only leads to incorrect predictions and unmerited confidence in clinicians; it also leads to the perseverance of potentially harmful treatments (PHTs). In his 2007 article about PHTs, Lilienfeld states that "persistent beliefs concerning the efficacy of PHTs may, in turn, be mediated by attributions regarding the causes of client deterioration" (p. 64). As a result, therapies that can worsen symptoms and sometimes even cause death continue to be available to an uninformed public.
Garb (1999) pointed out 8 years ago that clinicians should stop using the Rorschach inkblot test unless solid evidence can be found that it has good reliability, validity, and utility. Otherwise, it is likely to result in lots of wasted time on the part of clinicians and money on the part of clients, and it is likely to lead to misdiagnoses. Garb mentioned that "in many instances, the addition of the Rorschach led to a decrease in validity" (p. 315)! Confirmation bias probably plays a huge role in the continuing use of such a useless tool: clinicians believe the Rorschach works, so they point to evidence that supports their claims and disregard evidence that shows the contrary.
I could keep going... the articles we read this week also showed evidence of the fundamental attribution error, such as when psychologists assume client deterioration is due to specific characteristics of an individual as opposed to the fact that a treatment is not working (Lilienfeld, 2007). They show that many clinicians do not understand base rates (Dawes et al., 1989). They also show that psychologists use tests, treatments, and predictive methods that lack empirical support (Garb, 1999; Dawes et al., 1989; Lilienfeld, 2007).
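Since base rates came up, here's a quick back-of-the-envelope Bayes calculation I did (all the numbers are my own, purely illustrative) showing how badly intuition fails when a disorder is rare:

```python
# Toy base-rate example (my numbers, not from any of the readings): even a
# fairly accurate diagnostic sign is usually wrong about a rare disorder.
base_rate = 0.02          # 2% of clients actually have the disorder
sensitivity = 0.90        # P(sign present | disorder)
false_positive = 0.10     # P(sign present | no disorder)

p_sign = sensitivity * base_rate + false_positive * (1 - base_rate)
p_disorder_given_sign = sensitivity * base_rate / p_sign

print(round(p_disorder_given_sign, 3))  # ~0.155: ~85% of flagged clients are false alarms
```

A clinician who ignores the 2% base rate and trusts the "90% accurate" sign will be wrong about the vast majority of the people the sign flags.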
Perhaps all practicing clinicians should be required to sit in on an Introduction to Psychology course every couple of years or so. It seems like they have forgotten a bunch of key principles that many freshmen could recite by heart. If clinicians remembered these principles, a lot of undue suffering could probably be avoided.
Monday, November 12, 2007
Week 12: Is Dexter a Life-Course-Persistent Antisocial Guy?
I found the Moffitt (1993) article fascinating. Her arguments for the two different types of antisocial people make sense to me intuitively. However, intuition is not always a good basis for scientific theories. I am dying to know whether her theories were ever supported empirically.
Moffitt's new taxonomy for antisocial behavior has HUGE implications for data collection. If Moffitt is correct and there are actually two different types of people who present with antisocial behaviors, distinguishable by when the behaviors begin and whether they ever desist, cross-sectional data collection will never be able to distinguish between the two groups successfully! It is essential that researchers use longitudinal methods if they want to distinguish adolescence-limited from life-course-persistent antisocial people. I kind of wish the second article we read this week was a follow-up on this one, so we could see how Moffitt's taxonomy fared when people attempted to back it up with empirical support. Has it been supported yet?
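To make this concrete for myself, I wrote a tiny simulation (my own cartoon version of the taxonomy, not Moffitt's data or model) of why a cross-sectional snapshot can't separate the two groups but a longitudinal design can:

```python
import random

def antisocial(age, group):
    """Cartoon behavior model: life-course-persistent (LCP) people offend at
    every age with some probability; adolescence-limited (AL) people offend
    only between roughly 13 and 19."""
    if group == "LCP":
        return random.random() < 0.7
    return 13 <= age <= 19 and random.random() < 0.7

random.seed(0)
people = ["AL"] * 90 + ["LCP"] * 10   # assumed proportions, just for illustration

# Cross-sectional snapshot at age 17: the two groups are indistinguishable.
print(sum(antisocial(17, g) for g in people), "of 100 offend at 17")

# Longitudinal follow-up: offending BEFORE adolescence and persisting AFTER it
# picks out the life-course-persistent people.
persistent = [
    any(antisocial(a, g) for a in range(8, 12))
    and any(antisocial(a, g) for a in range(21, 26))
    for g in people
]
print(sum(persistent), "flagged as life-course-persistent")  # roughly the 10 LCP people
```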
Moffitt said that delinquents have lower than average cognitive abilities. She stated that "this relation is not an artifact of slow-witted delinquents' greater susceptibility to detection by police; undetected delinquents have weak cognitive skills too" (p. 680). I really want to know how she knows this! If these people have not been caught, how do we know that they are engaging in delinquent behavior? Are they just very honest and frank when they self-report? I would like to know more about how she came to this conclusion. She cited a past article she wrote, but I wish she had added a sentence to THIS article about how it is possible to find out information like this.
Have any of you ever seen the TV show Dexter? Well, I'm kind of obsessed with it. It's pretty amazing. It is about a guy who works for the Miami police doing forensics. He helps the cops catch some pretty awful killers. The twist is that Dexter himself is actually a serial killer. And the other twist is that he only kills other killers. He's kind of like a "Dark Avenger"... like Batman, except much creepier. Anyway, I amuse myself by attempting to diagnose Dexter with different DSM disorders... for a while, I thought he was schizoid, because he used to seem as if he didn't need or want any human interaction and he has very flat affect. However, he has murdered close to 40 people, which leads me to believe he has a few antisocial tendencies. (haha... a few...) Dexter would definitely be a life-course-persistent type. He started off killing animals when he was young (just as many people who end up antisocial do) and he has had urges to kill people since he was in elementary school. However, his delinquency is not stable across situations. He doesn't steal/cheat/pick fights or anything like that. All he does is kill people. Does that mean he isn't a life-course-persistent antisocial? Dexter has lately been comparing himself to a drug addict--hinting that killing people is an addiction for him. Can you be addicted to killing people? What kind of diagnosis would that warrant? If anyone watches the show, please give me your input!! And if you don't watch the show, go rent the first season right now. Stop what you are doing and go to Blockbuster. Seriously. It's by far one of the best shows on television and so fascinating to pick apart and analyze.
Interesting side note: I had no idea whether Terrie Moffitt was male or female, but for some reason, I assumed male. However, I just googled Moffitt and found that she is, in fact, a woman! There is a higher prevalence of antisocial behavior among males--do you think this is why I assumed Moffitt was a man?
Monday, November 5, 2007
Week 11: How an article both made me really happy and really angry
I really enjoyed this week's reading, probably because it was relevant to my research. I like it when my homework doubles as a possible source for future papers. :)
In the Mineka and Zinbarg (2006) article, one point really stood out to me. While discussing the trauma phase of Posttraumatic Stress Disorder, they state that "survivors with PTSD were more likely than those without PTSD to retrospectively report having experienced mental defeat during their traumatization" (p. 18). After reading this, I noted in the margin "good to know". In my opinion, this one sentence was one of the most important sentences I've read all year. I know this statement was not supposed to be the part of the article that people walked away thinking about, but after reading the article, I couldn't stop thinking about how incredibly important this fact is. Mineka and Zinbarg are basically pointing out that not admitting defeat during a crisis can have serious psychological benefits. They are telling you: if you are in a crisis, DON'T GIVE UP. Do people know this??? If I am ever assaulted or in another traumatic situation, I now know what I should be thinking about in order to protect my psychological well-being. I feel that I now have a power I didn't have last week: the ability to help immunize myself against incapacitating psychological problems. This is amazing to me. I can't believe reading an article for class could empower me in such a way.
I found the Mineka and Zinbarg article extremely interesting and enlightening. I feel that I now have a good grasp on contemporary learning models, which I knew next to nothing about before I started reading. As an anxiety researcher with a cognitive background, I feel that I can incorporate what I learned from this article into my own personal theories and ideas about anxiety. I do not feel that cognitive and learning models are incompatible. However, Mineka and Zinbarg do not seem to agree with me...
Now on to some complaining...
In the conclusion of their article, Mineka and Zinbarg discussed why they feel contemporary learning theory models are better than other models (such as psychodynamic and cognitive). I give these guys full rein to diss psychodynamic theories, but cognitive theories?? That's not okay with me. Why does the learning model have to be BETTER? Why can't the multiple models complement each other? I think it is very closed-minded of the authors to simply brush aside cognitive models (and okay... I suppose psychodynamic models probably have some good points too).
Also, if Mineka and Zinbarg are going to say that cognitive models are less comprehensive than contemporary learning models, they should get their facts straight. They stated that "the cognitive model is silent about the variety of different vulnerability factors that the learning theory approach explicitly addresses affecting which individuals with panic attacks are most likely to develop PD or PDA" (p. 22). This is simply not true. Cognitive researchers use a questionnaire called the Anxiety Sensitivity Index to look at people's cognitions related to panic symptoms (it measures their fear of those symptoms). This questionnaire is a very helpful measure for identifying people who will develop PD. Nice try, Mineka and Zinbarg. Nice try.
Ok, see you all on Wednesday!
Monday, October 29, 2007
Week 10: Feeling blue?
I just read the Coyne chapter "Thinking Interactionally about Depression: A Radical Reinstatement". Wow. That guy has a lot of nerve. I have a lot to say, but I'll narrow it down to a few of the things that bothered me the most about the chapter (and Coyne as a human).
1) At the very beginning of the article, Coyne complains that the literature still cites a paper he wrote in 1976 in which he conceptualized an interpersonal theory of depression. Coyne states that "it has been disappointing that it is taking so long for subsequent work to move beyond my [1976] conceptualization" (p. 365). I agree that it is important for other researchers to continue producing new theories and backing them up with new empirical evidence, but I think it is a ridiculous thing to complain about. If Coyne is unhappy about this, HE should fix it by doing more research until his first ideas become obsolete, replaced by newer ones that are better supported by new data. I know that he HAS continued to do research, but he obviously hasn't done enough to override his original publication. So, Coyne (if you are still alive), stop complaining and get back to work.
2) I can't believe I just wrote "if you are still alive". I am extremely insensitive.
3) On page 367, Coyne mentions that "it is a lot" to ask participants to report on subtle mood shifts in a short period of time. It has been shown (by Tim Wilson, from our department!) that people generally suck at introspection. Therefore, I agree with Coyne that it is a bad idea to use self-report as the only measure in depression research. I feel it is extremely important for researchers to find other ways to measure facets of depression, since self-report cannot always be trusted to be accurate. This leads me to my next point...
4) Coyne says one of the biggest problems with depression research is that many researchers use participants' statements about themselves as "evidence of enduring cognitive structures" (p. 368). As I mentioned in the last point, I agree with Coyne that self-report is not a great method to use. However, Coyne does not acknowledge the fact that there is a plethora of other methods available to look at cognitive structures. With current technology, we have the ability to look at memory biases, interpretation biases, attention biases, implicit associations, and much more. All of these things are validated ways to examine cognitive structures and processes. However, many of these measures are used more often by cognitive and social psychologists than clinical psychologists. I strongly believe that it is necessary for psychologists to be more integrative in their research approaches; they need to be more willing to look in other areas to find better methods. I know we talked about this a few weeks ago, but integration is what is necessary to truly advance the science of psychology.
5) Coyne seems like a jerk. I just thought I'd put that out there.
Ok, time for bed!! See you all on Wednesday!
Sunday, October 21, 2007
Week 9: Stimulus Control
I really enjoyed this week's reading on Stimulus Control. The article made it seem like SC is incredibly easy for any therapist to implement (even though clients may be noncompliant). It does not seem to require much (if any) psychological training to get someone to start using it. I mean, the article mentioned that nurses and general practitioners have successfully taught it to their patients.
I want to learn how to administer SC. Not because it seems easy or because it doesn't seem to require psychological training... but because it seems to really work. I would love to be a therapist with a high success rate. I want to help people, and if this is a way to make people more satisfied with their lives, I want to use it. I think it would be great to treat only insomniacs, use only SC, and just sit back and smile as all my clients start getting long, healthy, happy, full nights of sleep.
Ok--pause. I think I have found a problem with the SC article. Never before have I read about a therapy and thought, "WOW, this seems perfect!" I think this suggests that the article we read was biased and did not include research that sheds a negative light on SC. Even the negative aspects of using SC (such as the high noncompliance rate) were sugar-coated by emphasizing strategies that can counteract them. I am 99% positive that SC must have downsides, and I think I would have a fuller, more in-depth understanding of it if the article had included more information about them. No treatment is perfect... right? Or is SC the PERFECT treatment for insomniacs?
I have a weird question about SC. To follow the steps correctly, nothing can be done in the bed except sleep & sex. Why is sex allowed? I mean, I know sex typically occurs in the bedroom and it would probably be taboo for therapists to suggest doing it outside of the bedroom, but doesn't sex create high levels of arousal? Wouldn't SC be more effective if sex were relegated to another room (perhaps a spare bedroom?) as well? Have there been any studies showing that sex in the bedroom does not hinder the effects of SC?
Ok, now that I brought up that awkward topic, I think I'll stop before I embarrass myself anymore. See you all in class on Wednesday!
Monday, October 1, 2007
Week 7: Behavioral Activation
I'm having a friend from out of town visit this weekend, so I decided to do next week's reading & blog this week! So lucky you, Mr. or Miss Reader... you get a double dose of Shari! If you want to see what I have to say about CBT, scroll down to the next blog post. (In case you're wondering... I love CBT.)
I enjoyed the readings this week. I found them informative and interesting. I want to applaud the Jacobson (2001) article. I think he is doing psychology the way it SHOULD be done. Jacobson was part of a team that found that a certain type of therapy had a significant effect; he then formulated theories and models based on the therapy, and after that, he conducted a large clinical trial to test its efficacy (Jacobson et al., 1996). This is very different from what many clinicians (I'll call them the "unscientific clinicians") do: they find a therapy that they think works... so they do that kind of therapy, without any rigorous testing or theorizing. Based on the article we read, I think Jacobson is a true psychological scientist. I aspire to do work that is as scientific as his.
I'm very interested in Behavioral Activation. I have to admit that I knew relatively little about it before reading this article. I really approve of any therapy that has its roots in something with as much empirical support as behaviorism. I like that the idea of using Behavioral Activation as a stand-alone therapy came from a scientific study, rather than a therapist's random idea. I think random ideas can also be brilliant (so long as they are tested soon after they are generated), but the fact that this form of therapy sprang from research adds to its credibility.
Although I am interested in Behavioral Activation, I am still a little skeptical of it. It seems counterintuitive not to treat cognitions in depressed clients. I feel that targeting both behavioral and cognitive problems is a more thorough way to treat depression, but if it can save clients money and time, I suppose it is ok to do only one of the two if that is proven to be sufficient. I am very curious to know the results of the large study Jacobson was working on at press time, to see whether BA truly is more efficacious than CT.
Some of the "unscientific clinicians" I mentioned earlier, the ones who practice without empirical support or plans of eventual empirical support, may complain that they shouldn't have to validate their therapy with empirical evidence because they can tell their therapy works just by interacting with their clients. Well, we learned from our first set of readings that this is not the case most of the time, since clinical judgment typically sucks. So if any of these "unscientific clinicians" are reading this (which I doubt, since I assume only our class reads this and we all seem to be supporters of empirical support), I'd like to tell you to get your act together and act like a scientist. You got a Ph.D. or Psy.D., which shows you're smart, so do some research or serious theorizing and prove that you deserve your title!
Alright, that's all for now! See you all in class. :)
Week 6: CBT is great & Shari may be paranoid...
So if you haven't already figured it out, I'm a fan of CBT. I'm actually a huge fan of CBT. Which is odd, considering I have never actually READ or SEEN a CBT manual. I know what CBT is, and I know that it is based on SCIENCE, while a bunch of other forms of therapy are not, so I decided I'm a huge fan. This week made me happy because I got to learn more about CBT and I got lots of evidence, in the form of meta-analyses, to support its efficacy/effectiveness/efficiency.
Now that I've made it clear that I am honestly a fan of CBT, I want to take a little bit of time to complain about meta-analyses. I have decided that they are a very sneaky way to report information. I don't think that the authors of meta-analyses are always purposefully being sneaky...but I think it is much easier to fudge a few facts or hide a few errors in a meta-analysis than it is in a report on a single experiment.
Here are a few instances from the Butler (2006) meta-meta-analytic article we read for class in which I believe meta-analytic authors were being "tricky":
- It is mentioned that Parker et al. did not report the criteria they used to select studies for their review paper, and as a result, "it is difficult to interpret their conclusions" (p. 20). In my mind, this means the authors were probably doing something sneaky when selecting which articles to include. It is very possible that they only chose articles that supported their own opinions. This seems especially likely when you take into account that "researcher allegiance accounted for half the difference between CBT and other treatments" in a different meta-analytic study done by Dobson in 1989 (p. 20).
- In a 1998 meta-analysis, Gloaguen et al. found that CBT had significantly better outcomes than medication for depression. However, Gloaguen et al. "included some early studies comparing CT with medications, which had methodological features that favored CT" (p. 23). Was this done on purpose? If so, it is a very sneaky way to prove your point: purposely include studies that have methodological advantages for whatever you favor.
- Ten studies (well, ten meta-analytic articles) were excluded from Butler's (2006) meta-meta-analysis because they were written in a foreign language. What if all ten of these articles pointed to results that were extremely different from the others? What if they would have greatly impacted the meta-meta-analysis? We will never know. We will also never know whether Butler honestly excluded these studies because of a language gap or because there was something in them he wanted to hide. I doubt this is the case, but it is a possibility. Sure, it would be expensive to translate the articles into English, but I think it would have been a good idea if Butler et al. had the funds.
Perhaps I am just paranoid and all of these things I find "sneaky" are actually very normal. However, I really do think meta-analyses are a great way to try to prove your point without really having to explain every little aspect of your procedure.
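To see for myself how much study selection can matter, I played with a toy fixed-effect meta-analysis (the effect sizes are invented by me, not from Butler et al. or any of the meta-analyses discussed):

```python
# Invented effect sizes, just to see how much study selection moves a pooled
# estimate in a fixed-effect meta-analysis.

def pooled_effect(studies):
    """Inverse-variance weighting: each study's effect size d is weighted
    by 1/variance, so more precise studies count more."""
    weights = [1.0 / var for _, var in studies]
    return sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# (effect size d, variance) for six hypothetical studies of one treatment
studies = [(0.9, 0.04), (0.7, 0.05), (0.6, 0.06),
           (0.1, 0.05), (0.0, 0.04), (-0.2, 0.06)]

print(round(pooled_effect(studies), 2))      # 0.37 with every study included
print(round(pooled_effect(studies[:3]), 2))  # 0.75 if you "select" only the favorable ones
```

Same pooling formula, same field of studies; the only thing that changed was which studies made it past the selection criteria, and the pooled effect doubled.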
In other news, I still love CBT.
Monday, September 24, 2007
Week 5: The Relationship
I was disappointed by the two articles we read this week. I didn't find them nearly as interesting as the past few weeks' readings. I usually LOVE to talk about relationships. I guess the therapeutic one just doesn't excite me the way gossiping about my friends' relationships does.
This brings me to the first thing I want to talk about: friendship. The Kirschenbaum and Jourdan (2005) article repeatedly pointed out that "much of the latest research on psychotherapy outcomes has demonstrated that, rather than particular approaches, it is certain 'common factors' in the therapy relationship that account for therapeutic change" (p. 44). These common factors include things such as "warmth, respect, empathy, acceptance and genuineness, positive relationship, and trust" (p. 44). Correct me if I'm wrong, but aren't these characteristics the same things you would expect out of a close friend? I know they are things I expect from my closest friends. So if it is true that different psychological approaches (such as CBT, IPT, etc...) don't matter nearly as much as these "common factors," why go to a therapist? Just go talk to your best friend for a couple of hours. At least you won't have to pay him/her.
Ok, I obviously don't believe that friends should replace therapists. Yet, I wonder if it is possible to teach somebody to have all of these "common factors". Is it possible to teach warmth? Or is it just a trait people either have or don't have? Have there been studies on this? I assume there must be...I just don't know of them.
I'm very surprised that this article repeatedly emphasized that psychology's theoretical schools "seem no better than one another" when common factors are considered, as opposed to stressing the importance of finding out how these common factors can interact with different treatments. I think that would be much more interesting (& important to our field) than what we read in this article.
Research has shown that CBT is better than other treatments for certain disorders (e.g., specific phobias). If Kirschenbaum and Jourdan are right, this would imply that CBT specialists just happen to be better at creating a therapeutic relationship than other types of therapists. I don't buy it. Why would CBT therapists be better at this than other therapists?
Alright...that is enough complaining out of me for this week. See you all on Wednesday.
Oh, on a side note, one of my favorite WashU professors (Richard Kurtz) was cited in this article (on p. 47)! If anybody is at all interested in research on hypnosis, I'd really recommend checking out his work. He does some pretty interesting stuff. And he is one of the most interesting people I've ever met.
Monday, September 17, 2007
Week 4: EST 2 Reactions
I read the Meehl article and promptly decided to add Meehl to my list of heroes. This is a big deal considering the fact that most of my heroes come from television. So Meehl is the first psychologist to officially be in Shari's list of heroes. I would love to present this article in class tomorrow, but watch...tomorrow will be the one day I'm not actually picked to present...Ok, onto a more mature reaction....
I absolutely loved that Meehl opened his article by comparing clinical experience to the "science" of diagnosing witches. It completely set the caustic tone for the rest of the piece. I believe a caustic, sarcastic tone is one of the best ways one can discuss the lack of science in the so-called science our class is entering. The sarcasm highlights how laughable the state of the field is and it emphasizes that changes need to be made if we (clinical psychologists) want to be taken seriously.
Meehl mentioned his "clinical laziness" when discussing his observations of males dreaming of fire (p. 93). From what I have read and heard thus far, this laziness seems much too common. Imagine how much more SCIENCE (controlled experiments, hypothesis testing, statistical model building) could be done if clinicians took more time to lay the groundwork by systematically organizing their observations and theories and finding a way to get them to the public! That way, those of us who work in labs wouldn't have to guess what was going on behind the closed doors of clinics and private practices. If there were more open communication between the psychologists who run experiments and the psychologists who "use" the results of those experiments, I think the field could advance at an exponentially faster rate. (I put "use" in quotes because it seems like some psychologists ignore experimental results and continue using what they know from "experience". And I put "experience" in quotes because Meehl's article pointed out that this experience is typically garbage. Or at least "unavoidably a mixture of truths, half-truths, and falsehoods". And I put THAT in quotes because I took it word-for-word from the abstract on p. 91.)
I believe there is no excuse for using therapeutic methods that have no empirical support UNLESS those methods are being used to gain insight or make observations about a method that will be experimentally tested in the very near future. Psychologists are paid a LOT of money to help people, and those people put their psychological well-being in the hands of these "professionals". If they aren't using methods that are supported by cold, hard data, they don't deserve to be practicing. Perhaps I'll change my mind once I begin practicing myself... but I hope not. Because then I would be no better than the people I am currently badmouthing.
Wow, so I guess this was not my most mature writing ever. But it is definitely something I am passionate about and I am really looking forward to discussing this piece in class on Wednesday.
Monday, September 10, 2007
Week 3: EST Reactions
I think the article by Chambless and Hollon is an extremely important contribution to the EST literature. The authors did a wonderful job of specifying exactly what needs to be accomplished for a treatment to be considered efficacious. However, one aspect of their criteria seems to require more detailed explanation. I would very much like to know why the authors decided that only two studies showing significant results are needed for a treatment to be considered empirically supported. It is obvious why one study alone should not be viewed as enough empirical evidence: one study's results could be the product of a certain setting or therapist, could result from experimenter bias, or could just be a random fluke. I understand that these studies are expensive and time-consuming, but I still believe more than two studies should be done before a treatment is viewed as an EST. Even with all the other specifications, such as reanalyzing data, judging study design, and so forth, two seems like an arbitrarily picked (and very leniently low) number. Perhaps Chambless and Hollon could have elaborated on why two, rather than three or four, was determined to be the cut-off. I also think it is extremely lenient for a treatment to be labeled possibly efficacious on the basis of one study alone (or research conducted by only one team) finding the treatment successful. Maybe I am just too strict, but if I am going to use a treatment as a clinician (or receive it as a client), I want to be pretty sure that the treatment will work, and one or two studies backing it up is not going to cut it for me.
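Out of curiosity, I did some quick arithmetic (entirely my own, not from Chambless and Hollon) on how often an ineffective treatment could clear a "two significant studies" bar by luck:

```python
# My own back-of-the-envelope arithmetic, not Chambless & Hollon's analysis.
alpha = 0.05                   # conventional false-positive rate per study
print(alpha ** 2)              # 0.0025: chance a useless treatment "wins" twice

# Sounds strict, but across many tested-and-useless treatments some slip through:
null_treatments = 400          # hypothetical number of ineffective candidates
print(null_treatments * alpha ** 2)   # ~1 bogus "EST" expected by luck alone

# And if a team runs n studies and reports the significant ones, the chance of
# getting AT LEAST two hits on a genuinely null effect grows fast:
def p_at_least_two(n, a=0.05):
    return 1 - (1 - a) ** n - n * a * (1 - a) ** (n - 1)

print(round(p_at_least_two(10), 3))  # ~0.086 for ten tries
```

So "two significant studies" is only as strict as the number of attempts behind those two studies, which is exactly the kind of detail I wish the authors had discussed.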
I am also very interested in the debate about whether therapist experience matters for treatment outcome. I found the way Chambless and Hollon rebutted this suggestion fascinating: they immediately noted that they expect training to matter in specific interventions (a.k.a. the ones that have not been tested yet). They back this up with much less empirical evidence than I would expect, which is odd considering the whole article is about providing empirical evidence for treatments. They are also quick to attack research that has shown experience to be unrelated to treatment outcome (e.g., Christensen & Jacobson, 1994; Strupp & Hadley, 1979). I worry that this section of the article may reflect the authors' bias (or perhaps denial) rather than empirical support.
Monday, September 3, 2007
Week 2: DSM Reactions
I want to discuss two issues in this blog. The first is whether researchers should focus on psychological phenomena or psychiatric diagnoses and the second is whether clinicians should use categorical or dimensional diagnoses. For both issues, I believe BOTH aspects are equally important and BOTH need to be researched/used.
When few empirical studies have been done on a specific psychological phenomenon, I agree with Persons' (1986) view that researchers should focus their research on the phenomenon rather than on psychiatric diagnoses. However, I believe this should only be the first step in a two-step process. Once a solid foundation of empirical studies has accumulated on the phenomenon and multiple theories have been formulated and tested, the logical next step is to study those theories IN specific psychiatric disorders. As Persons noted, it is much easier to develop theories about the psychological processes underlying specific symptoms than to develop theories explaining the psychological processes that lead to psychiatric disorders. As a result, it makes sense to study the symptom specifically as a first step. However, is it valid to generalize theories focusing on only one symptom to people with diagnosed disorders? For instance, a researcher looking at loosening of associations may have subjects who would fall into many different diagnostic categories. Although the researcher will be more likely to learn about possible etiological aspects of loosening of associations, there will be no evidence that what is found will generalize to subjects who suffer from loosening of associations AND have been diagnosed with schizophrenia. It is possible that the loosening of associations common to schizophrenics is actually very different from the loosening of associations seen in other disorders. It is essential for researchers to look at symptoms within specific diagnostic categories once theories have been tested on the symptoms alone.
Although I planned to discuss only the Persons article, I want to mention a quick thought on categorical vs. dimensional diagnoses. Both Widiger and Clark (2000) and Allen (1998) proposed using dimensional diagnoses instead of categorical diagnoses. If this proposal were taken seriously, I think it would be a ridiculous loss (or perhaps waste) of solid past research on categorical diagnoses. The idea of using dimensional diagnoses has a lot to offer (it lacks arbitrary cutoffs and can give more information than categorical diagnoses), but I do not think it should completely replace categorical diagnoses. In my opinion, psychologists should use both. Otto Kernberg proposed a dimensional model in which people are rated on a scale from Range I to Range V (I is normal, II is neurotic, III is upper-level borderline, IV is lower-level borderline, and V is psychotic). This scale takes a more universal approach to diagnosis (similar to Axis V of the multiaxial system) and provides useful information about people that cannot be obtained from a categorical diagnosis alone. A Range IV anorexic patient is qualitatively different from a Range V anorexic patient: the Range V patient would have poorer reality testing and worse social reality testing, among other issues. However, if only the categorical diagnosis were used, the two patients would seem deceptively similar. I believe it is important to integrate new methods of diagnosis into current methods, rather than simply choosing one or the other.