Monday, November 26, 2007

Ok, the heading is a bit of an exaggeration. I didn't learn everything I need to know in my freshman year Intro to Psychology course. However, I learned many important things in that course that many M.S./Ph.D./Psy.D.-wielding psychologists seem to have forgotten.
1) To be able to make inferences about cause and effect, you need to perform a controlled experiment with a representative sample of the population.
The Dawes, Faust, and Meehl (1989) article discusses the superiority of the actuarial method of judgment. Apparently, clinicians who use the clinical method as opposed to the actuarial method believe that their experience gives them special intuitive powers that allow them to predict outcomes for their patients. However, clinicians base their predictions on the "skewed sample of humanity" that they see, which is not a "truly representative sample" (p. 1671). As a result, "it may be difficult, if not impossible, to determine relations among variables" (p. 1671). To make accurate predictions, clinicians must look at both people with a disorder AND people without it. If clinicians want to make better predictions without relying on actuarial methods, they should perform studies (or refer to published studies) that examine a truly representative sample of the population.
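To make the contrast concrete, here is a minimal sketch (in Python; every variable and number is invented for illustration) of what an "actuarial" judgment looks like: an explicit statistical rule fit to a representative sample that contains people both with and without the outcome, rather than a clinician's intuition about a skewed caseload.

```python
# Minimal actuarial-prediction sketch. All data are simulated and the
# predictors (symptom_score, prior_incidents) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical predictors measured for everyone in a representative sample
symptom_score = rng.normal(50, 10, n)
prior_incidents = rng.poisson(1.0, n)

# Simulated outcome whose probability rises with both predictors
logit = -8 + 0.1 * symptom_score + 0.8 * prior_incidents
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the explicit rule; the "judgment" for a new case is just the formula
X = np.column_stack([symptom_score, prior_incidents])
model = LogisticRegression().fit(X, outcome)
print(model.predict_proba([[60, 2]])[0, 1])  # estimated probability of the outcome
```

The key property is that the rule is derived from, and can be checked against, a sample that includes people without the disorder, which is exactly what an individual clinician's caseload is not.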
2) Self-fulfilling Prophecies
As I learned in my first psychology course, a self-fulfilling prophecy occurs when someone (Person A) expects someone else (Person B) to act in a certain way. These expectations lead to subtle changes in Person A's behavior, which elicit the expected behavior from Person B. In other words, because Person A expects Person B to act in a certain way, Person B acts that way!
In some cases, it seems that clinical judgments result in self-fulfilling prophecies (Dawes et al., 1989). When this happens, clinical judgments cause outcomes instead of predicting them. This can be a huge problem, especially when the outcomes carry negative repercussions, such as a prediction that a person will act violently.
3) Hindsight Bias
Hindsight bias refers to the fact that outcomes seem more predictable (and obvious) once they are known than when they are being predicted, and that past predictions are often remembered as being in line with the actual outcomes, regardless of what the person originally predicted. Clinicians often recall their original judgments as being in line with outcomes (Dawes et al., 1989). As a result, their original predictions lose their value.
4) Confirmation Bias
Confirmation bias refers to the human tendency to look for evidence that supports one's hypotheses and to reject information that is not in line with them. Clinical judgment is often subject to confirmation bias, so clinicians often believe they are correct in their predictions when they are not (Dawes et al., 1989)! This results in overconfidence in clinical judgment. Dawes et al. point out a study "demonstrating the upper range of misappraisal, [in which] most clinicians were quite confident in their diagnosis although not one was correct" (p. 1672)!
Confirmation bias not only leads to incorrect predictions and unmerited confidence in clinicians, but also to the perseverance of potentially harmful treatments (PHTs). In his 2007 article about PHTs, Lilienfeld states that "persistent beliefs concerning the efficacy of PHTs may, in turn, be mediated by attributions regarding the causes of client deterioration" (p. 64). As a result, therapies that can cause symptom worsening and sometimes even death continue to be available to an uninformed public.
Garb (1999) pointed out 8 years ago that clinicians should stop using the Rorschach inkblot test unless we can find solid evidence that it has good reliability, validity, and utility. Otherwise, it is likely to result in a lot of wasted time for clinicians and wasted money for clients. It is also likely to lead to misdiagnoses. Garb mentioned that "in many instances, the addition of the Rorschach led to a decrease in validity" (p. 315)! Confirmation bias probably plays a huge role in the continuing use of such a useless tool: clinicians believe that the Rorschach works, so they point to evidence that supports their claims and disregard evidence to the contrary.
I could keep going...the articles we read this week also showed evidence of the fundamental attribution error, such as when psychologists assume client deterioration is due to specific characteristics of an individual rather than to the fact that a treatment is not working (Lilienfeld, 2007). They show that many clinicians do not understand base rates (Dawes et al., 1989). They also show that psychologists use tests, treatments, and predictive methods that lack empirical support (Garb, 1999; Dawes et al., 1989; Lilienfeld, 2007).
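On base rates specifically, a quick worked example (all numbers invented) shows the mistake Dawes et al. have in mind: even a reasonably accurate test produces mostly false positives when the disorder it detects is rare.

```python
# Worked base-rate example (all numbers invented for illustration).
base_rate = 0.02      # 2% of the population has the disorder
sensitivity = 0.90    # P(positive test | disorder)
specificity = 0.90    # P(negative test | no disorder)

# Bayes' rule: P(disorder | positive test)
p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_disorder_given_positive = sensitivity * base_rate / p_positive

print(f"P(disorder | positive test) = {p_disorder_given_positive:.2f}")
# ≈ 0.16: roughly five out of six positive results are false alarms,
# even though the test is "90% accurate" in both directions.
```

A clinician who ignores the 2% base rate and trusts the positive test will be wrong about five times out of six.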
Perhaps all practicing clinicians should be required to sit in on an Introduction to Psychology course every couple of years or so. It seems they have forgotten a bunch of key principles that many freshmen could recite by heart. If clinicians remembered these principles, a lot of undue suffering could probably be avoided.
Monday, November 12, 2007
Week 12: Is Dexter a Life-Course-Persistent Antisocial Guy?
I found the Moffitt (1993) article absolutely fascinating. Her arguments for the two different types of antisocial people make sense to me intuitively. However, intuition is not always a good thing to base scientific theories on. I'm dying to know whether her theories were ever supported empirically.
Moffitt's new taxonomy for antisocial behavior has HUGE implications for data collection. If Moffitt is correct and there are actually two different types of people who present with antisocial behaviors, distinguishable by when the behaviors begin and whether they ever desist, cross-sectional data collection will never be able to tell the two groups apart! It is essential that researchers use longitudinal methods if they want to distinguish adolescence-limited from life-course-persistent antisocial people. I kind of wish the second article we read this week were a follow-up on this one, so we could see how Moffitt's new taxonomy fared when people attempted to back it up with empirical support. Has it been supported yet?
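To see why cross-sectional data fail here, consider a toy sketch of the taxonomy (the age cutoffs are invented for illustration): a snapshot taken at age 17 cannot tell the two groups apart, while a longitudinal view can.

```python
# Toy sketch of Moffitt's two-group taxonomy (age cutoffs invented).
def is_antisocial(age: int, group: str) -> bool:
    if group == "life-course-persistent":
        return True                 # antisocial at every age
    if group == "adolescence-limited":
        return 13 <= age <= 19      # antisocial only during adolescence
    raise ValueError(f"unknown group: {group}")

# Cross-sectional snapshot at age 17: the groups are indistinguishable
print(is_antisocial(17, "life-course-persistent"))  # True
print(is_antisocial(17, "adolescence-limited"))     # True

# Longitudinal view: the trajectories diverge outside adolescence
for age in (10, 17, 25):
    print(age,
          is_antisocial(age, "life-course-persistent"),
          is_antisocial(age, "adolescence-limited"))
```

Only repeated measurements of the same people, before and after adolescence, reveal who desists.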
Moffitt said that delinquents have lower than average cognitive abilities. She stated that "this relation is not an artifact of slow-witted delinquents' greater susceptibility to detection by police; undetected delinquents have weak cognitive skills too" (p. 680). I really want to know how she knows this! If these people have not been caught, how do we know that they are engaging in delinquent behavior? Are they just very honest and frank when they self-report? I would like to know more about how she came to this conclusion. She cited a past article she wrote, but I wish she had added a sentence to THIS article about how it is possible to find out information like this.
Have any of you ever seen the TV show Dexter? Well, I'm kind of obsessed with it. It's pretty amazing. Anyway, it is about a guy who works for the Miami police doing forensics. He helps the cops catch some pretty awful killers. The twist is that Dexter himself is actually a serial killer. And the other twist is that he only kills other killers. He's kind of like a "Dark Avenger"...like Batman, except much creepier. Anyway, I amuse myself by attempting to diagnose Dexter with different DSM disorders... for a while, I thought he was schizoid, because he used to seem as if he didn't need or want any human interaction and he has very flat affect. However, he has murdered close to 40 people, which leads me to believe he has a few antisocial tendencies. (haha...a few...) Anyway, Dexter would definitely be a life-course-persistent type. He started off killing animals when he was young (just as many people who end up antisocial do), and he has had urges to kill people since he was in elementary school. However, his delinquency is not stable across situations. He doesn't steal/cheat/pick fights or anything like that. All he does is kill people. Does that mean he isn't a life-course-persistent antisocial? Dexter has lately been comparing himself to a drug addict--hinting that killing people is an addiction for him. Can you be addicted to killing people? What kind of diagnosis would that warrant? If anyone watches the show, please give me your input!! And if you don't watch the show, go rent the first season right now. Stop what you are doing and go to Blockbuster. Seriously. It's by far one of the best shows on television and so fascinating to pick apart and analyze.
Interesting side note: I had no idea whether Terrie Moffitt was male or female, but for some reason, I assumed male. However, I just googled Moffitt and found that she is, in fact, a woman! Antisocial behavior has a higher prevalence rate among males--do you think this is why I assumed Moffitt was a man?
Monday, November 5, 2007
Week 11: How an article both made me really happy and really angry
I really enjoyed this week's reading, probably because it was relevant to my research. I like when my homework doubles as a possible source for future papers. :)
In the Mineka and Zinbarg (2006) article, one point really stood out to me. While discussing the trauma phase of Posttraumatic Stress Disorder, they state that "survivors with PTSD were more likely than those without PTSD to retrospectively report having experienced mental defeat during their traumatization" (p. 18). After reading this, I noted in the margin "good to know". In my opinion, this one sentence was one of the most important sentences I've read all year. I know that this statement was not supposed to be the part of the article that people walked away thinking about, but after reading it, I couldn't stop thinking about how incredibly important this fact is. Mineka and Zinbarg are basically pointing out that not admitting defeat during a crisis can have serious psychological benefits. They are telling you: if you are in a crisis, DON'T GIVE UP. Do people know this??? If I am ever assaulted or in another traumatic situation, I now know what I should be thinking about in order to protect my psychological well-being. I feel that I now have a power I didn't have last week: the ability to help immunize myself against incapacitating psychological problems. This is amazing to me. I can't believe reading an article for class could empower me in such a way.
I found the Mineka and Zinbarg article extremely interesting and enlightening. I feel that I now have a good grasp on contemporary learning models, which I knew next to nothing about before I started reading this article. As an anxiety researcher with a cognitive background, I feel that I can incorporate what I learned from this article into my own personal theories and ideas about anxiety. I do not feel that cognitive and learning models are incompatible. However, Mineka and Zinbarg do not seem to agree with me...
Now on to some complaining...
In the Conclusion of their article, Mineka and Zinbarg discussed why they felt contemporary learning theory models are better than other models (such as psychodynamic and cognitive). I give these guys free rein to diss psychodynamic theories, but cognitive theories?? That's not okay with me. Why does the learning model have to be BETTER? Why can't multiple models complement each other? I think it is very closed-minded of the authors to simply brush aside cognitive models (and okay...I suppose psychodynamic models probably have some good points too).
Also, if Mineka and Zinbarg are going to say that cognitive models are less comprehensive than contemporary learning models, I think they should get their facts straight. They stated that "the cognitive model is silent about the variety of different vulnerability factors that the learning theory approach explicitly addresses affecting which individuals with panic attacks are most likely to develop PD or PDA" (p. 22). This is simply not true. Cognitive researchers use a questionnaire called the Anxiety Sensitivity Index to look at people's cognitions related to panic symptoms (it measures their fear of these symptoms). This questionnaire is a very helpful measure for identifying people who will develop PD. Nice try, Mineka and Zinbarg. Nice try.
Ok, see you all on Wednesday!
Monday, October 29, 2007
Week 10: Feeling blue?
I just read the Coyne chapter "Thinking Interactionally about Depression: A Radical Reinstatement". Wow. That guy has a lot of nerve. I have a lot to say, but I'll narrow it down to a few of the things that bothered me the most about the chapter (and Coyne as a human).
1) At the very beginning of the article, Coyne complains about the fact that the literature still cites a paper he wrote in 1976 in which he conceptualized an interpersonal theory of depression. Coyne states that "it has been disappointing that it is taking so long for subsequent work to move beyond my [1976] conceptualization" (p. 365). I agree that it is important for other researchers to continue producing new theories and backing them up with new empirical evidence, but I think it is a ridiculous thing to complain about. If Coyne is unhappy about this, HE should be fixing it by doing more and more research until his original ideas become obsolete because newer, better ones are supported by stronger data. I know that he HAS continued to do research, but he obviously hasn't done enough to supersede his original publication. So, Coyne (if you are still alive), stop complaining and get back to work.
2) I can't believe I just wrote "if you are still alive". I am extremely insensitive.
3) On page 367, Coyne mentions how "it is a lot" to ask participants to report on subtle mood shifts in a short period of time. It has been shown (by Tim Wilson, from our department!) that people generally suck at introspection. Therefore, I agree with Coyne that it is a bad idea to use self-report as the only measure in depression research. I feel that it is extremely important for researchers to find other ways to measure facets of depression, since self-report cannot always be trusted to be accurate. This leads me to my next point....
4) Coyne says one of the biggest problems with depression research is that many researchers use participants' statements about themselves as "evidence of enduring cognitive structures" (p. 368). As I mentioned in the last point, I agree with Coyne that self-report is not a great method to use. However, Coyne does not acknowledge the fact that there is a plethora of other methods available to look at cognitive structures. With current technology, we have the ability to look at memory biases, interpretation biases, attention biases, implicit associations, and much more. All of these things are validated ways to examine cognitive structures and processes. However, many of these measures are used more often by cognitive and social psychologists than clinical psychologists. I strongly believe that it is necessary for psychologists to be more integrative in their research approaches; they need to be more willing to look in other areas to find better methods. I know we talked about this a few weeks ago, but integration is what is necessary to truly advance the science of psychology.
5) Coyne seems like a jerk. I just thought I'd put that out there.
Ok, time for bed!! See you all on Wednesday!
Sunday, October 21, 2007
Week 9: Stimulus Control
I really enjoyed this week's reading on Stimulus Control (SC). The article made it seem like SC is incredibly easy for any therapist to implement (even though clients may be noncompliant). It does not seem to require much (if any) psychological training to get someone to start using it. I mean, the article mentioned that nurses and general practitioners have been successful at teaching it to their patients.
I want to learn how to administer SC. Not because it seems easy or because it doesn't seem to require psychological training...but because it seems to really work. I would love to be a therapist with a high success rate. I want to help people, and if this is a way to make people more satisfied with their lives, I want to use it. I think it would be great to treat only insomniacs, use only SC, and just sit back and smile as all my clients start getting long, healthy, happy, full nights of sleep.
Ok--pause. I think I have found a problem with the SC article. Never before have I read about a therapy and thought, "WOW, this seems perfect!". I think this suggests that the article we read was biased and did not include any research that sheds a negative light on SC. Even the negative aspects of using SC (such as the high noncompliance rate) were sugar-coated by emphasizing the strategies that can be used to counteract them. I am 99% positive that there have to be negative aspects of SC, and I think I would have a deeper, fuller understanding of it if the article had included more information about them. No treatment is perfect...right? Or is SC the PERFECT treatment for insomniacs?
I have a weird question about SC. To follow the steps correctly, nothing can be done in the bed except sleep & sex. Why is sex allowed? I mean, I know sex typically occurs in the bedroom and it would probably be taboo to make therapists suggest doing it outside of the bedroom, but doesn't sex create high levels of arousal? Wouldn't SC be more effective if sex were relegated to another room (perhaps a spare bedroom?) as well? Have there been any studies showing that sex in the bedroom does not hinder the effects of SC?
Ok, now that I brought up that awkward topic, I think I'll stop before I embarrass myself anymore. See you all in class on Wednesday!
Monday, October 1, 2007
Week 7: Behavioral Activation
I'm having a friend from out of town visit this weekend, so I decided to do next week's reading & blog this week! So lucky you, Mr. or Miss Reader...you get a double dose of Shari! If you want to see what I have to say about CBT, scroll down to the next blog post. (In case you're wondering...I love CBT.)
I enjoyed the readings this week. I found them informative and interesting. I want to applaud the Jacobson (2001) article. I think he is doing psychology the way it SHOULD be done. Jacobson was part of a team that found that a certain type of therapy had a significant effect, so he formulated theories and models based on the therapy, and after that he ran a large clinical trial to test its efficacy (Jacobson et al., 1996). This is very different from what many clinicians (I'll call them the "unscientific clinicians") do: they find a therapy that they think works...so they do that kind of therapy, without any rigorous testing or theorizing. Based on the article we read, I think Jacobson is a true psychological scientist. I aspire to do work that is as scientific as his.
I'm very interested in Behavioral Activation. I have to admit that I knew relatively little about it before reading this article. I really approve of any type of therapy that has its roots in something with such a large amount of empirical support, like behaviorism. I like that the idea of using Behavioral Activation as a stand-alone therapy came from a scientific study rather than from a therapist's random idea. I think random ideas can also be brilliant (so long as they are tested soon after they are generated), but the fact that this form of therapy sprang from research adds to its credibility.
Although I am interested in Behavioral Activation, I am still a little skeptical of it. It seems counterintuitive not to treat cognitions in depressed clients. I feel that targeting both behavioral and cognitive problems is a more thorough way to treat depression, but if it can save clients money and time, I suppose it is okay to do only one of the two, provided that one alone is proven sufficient. I am very curious to know the results of the large study Jacobson was working on when this article went to press, to see if BA truly is more efficacious than CT.
Some of the "unscientific clinicians" I mentioned earlier, the ones who practice without empirical support or plans of eventual empirical support, may complain that they shouldn't have to validate their therapy with empirical evidence because they can tell their therapy works just by interacting with their clients. Well, we learned from our first set of readings that this is not the case most of the time, since clinical judgment typically sucks. So if any of these "unscientific clinicians" are reading this (which I doubt, since I assume only our class reads this and we all seem to be supporters of empirical support), I'd like to tell you to get your act together and act like a scientist. You got a Ph.D. or Psy.D., which shows you're smart, so do some research or serious theorizing and prove that you deserve your title!
Alright, that's all for now! See you all in class. :)
Week 6: CBT is great & Shari may be paranoid...
So if you haven't already figured it out, I'm a fan of CBT. I'm actually a huge fan of CBT. Which is odd, considering that I have never actually READ or SEEN a CBT manual. I know what CBT is, and I know that it is based on SCIENCE, while a bunch of other forms of therapy are not, so I decided I'm a huge fan of it. This week made me happy because I got to learn more about CBT and I got lots of evidence, in the form of meta-analyses, to support the efficacy/effectiveness/efficiency of CBT.
Now that I've made it clear that I am honestly a fan of CBT, I want to take a little bit of time to complain about meta-analyses. I have decided that they are a very sneaky way to report information. I don't think that the authors of meta-analyses are always purposefully being sneaky...but I think it is much easier to fudge a few facts or hide a few errors in a meta-analysis than it is in a report on a single experiment.
Here are a few instances from the Butler (2006) meta-meta-analytic article we read for class in which I believe meta-analytic authors were being "tricky":
-It is mentioned that Parker et al. did not report the criteria they used to select studies for their review paper, and as a result, "it is difficult to interpret their conclusions" (p. 20). In my mind, this means the authors were probably doing something sneaky when selecting which articles to include. It is very possible that they chose only articles that supported their own opinions. This seems especially likely when you take into account that "researcher allegiance accounted for half the difference between CBT and other treatments" in a different meta-analytic study done by Dobson in 1989 (p. 20).
-In a 1998 meta-analysis, Gloaguen et al. found that CBT had significantly better outcomes than medication for depression. However, Gloaguen et al. "included some early studies comparing CT with medications, which had methodological features that favored CT" (p. 23). Was this done on purpose? If so, it is a very sneaky way to prove your point: purposely include studies that have methodological advantages for whatever you favor.
-Ten studies (well, ten meta-analytic articles) were excluded from Butler's (2006) meta-meta-analysis because they were written in a foreign language. What if all ten of these articles pointed to results that were extremely different from the others? What if they would have greatly impacted the meta-meta-analysis? We will never know. We will also never know whether Butler honestly excluded these studies because of a language gap or because there was something in them that he wanted to hide. I doubt this is the case, but it is a possibility. Sure, it would be expensive to translate the articles into English, but I think it would have been a good idea if Butler et al. had the funds.
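To see how much study selection can matter, here is a tiny fixed-effect pooling sketch (all effect sizes invented): with standard inverse-variance weighting, dropping a handful of unfavorable studies moves the pooled effect dramatically.

```python
# Tiny fixed-effect meta-analysis sketch (all effect sizes invented).
# The pooled estimate is the inverse-variance weighted mean of study effects.
def pooled_effect(effects, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Ten hypothetical studies: effect size d and its sampling variance
effects   = [0.9, 0.8, 0.7, 0.8, 0.6, 0.2, 0.1, 0.3, 0.2, 0.1]
variances = [0.04] * 10

print(pooled_effect(effects, variances))          # all ten studies: d ≈ 0.47
print(pooled_effect(effects[:5], variances[:5]))  # only the strong half: d ≈ 0.76
```

The arithmetic is trivial; the sneakiness, if there is any, lives entirely in the inclusion criteria that decide which studies make it into the list.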
Perhaps I am just paranoid and all of these things I find "sneaky" are actually very normal. However, I really do think meta-analyses are a great way to try to prove your point without really having to explain every little aspect of your procedure.
In other news, I still love CBT.