Monday, October 29, 2007

Week 10: Feeling blue?

I just read the Coyne chapter "Thinking Interactionally about Depression: A Radical Reinstatement". Wow. That guy has a lot of nerve. I have a lot to say, but I'll narrow it down to a few of the things that bothered me the most about the chapter (and Coyne as a human).

1) At the very beginning of the article, Coyne complains that the literature still cites a paper he wrote in 1976 in which he conceptualizes an interpersonal theory of depression. Coyne states that "it has been disappointing that it is taking so long for subsequent work to move beyond my [1976] conceptualization" (p. 365). I agree that it is important for other researchers to continue producing new theories and backing them up with new empirical evidence, but I think it is a ridiculous thing to complain about. If Coyne is unhappy about this, HE should be fixing it by doing more and more research until his first ideas become obsolete because he has shown newer, better ones to be supported by new data. I know that he HAS continued to do research, but he obviously hasn't done enough to override his original publication. So, Coyne (if you are still alive), stop complaining and get back to work.

2) I can't believe I just wrote "if you are still alive". I am extremely insensitive.

3) On page 367, Coyne mentions how "it is a lot" to ask participants to report on subtle mood shifts in a short period of time. It has been shown (by Tim Wilson, from our department!) that people generally suck at introspection. Therefore, I agree with Coyne that it is a bad idea to use self-report as the only measure in depression research. I feel that it is extremely important for researchers to find other ways to measure facets of depression, since self-report cannot always be trusted to be accurate. This leads me to my next point....

4) Coyne says one of the biggest problems with depression research is that many researchers use participants' statements about themselves as "evidence of enduring cognitive structures" (p. 368). As I mentioned in the last point, I agree with Coyne that self-report is not a great method to use. However, Coyne does not acknowledge the fact that there is a plethora of other methods available to look at cognitive structures. With current technology, we have the ability to look at memory biases, interpretation biases, attention biases, implicit associations, and much more. All of these things are validated ways to examine cognitive structures and processes. However, many of these measures are used more often by cognitive and social psychologists than clinical psychologists. I strongly believe that it is necessary for psychologists to be more integrative in their research approaches; they need to be more willing to look in other areas to find better methods. I know we talked about this a few weeks ago, but integration is what is necessary to truly advance the science of psychology.

5) Coyne seems like a jerk. I just thought I'd put that out there.

Ok, time for bed!! See you all on Wednesday!

Sunday, October 21, 2007

Week 9: Stimulus Control

I really enjoyed this week's reading on Stimulus Control. The article made it seem like SC is incredibly easy for any therapist to implement (even though the client may exhibit noncompliance). It does not seem to require much (if any) psychological training to get someone to start using it. I mean, the article mentioned that nurses and general practitioners have been successful teaching it to their patients.

I want to learn how to administer SC. Not because it seems easy or because it doesn't seem to require psychological training...but because it seems to really work. I would love to be a therapist with a high success rate. I want to help people, and if this is a way to make people more satisfied with their lives, I want to use it. I think it would be great to treat only insomniacs and only use SC and just sit back and smile as all my clients start getting long, healthy, happy, full nights of sleep.

Ok--pause. I think I have found a problem with the SC article. Never before have I read about a therapy and thought, "WOW, this seems perfect!". I think this proves that the article we read was biased and did not include any research that sheds a negative light on it. Even the negative aspects of using SC (such as the high noncompliance rate) were sugar-coated by emphasizing the strategies that can be used to counteract them. I am 99% positive that there have to be negative aspects of SC, and I think I would have a deeper, fuller understanding of SC if the article included more information about these negative aspects. No treatment is perfect...right? Or is SC the PERFECT treatment for insomniacs?

I have a weird question about SC. To follow the steps correctly, nothing can be done in the bed except sleep & sex. Why is sex allowed? I mean, I know sex typically occurs in the bedroom and it would probably be taboo to make therapists suggest doing it outside of the bedroom, but doesn't sex create high levels of arousal? Wouldn't SC be more effective if sex was relegated to another room (perhaps a spare bedroom?) as well? Have there been any studies done that show sex in the bedroom does not hinder the effects of SC?

Ok, now that I brought up that awkward topic, I think I'll stop before I embarrass myself anymore. See you all in class on Wednesday!

Monday, October 1, 2007

Week 7: Behavioral Activation

I'm having a friend from out of town visit this weekend, so I decided to do next week's reading & blog this week! So lucky you, Mr. or Miss Reader...you get a double dose of Shari! If you want to see what I have to say about CBT, scroll down to the next blog post. (In case you're wondering...I love CBT.)

I enjoyed the readings this week. I found them informative and interesting. I want to applaud the Jacobson (2001) article. I think he is doing psychology the way it SHOULD be done. Jacobson was part of a team that found that a certain type of therapy had a significant effect, so he formulated theories and models based on the therapy, and after this, he did a large clinical trial to test the efficacy of the therapy (Jacobson et al., 1996). This is very different from what many clinicians (I'll call them the "unscientific clinicians") do: they find a therapy that they think works...so they do that kind of therapy, without any type of rigorous testing or theorizing. Based on the article we read, I think Jacobson is a true psychological scientist. I aspire to do work that is as scientific as his.

I'm very interested in Behavioral Activation. I have to admit that I knew relatively little about it before reading this article. I really approve of any type of therapy that has its roots in something with such a large amount of empirical support, like behaviorism. I like that the idea of using Behavioral Activation as a stand-alone therapy came from a scientific study, rather than a therapist's random idea. I think random ideas can also be brilliant (so long as they are tested soon after they are generated), but I think the fact that this form of therapy sprang from research adds to its credibility.

Although I am interested in Behavioral Activation, I am still a little skeptical of it. I think it is counterintuitive to not treat cognitions in depressed clients. I feel like targeting both behavioral and cognitive problems is a more thorough way to treat depression, but if it can save money and time for the clients, I suppose it is ok to only do one of the two if it is proven sufficient. I am very curious to know the results of the large study Jacobson was working on when this article went to press, to see if BA truly is more efficacious than CT.

Some of the "unscientific clinicians" I mentioned earlier, the ones who practice without empirical support or plans of eventual empirical support, may complain that they shouldn't have to validate their therapy with empirical evidence because they can tell their therapy works just by interacting with their clients. Well, we learned from our first set of readings that this is not the case most of the time, since clinical judgment typically sucks. So if any of these "unscientific clinicians" are reading this (which I doubt, since I assume only our class reads this and we all seem to be supporters of empirical support), I'd like to tell you to get your act together and act like a scientist. You got a Ph.D. or Psy.D., which shows you're smart, so do some research or serious theorizing and prove that you deserve your title!

Alright, that's all for now! See you all in class. :)

Week 6: CBT is great & Shari may be paranoid...

So if you haven't already figured it out, I'm a fan of CBT. I'm actually a huge fan of CBT. Which is odd, considering the fact that I have never actually READ or SEEN a CBT manual. I know what CBT is, and I know that it is based on SCIENCE, while a bunch of other forms of therapy are not, so I decided I'm a huge fan of it. This week made me happy because I got to learn more about CBT and I got lots of evidence, in the form of meta-analyses, to support the efficacy/effectiveness/efficiency of CBT.

Now that I've made it clear that I am honestly a fan of CBT, I want to take a little bit of time to complain about meta-analyses. I have decided that they are a very sneaky way to report information. I don't think that the authors of meta-analyses are always purposefully being sneaky...but I think it is much easier to fudge a few facts or hide a few errors in a meta-analysis than it is in a report on a single experiment.

Here are a few instances in which I believe meta-analytic authors are being "tricky" in the Butler (2006) meta-meta-analytic article we read for class:

-It is mentioned that Parker et al. did not report the criteria they used to select studies for their review paper, and as a result, "it is difficult to interpret their conclusions" (p. 20). In my mind, this means the authors were probably doing something sneaky when selecting which articles to include. It is very possible that they only chose articles that supported their own opinions. This seems especially likely when you take into account that "researcher allegiance accounted for half the difference between CBT and other treatments" in a different meta-analytic study done by Dobson in 1989 (p. 20).

-In a 1998 meta-analysis, Gloaguen et al. found that CBT had significantly better outcomes than medication for depression. However, Gloaguen et al. "included some early studies comparing CT with medications, which had methodological features that favored CT" (p. 23). Was this done on purpose? If so, it is a very sneaky way to prove your point: purposely include studies that have methodological advantages for whatever you favor.

-Ten studies (well, ten meta-analytic articles) were excluded from Butler's (2006) meta-meta-analysis because they were written in a foreign language. What if all ten of these articles pointed to results that were extremely different from the others? What if they would have greatly impacted the meta-meta-analysis? We will never know. We will also never know if Butler honestly did not include these studies because of a language gap or if he did not include them because there was something in these articles that he wanted to hide. I doubt this is the case, but it is a possibility. Sure, it would be expensive to translate the articles into English, but I think it would have been a good idea if Butler et al. had the funds.

Perhaps I am just paranoid and all of these things I find "sneaky" are actually very normal. However, I really do think meta-analyses are a great way to try to prove your point without really having to explain every little aspect of your procedure.
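To see why I find study selection so sneaky, here is a minimal sketch (with made-up numbers, not anything from Butler 2006 or the other papers) of how quietly dropping unfavorable studies shifts a pooled effect size. It uses simple fixed-effect, inverse-variance weighting, one common way meta-analyses combine studies:

```python
# Sketch of fixed-effect, inverse-variance-weighted pooling.
# All effect sizes and standard errors below are hypothetical.

def pooled_effect(studies):
    """studies: list of (effect_size, standard_error) tuples.
    Each study is weighted by 1/SE^2, so precise studies count more."""
    weights = [1.0 / se ** 2 for _, se in studies]
    weighted_sum = sum(w * d for (d, _), w in zip(studies, weights))
    return weighted_sum / sum(weights)

# Four hypothetical studies: two favorable, two not.
all_studies = [(0.9, 0.2), (0.7, 0.25), (0.1, 0.2), (0.0, 0.3)]

# A "selective" reviewer quietly drops the two unfavorable studies.
favorable_only = all_studies[:2]

print(round(pooled_effect(all_studies), 2))     # 0.47 -- modest effect
print(round(pooled_effect(favorable_only), 2))  # 0.82 -- looks much stronger
```

Same method, same arithmetic, very different conclusion; the only thing that changed was which studies made it into the analysis.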

In other news, I still love CBT.