Mad in America
Sandra Steingard, MD October 9, 2018
I recently received an email from Psychiatric Times highlighting current articles. Psychiatric Times is a newspaper that is distributed for free to psychiatrists in the US. To put this in context, the paper appears to be heavily subsidized by pharmaceutical company advertising, and its former editor, Ronald Pies, is a psychiatrist who has been critical of views expressed on Mad In America.
It caught my eye when the first article mentioned had the title, “Antipsychotic Discontinuation: When is it OK?” I clicked on the link to find a slide show authored by Brian Miller, MD, PhD, MPH, which was titled, “Antipsychotics – To Respond or Not to Respond?”
This was intriguing but confusing. Dr. Miller’s slides reviewed a paper just published in Schizophrenia Bulletin titled, “How Many Patients With Schizophrenia Do Not Respond to Antipsychotic Drugs in the Short Term? An Analysis Based on Individual Patient Data From Randomized Controlled Trials.” As pointed out in the slide show, the paper reported on a meta-analysis of 16 randomized controlled studies of antipsychotic drugs over the first 4-6 weeks of treatment. The authors found that a significant number of people do not respond or have relatively poor responses and the majority do not experience a remission of psychotic symptoms. While important, this article addressed short-term rather than long-term care. It is, nevertheless, informative. [Editor’s note: click here to see the MIA research news report on this paper.]
The senior author of the paper, Stefan Leucht, is a well-known and highly regarded expert in meta-analysis. As noted in the disclosures, he is also well-connected to many pharmaceutical companies. Meta-analysis is a statistical technique that allows for investigation of multiple studies. This provides a broader view of the available data in the field.
Since this was a study of response to drugs, it is important to understand how researchers define that term. When drug studies are conducted, subjects are assessed via rating scales which address the presence and severity of symptoms. A person is asked about a variety of experiences, such as hearing voices or feeling sad, and their responses are scored according to a predetermined rubric. Scores will thus fall on a continuum. The Positive and Negative Syndrome Scale (PANSS) is commonly used in antipsychotic drug trials. It includes 30 items, each scored on a scale of one (absent) to seven (extreme). Total scores can therefore fall anywhere from 30 to 210.
There are many ways that researchers can analyze the myriad bits of data that are collected in these studies. Researchers are required to determine in advance how they will analyze their data, including what change would allow them to classify a person as a “responder.” Researchers can also define what would be considered a “remission.” To be counted as a responder, a person needs to have a certain percentage drop in the rating scale score from beginning to end of study. To be considered in remission, a person’s final score needs to fall below a set point on the scale. Unless one reads a study carefully, these distinctions can be missed and the notion of “response,” often cited in promotional material, can be misleading. Many studies consider a 20% reduction in score as a response. For some people, this can be a clinically insignificant change in symptoms. When large numbers of people are included in a study, it is easier to detect small differences among groups and these differences can reach statistical significance. A drug may be promoted as effective when what has been found is that the group of people who took it had a clinically minor reduction in symptoms as compared to the group on placebo.
The authors of this study, recognizing some of these challenges, set out to look more carefully at the range of response in the studies under review. They did this by reporting not only the common 20% reduction often used as a marker for “response,” but also 25%, 50%, and 75% reductions in the ratings.
The authors also analyzed the percentage of subjects who reached remission. In this case, remission was defined as not scoring above the “mildly present” rating on 8 key items of the PANSS.
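To make these definitions concrete, here is an illustrative sketch (not the paper's exact analysis code) of how percent reduction, “response,” and “remission” might be computed from PANSS scores. The threshold values and the convention of subtracting the scale minimum are stated in the comments; the remission rule follows the description above, reading “mildly present” as a rating of 3 on the 1–7 scale.

```python
PANSS_MIN = 30  # 30 items, each scored 1 (absent) at minimum

def percent_reduction(baseline, endpoint, subtract_min=True):
    """Percent drop in PANSS total score from baseline to endpoint.

    Some analyses subtract the scale minimum (30) before computing the
    percentage, since a total of 30 already means every symptom is absent.
    """
    floor = PANSS_MIN if subtract_min else 0
    return 100.0 * (baseline - endpoint) / (baseline - floor)

def is_responder(baseline, endpoint, threshold=20.0):
    # Many trials count a 20% reduction as a "response"
    return percent_reduction(baseline, endpoint) >= threshold

def in_remission(key_item_scores):
    # Remission: none of the 8 key PANSS items rated above "mildly
    # present" (assumed here to be a score of 3 on the 1-7 scale)
    return all(score <= 3 for score in key_item_scores)

# A drop from 100 to 85 looks modest, yet already counts as a "response"
print(round(percent_reduction(100, 85), 1))   # 21.4
print(is_responder(100, 85))                  # True
print(in_remission([2, 3, 3, 2, 1, 3, 2, 2])) # True
```

Note how a 15-point change on a 180-point effective range clears the 20% responder bar; this is the sense in which a “response” can be a clinically minor improvement.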
The results:
Those who had no change or worsened – 19.8%
Less than 25% improvement – 43%
Less than 50% improvement – 66.5%
Less than 75% improvement – 87%
Non-remission – 66.9%
To put this another way, only 33.1% of those in the studies were in remission, and only 33.5% had more than a 50% reduction in their rating scale scores.
The authors offer some insights into what I consider the paradoxes of common psychiatric practice as well as the problems with the way many research studies are conducted.
They begin their paper, “A considerable number of patients with schizophrenia do not respond to antipsychotic drugs.” They go on to cite what they describe as “vague statements” that “can be found in other reports and textbooks such as ‘most controlled trials continue to find a subgroup of 10-20% of patients who derive little benefit from typical neuroleptic drug therapy.’” They offer similar quotes from a variety of texts and conclude that “all of these statements are not based on firm evidence.”
In their discussion, the authors provide insight into the current state of pharmaceutical studies as well as offering their thoughts on why the response rates are so low:
“Pharmaceutical companies are trying to conduct large trials to assure statistical significance which leads to more recruitment pressure; the ‘patient clock’ is running down, thus patients are recruited quickly by professional centers; most of them are improved and stabilized on antipsychotics and enter an RCT after a short wash-out phase of a few days. As most of the antipsychotic effect occurs early on, further response may not be observed which could, at least partly, explain the relatively low number of responders. The increased ‘relapse’ rates on placebo also point to the direction that previous antipsychotics were beneficial.”
I found this rationale to be somewhat tortured. A major issue not addressed is that if people who are stabilized on neuroleptics are withdrawn abruptly and then restarted on drug or placebo, this would favor the drug, since those given placebo would be experiencing withdrawal effects. But the authors bring up important issues about who gets recruited into studies these days and the extent to which they mirror the experiences of most people who are offered these drugs in clinical practice. Carl Elliott has written about this problem, and it is critical to understand the context in which many drug studies are conducted.
I still work as a psychiatrist and I know people who appear to benefit from these drugs. However, I want to use them in a way that is most helpful and minimizes harm. I also want to share the available data since this is what constitutes informed consent. What seems equally important is to provide this information to the public, including policy makers, since common misconceptions have had great influence on the structure of our system of care. Deinstitutionalization was driven by many forces but it is sustained by the notion that most people have robust responses to these drugs. We have a system of care and a societal expectation that these drugs are highly effective. When people are struggling in the community, the common response is that we need to adjust “their meds,” even though this is only likely to be helpful in a minority of cases.
Many of my colleagues tend to focus on the need to find better drugs or design better studies as a way to address this problem. We tend to overlook so-called “alternative approaches,” such as the Hearing Voices Network. Oddly, given the context of this blog, these approaches are often discounted because they lack an evidence base. Sadly, adequate money to develop an evidence base is not offered because, well, they lack an evidence base. I suspect there is another bias at play.
In another recent email, this time from Medscape, there was a link to a video, “How to ‘Brand’ Psychiatry Today.” In the video, Dr. Stephen Strakowski, chair of the Department of Psychiatry at Dell Medical School at the University of Texas in Austin, proposes this definition of the specialty: “Psychiatry is a medical specialty that studies and treats disturbances in brain function that predominantly affect behavior — behavioral brain disorders.”
I appreciated Dr. Strakowski’s attempt to define our profession and I think he captures the way most psychiatrists conceptualize the field. This is informative. He touts the medical model but also acknowledges its risks: “We need to be careful to not confuse the medical model with using only medication for treatment. Rather, the medical model uses the medical approach to define the treatment evidence base and decide on the treatment.” Dr. Strakowski also suggests some limits to what psychiatrists should be doing and urges psychiatrists to let others work at the top of their expertise, “to allow those who are the best therapists to be the therapy providers, for example.” My major disagreement with him is that he does not challenge some of the negative consequences of applying the medical frame so broadly; even when he suggests that we consider social factors and invite others to offer psychotherapy, it is all done in the context of a medical conceptualization of the problems at hand. But that is a subject for another time.
For now, I would call upon physicians who claim to value the medical frame “to define treatment evidence and decide on treatment” to do just that. The evidence base suggests that it is time for us to reappraise the effectiveness of these drugs and shift our practice patterns accordingly.