It is common practice for psychiatrists to switch depressed patients between different antidepressants if the current drug does not produce a symptomatic response. Despite clinical wisdom supporting this, little empirical, controlled evidence exists to guide “switching” protocols (e.g. if a patient with Z characteristics is on drug X, is it usually better to switch to drug A, B, or C? Will switching help at all?) in the psychopharmacological treatment of depression. The NIMH-funded STAR*D (Sequenced Treatment Alternatives to Relieve Depression) study
aimed to address these questions of treatment direction in a very large (n>4000), “real-world” sample using a multi-phase treatment plan
with different drugs (and cognitive therapy) at every step to maximize chances of eventual remission. Overall, the NIMH reported that about 67% of patients eventually achieved remission
, with few differences in effectiveness between different types of treatment at each step
. However, researchers and commentators have raised concerns
regarding inconsistent reporting of outcomes, after-the-fact changes in study design and analysis
, and other issues that may have inflated, partially invalidated, or misrepresented widely reported treatment outcomes. These irregularities may also have implications for the secondary moderator analyses (i.e. does trait A predict whether switching to X or Y is better?) that were a major rationale for the study.
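To make the much-cited ~67% figure concrete: it is a *theoretical* cumulative rate, compounded from per-step remission rates under the assumption that patients who left the study would have remitted at the same rate as those who stayed — exactly the assumption critics dispute. A minimal sketch of the arithmetic (the per-step rates below are approximations of the reported figures, included for illustration):

```python
# Sketch of how STAR*D's ~67% "cumulative remission" figure is derived.
# Per-step rates approximate those reported for steps 1-4; the calculation
# assumes dropouts would have remitted at the same rate as completers.
step_remission_rates = [0.368, 0.306, 0.137, 0.130]  # steps 1-4 (approximate)

not_yet_remitted = 1.0
for rate in step_remission_rates:
    not_yet_remitted *= (1.0 - rate)  # fraction still depressed after this step

cumulative_remission = 1.0 - not_yet_remitted
print(f"theoretical cumulative remission: {cumulative_remission:.1%}")  # ~67%
```

If dropouts in fact remit at lower rates than completers, this compounding overstates the real-world cumulative figure at every step.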
Criticisms of STAR*D include (but are not limited to):
* Switching primary outcome at the study's conclusion from the widely used (for better or for worse
) Hamilton Rating Scale for Depression
to the Quick Inventory of Depressive Symptomatology
, ostensibly because high patient dropout prevented acquisition of HRSD assessments (though robust statistical methods for imputing/estimating missing data in this sort of study exist). While the HRSD was administered by blinded assessors independent of the immediate treatment situation, the QIDS-SR was collected as part of the treatment-guiding process itself, leading to conjecture that patients felt more pressure to report improvement on the QIDS-SR than on the HRSD. (There is some odd confusion over whether the QIDS-SR was administered in person, as the study protocol would indicate, or over a computerized telephone check-in system.) Notably, the QIDS-SR reported higher rates of remission than the HRSD. However, some articles on individual steps of treatment did report HRSD outcomes.
* A large number of patients (607) entered treatment with milder depression than the minimum set by the study protocol (i.e. with HRSD scores below 14, where remission was defined as an HRSD score of 7 or less), but were treated and included in the summary findings. Previous research suggests that milder depression may respond about as well to placebo as to antidepressant treatment.
* Lack of a placebo control, diminishing the ability to conclude to what degree apparent remission at each step was spontaneous versus treatment-driven (i.e. if a patient remits at step 2, is it because of the new drug, or because depression can remit spontaneously, especially in more moderate cases?).
* Counting success-trending dropouts in many cases as treatment successes, despite a priori guidelines to the contrary for some types of dropout, with the effect of raising reported success rates. (Though Pigott and other critics often take the reverse interpretation wholesale, equating dropout with lack of clinical efficacy, which may be overly biased in the other direction.) Dropouts over the course of follow-up, though always expected in studies like this, were astonishingly frequent in STAR*D (perhaps as high as around 90%).
* Confusion over different STAR*D papers publishing different rates of suicidality for the Celexa step of the study
(each using slightly different sampling from the STAR*D cohort), some an order of magnitude higher than others. The higher rates were reported in a paper examining a particular gene variant’s ability to predict suicidality on the drug, which was associated with a patent for screening this gene variant.
* Lack of publication on several pre-specified secondary outcome measures (e.g. general assessment of functioning, work productivity) several years after the study finished, fueling suspicion that results on these measures were unfavorable.
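The dropout-counting dispute above is really a sensitivity-analysis question: how much does the remission rate move depending on what you assume about patients who left? A toy illustration with entirely made-up numbers (hypothetical, not from STAR*D):

```python
# Toy sensitivity check for the dropout-counting issue. Numbers are
# hypothetical, NOT from STAR*D: 100 patients enter a step; 40 remit with
# confirmed assessments, 30 complete without remitting, 30 drop out.
remitted, completed_not_remitted, dropped_out = 40, 30, 30
n = remitted + completed_not_remitted + dropped_out

# Pessimistic reading (Pigott-style): every dropout is a treatment failure.
rate_dropouts_as_failures = remitted / n

# Optimistic reading: dropouts would have remitted at the completers' rate.
completer_rate = remitted / (remitted + completed_not_remitted)
rate_dropouts_like_completers = (remitted + dropped_out * completer_rate) / n

print(f"dropouts as failures:     {rate_dropouts_as_failures:.0%}")      # 40%
print(f"dropouts like completers: {rate_dropouts_like_completers:.0%}")  # 57%
```

With 30% dropout, the same raw data supports a headline remission rate anywhere in a ~17-point range, which is why the handling of dropouts matters so much to both STAR*D's defenders and its critics.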
At least one STAR*D investigator, Dr. Maurizio Fava, may agree with some of these criticisms (see remarks at end)
. Dr. Fava, while not at all dismissive of the helpful role of psychopharmacology in psychiatry (rather the contrary), has in the past raised concerns regarding lack of research on the efficacy
and side-effects of long-term antidepressant use
[paywalled, partially summarized here].
Medical journalist and psychotropic drug critic Robert Whitaker (previously
) has also blogged his criticisms regarding STAR*D
, largely based on the findings of Dr. Pigott’s group above.
A large number of papers have been published examining data from the STAR*D study
(bibliography incompletely updated), many exploring the differential contribution of genetic profiles to antidepressant response
, to varying degrees of success. And it would be incorrect to say STAR*D got everything wrong: for example, one unique and praised feature of the STAR*D trial over other clinical trials, as noted by Neuroskeptic
, is the inclusion of patients with co-morbidities and depressive features that would exclude them from more traditional clinical trials. These more complexly symptomatic patients may be more representative of real-world depression than the patients typically enrolled in antidepressant trials. Analyses of the STAR*D data suggest that “clinical trial”-eligible patients (who composed a minority of the STAR*D cohort) responded significantly better to treatment than other patients
. The investigators of STAR*D themselves conjectured from this result that standard clinical trials may overestimate the effects of antidepressants.