Week 22 – Effect of Early vs. Deferred Therapy for HIV (NA-ACCORD)

“Effect of Early versus Deferred Antiretroviral Therapy for HIV on Survival”

N Engl J Med. 2009 Apr 30;360(18):1815-26 [free full text]

The optimal timing of initiation of antiretroviral therapy (ART) in asymptomatic patients with HIV has been a subject of investigation since the advent of antiretrovirals. Guidelines in 1996 recommended starting ART for all HIV-infected patients with CD4 count < 500, but over time provider concerns regarding resistance, medication nonadherence, and adverse effects of medications led to more restrictive prescribing. In the mid-2000s, guidelines recommended ART initiation in asymptomatic HIV patients with CD4 < 350. However, contemporary subgroup analysis of RCT data and other limited observational data suggested that deferring initiation of ART increased rates of progression to AIDS and mortality. Thus the NA-ACCORD authors sought to retrospectively analyze their large dataset to investigate the mortality effect of early vs. deferred ART initiation.


Population: treatment-naïve patients with HIV and no hx of AIDS-defining illness, treated 1996-2005

Two subpopulations analyzed retrospectively:
1. CD4 count 351-500
2. CD4 count 500+


Intervention: none

Outcome: within each CD4 subpopulation, mortality among patients who initiated ART within 6 months of their first CD4 count in the range of interest vs. mortality among patients for whom ART was deferred until the CD4 count fell below that range

8362 eligible patients had a CD4 count of 351-500, and of these, 2084 (25%) initiated ART within 6 months, whereas 6278 (75%) patients deferred therapy until CD4 < 351.

9155 eligible patients had a CD4 count of 500+, and of these, 2220 (24%) initiated ART within 6 months, whereas 6935 (76%) patients deferred therapy until CD4 < 500.

In both CD4 subpopulations, patients in the early-ART group were older, more likely to be white, more likely to be male, less likely to have HCV, and less likely to have a history of injection drug use. Cause-of-death information was obtained in only 16% of all deceased patients. The majority of these deaths in both the early- and deferred-therapy groups were from non-AIDS-defining conditions.

In the CD4 351-500 subpopulation, there were 137 deaths in the early-therapy group vs. 238 deaths in the deferred-therapy group. Relative risk of death for deferred therapy was 1.69 (95% CI 1.26-2.26, p < 0.001) per Cox regression stratified by year. After adjustment for history of injection drug use, RR = 1.28 (95% CI 0.85-1.93, p = 0.23). In an unadjusted analysis, HCV infection was a risk factor for mortality (RR 1.85, p = 0.03). After exclusion of patients with HCV infection, RR for deferred therapy = 1.52 (95% CI 1.01-2.28, p = 0.04).

In the CD4 500+ subpopulation, there were 113 deaths in the early-therapy group vs. 198 in the deferred-therapy group. Relative risk of death for deferred therapy was 1.94 (95% CI 1.37-2.79, p < 0.001). After adjustment for history of injection drug use, RR = 1.73 (95% CI 1.08-2.78, p = 0.02). Again, HCV infection was a risk factor for mortality (RR = 2.03, p < 0.001). After exclusion of patients with HCV infection, RR for deferred therapy = 1.90 (95% CI 1.14-3.18, p = 0.01).

In a large retrospective study, deferred initiation of antiretrovirals in asymptomatic HIV infection was associated with higher mortality.

This was the first retrospective study of early initiation of ART in HIV that was large enough to power mortality as an endpoint while controlling for covariates. However, it is limited significantly by its observational, non-randomized design, which leaves it vulnerable to substantial unmeasured confounding. A notable example is the absence of socioeconomic variables (e.g. insurance status). Perhaps early-initiation patients were better off financially, and their economic advantage, rather than the early initiation of ART, drove the mortality benefit. This study also made no mention of the tolerability of ART or adverse reactions to it.

In the years that followed this study, NIH and WHO consensus guidelines shifted the trend toward earlier treatment of HIV. In 2015, the INSIGHT START trial (the first large RCT of immediate vs. deferred ART) showed a definitive mortality benefit of immediate initiation of ART in CD4 500+ patients. Since that time, the standard of care has been to treat “essentially all” HIV-infected patients with ART [UpToDate].

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. INSIGHT START (2015), Pubmed, NEJM PDF
4. UpToDate, “When to initiate antiretroviral therapy in HIV-infected patients”

Summary by Duncan F. Moore, MD

Week 21 – HACA

“Mild Therapeutic Hypothermia to Improve the Neurologic Outcome After Cardiac Arrest”

by the Hypothermia After Cardiac Arrest Study Group

N Engl J Med. 2002 Feb 21;346(8):549-56. [free full text]

Neurologic injury after cardiac arrest is a significant source of morbidity and mortality. It is hypothesized that brain reperfusion injury (via the generation of free radicals and other inflammatory mediators) following ischemic time is the primary pathophysiologic basis. Animal models and limited human studies have demonstrated that patients treated with mild hypothermia following cardiac arrest have improved neurologic outcome. The 2002 HACA study sought to prospectively evaluate the utility of therapeutic hypothermia in reducing neurologic sequelae and mortality post-arrest.

Population: European patients who achieved return of spontaneous circulation after presenting to the ED in cardiac arrest

inclusion criteria: witnessed arrest, ventricular fibrillation or non-perfusing ventricular tachycardia as initial rhythm, estimated interval 5 to 15 min from collapse to first resuscitation attempt, no more than 60 min from collapse to ROSC, age 18-75

pertinent exclusion: pt already < 30ºC on admission, comatose state prior to arrest d/t CNS drugs, response to commands following ROSC

Intervention: Cooling to target temperature 32-34ºC with maintenance for 24 hrs followed by passive rewarming. Pts received pancuronium for neuromuscular blockade to prevent shivering.

Comparison: Standard intensive care


Primary: a “favorable neurologic outcome” at 6 months defined as Pittsburgh cerebral-performance scale category 1 (good recovery) or 2 (moderate disability). (Of note, the examiner was blinded to treatment group allocation.)

Secondary:
– all-cause mortality at 6 months
– specific complications within the first 7 days: bleeding “of any severity,” pneumonia, sepsis, pancreatitis, renal failure, pulmonary edema, seizures, arrhythmias, and pressure sores

3551 consecutive patients were assessed for enrollment, and ultimately 275 met inclusion criteria and were randomized. The normothermia group had more baseline DM and CAD and was more likely to have received BLS from a bystander prior to ED arrival.

Regarding neurologic outcome at 6 months, 75 of 136 (55%) of the hypothermia group had a favorable neurologic outcome, versus 54/137 (39%) in the normothermia group (RR 1.40, 95% CI 1.08-1.81, p = 0.009; NNT = 6). After adjusting for all baseline characteristics, the RR increased slightly to 1.47 (95% CI 1.09-1.82).

Regarding death at 6 months, 41% of the hypothermia group had died, versus 55% of the normothermia group (RR 0.74, 95% CI 0.58-0.95, p = 0.02; NNT = 7). After adjusting for all baseline characteristics, RR = 0.62 (95% CI 0.36-0.95). There was no difference among the two groups in the rate of any complication or in the total number of complications during the first 7 days.
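The effect sizes above can be reproduced directly from the reported counts. A minimal sketch in Python using the favorable-neurologic-outcome data (75/136 hypothermia vs. 54/137 normothermia); the helper function is illustrative, not from the paper or any statistics library:

```python
# Illustrative sketch: relative risk (RR), absolute risk difference (ARR),
# and number needed to treat (NNT) from raw event counts.

def risk_ratio_and_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Return (relative risk, absolute risk difference, NNT)."""
    risk_tx = events_tx / n_tx          # event rate, treatment arm
    risk_ctrl = events_ctrl / n_ctrl    # event rate, control arm
    rr = risk_tx / risk_ctrl
    arr = risk_tx - risk_ctrl           # absolute risk difference
    nnt = 1 / abs(arr)                  # number needed to treat
    return rr, arr, nnt

# HACA favorable neurologic outcome at 6 months
rr, arr, nnt = risk_ratio_and_nnt(75, 136, 54, 137)
print(f"RR = {rr:.2f}, ARR = {arr:.1%}, NNT = {nnt:.0f}")
# → RR = 1.40, ARR = 15.7%, NNT = 6
```

The same arithmetic recovers the NNT = 7 quoted for the mortality endpoint when the death rates (41% vs. 55%) are substituted.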

In ED patients with Vfib or pulseless VT arrest who did not have meaningful response to commands after ROSC, immediate therapeutic hypothermia reduced the rate of neurologic sequelae and mortality at 6 months.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “If after ROSC your patient remains unresponsive and does not have refractory hypoxemia/hypotension/coagulopathy, you should initiate therapeutic hypothermia even if the arrest was PEA. The benefit seen was substantial and any proposed biologic mechanism would seemingly apply to all causes of cardiac arrest. The investigators used pancuronium to prevent shivering; [at MGUH] there is a ‘shivering’ protocol in place and if refractory, paralytics can be used.”

This trial, as well as a concurrent publication by Bernard et al., ushered in a new paradigm of therapeutic hypothermia or “targeted temperature management” (TTM) following cardiac arrest. Numerous trials in related populations and with modified interventions (e.g. target temperature 36ºC) were performed over the following decade, and ultimately led to the current standard of practice.

Per UpToDate, the collective trial data suggest that “active control of the post-cardiac arrest patient’s core temperature, with a target between 32 and 36ºC, followed by active avoidance of fever, is the optimal strategy to promote patient survival.” TTM should be undertaken in all patients who do not follow commands or have purposeful movements following ROSC. Expert opinion at UpToDate recommends maintaining temperature control for at least 48 hours. There is no strict contraindication to TTM.

Further Reading/References:
1. 2 Minute Medicine
2. Wiki Journal Club
3. Georgetown Critical Care Top 40, page 23 (Jan. 2016)
4. PulmCCM.org, “Hypothermia did not help after out-of-hospital cardiac arrest, in largest study yet”
5. Cochrane Review, “Hypothermia for neuroprotection in adults after cardiopulmonary resuscitation”
6. The NNT, “Mild Therapeutic Hypothermia for Neuroprotection Following CPR”
7. UpToDate, “Post-cardiac arrest management in adults”

Summary by Duncan F. Moore, MD

Week 19 – RAVE

“Rituximab versus Cyclophosphamide for ANCA-Associated Vasculitis”

by the Rituximab in ANCA-Associated Vasculitis-Immune Tolerance Network (RAVE-ITN) Research Group

N Engl J Med. 2010 Jul 15;363(3):221-32. [free full text]

ANCA-associated vasculitides, such as granulomatosis with polyangiitis (GPA, formerly Wegener’s granulomatosis) and microscopic polyangiitis (MPA) are often rapidly progressive and highly morbid. Mortality in untreated generalized GPA can be as high as 90% at 2 years (PMID 1739240). Since the early 1980s, cyclophosphamide (CYC) with corticosteroids has been the best treatment option for induction of disease remission in GPA and MPA. Unfortunately, the immediate and delayed adverse effect profile of CYC can be burdensome. The role of B lymphocytes in the pathogenesis of these diseases has been increasingly appreciated over the past 20 years, and this association inspired uncontrolled treatment studies with the anti-CD20 agent rituximab that demonstrated promising preliminary results. Thus the RAVE trial was performed to compare rituximab to cyclophosphamide, the standard of care.

Population: ANCA-positive patients with “severe” GPA or MPA and a Birmingham Vasculitis Activity Score for Wegener’s Granulomatosis (BVAS/WG) of 3+.

notable exclusion: patients intubated due to alveolar hemorrhage, patients with Cr > 4.0

Intervention: rituximab 375 mg/m2 IV weekly x4 + daily placebo-CYC + pulse-dose corticosteroids with oral maintenance and then taper

Comparison: placebo-rituximab infusion weekly x4 + daily CYC + pulse-dose corticosteroids with oral maintenance and then taper

primary end point = clinical remission, defined as a BVAS/WG of 0 and successful completion of prednisone taper

primary outcome = noninferiority of rituximab relative to CYC in reaching 1º end point

authors specified non-inferiority margin as a -20 percentage point difference in remission rate

subgroup analyses (pre-specified) = type of ANCA-associated vasculitis, type of ANCA, “newly-diagnosed disease,” relapsing disease, alveolar hemorrhage, and severe renal disease

secondary outcomes: rate of disease flares, BVAS/WG of 0 during treatment with prednisone at a dose of less than 10mg/day, cumulative glucocorticoid dose, rates of adverse events, SF-36 scores

197 patients were randomized, and baseline characteristics were similar between the two groups (e.g. GPA vs. MPA, relapsed disease, etc.). 75% of patients had GPA. 64% of the patients in the rituximab group reached remission, while 53% of the control patients did. This 11-percentage-point difference between the treatment groups was consistent with non-inferiority (p < 0.001). However, although more rituximab patients reached the primary endpoint, the difference between the two groups was not statistically significant, and thus superiority of rituximab could not be established (95% CI -3.2 to 24.3 percentage points, p = 0.09). Subgroup analysis was notable only for superiority of rituximab in relapsed patients (67% remission rate vs. 42% in controls, p = 0.01). Rates of adverse events and treatment discontinuation were similar between the two groups.
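The noninferiority logic can be made explicit with a short sketch. This applies the standard CI-versus-margin decision rule to the trial's reported figures (difference in remission rate, rituximab minus CYC, in percentage points); the function name is illustrative:

```python
# Illustrative sketch of a noninferiority read-out: the new treatment is
# noninferior if the lower bound of the 95% CI for the rate difference
# stays above the prespecified margin; it is additionally superior only
# if the CI also excludes zero.

NONINFERIORITY_MARGIN = -20.0   # percentage points, prespecified in RAVE

def interpret(ci_lower, ci_upper, margin=NONINFERIORITY_MARGIN):
    noninferior = ci_lower > margin
    superior = noninferior and ci_lower > 0
    return noninferior, superior

# RAVE reported a 95% CI of -3.2 to 24.3 percentage points
noninferior, superior = interpret(-3.2, 24.3)
print(noninferior, superior)   # noninferior but not superior
```

The lower CI bound (-3.2) clears the -20-point margin, establishing noninferiority, but because the CI crosses zero, superiority cannot be claimed.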

Rituximab + steroids is as effective as cyclophosphamide + steroids in inducing remission in severe GPA and MPA.

This study initiated a major paradigm shift in the standard of care of ANCA-associated vasculitis. The following year, the FDA approved rituximab + steroids as the first-ever treatment regimen approved for GPA and MPA. It spurred numerous follow-up trials, and to this day expert opinion is split over whether CYC or rituximab should be the initial immunosuppressive therapy in GPA/MPA with “organ-threatening or life-threatening disease” (UpToDate).

Further Reading/References:
1. “Wegener granulomatosis: an analysis of 158 patients” (1992)
2. RAVE at ClinicalTrials.gov
3. “Challenges in the Design and Interpretation of Noninferiority Trials,” NEJM (2017)
4. “Clinical Trials – Non-inferiority Trials”
5. UpToDate, “Initial Immunosuppressive Therapy in Granulomatosis with Polyangiitis and Microscopic Polyangiitis”
6. Wiki Journal Club
7. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Week 18 – VERT

“Effects of Risedronate Treatment on Vertebral and Nonvertebral Fractures in Women With Postmenopausal Osteoporosis”

by the Vertebral Efficacy with Risedronate Therapy (VERT) Study Group

JAMA. 1999 Oct 13;282(14):1344-52. [free full text]

Bisphosphonates are a highly effective and relatively safe class of medications for the prevention of fractures in patients with osteoporosis. The VERT trial published in 1999 was a landmark trial that demonstrated this protective effect with the daily oral bisphosphonate risedronate.

Population: post-menopausal women with either 2 or more vertebral fractures per radiography or 1 vertebral fracture with decreased lumbar spine bone mineral density

Intervention: risedronate 2.5 mg PO daily or risedronate 5 mg PO daily

Comparison: placebo PO daily

Outcomes:
1. prevalence of new vertebral fracture at 3 years follow-up, per annual imaging
2. prevalence of new non-vertebral fracture at 3 years follow-up, per annual imaging
3. change in bone mineral density, per DEXA q6 months

2458 patients were randomized. During the course of the study, “data from other trials indicated that the 2.5mg risedronate dose was less effective than the 5mg dose,” and thus the authors discontinued further data collection on the 2.5mg treatment arm at 1 year into the study. All treatment groups had similar baseline characteristics. 55% of the placebo group and 60% of the 5mg risedronate group completed 3 years of treatment. The prevalence of new vertebral fracture within 3 years was 11.3% in the risedronate group and 16.3% in the placebo group (RR 0.59, 95% CI 0.43-0.82, p = 0.003; NNT = 20). The prevalence of new non-vertebral fractures at 3 years was 5.2% in the treatment arm and 8.4% in the placebo arm (RR 0.6, 95% CI 0.39-0.94, p = 0.02; NNT = 31). Regarding bone mineral density (BMD), see Figure 4 for a visual depiction of the changes in BMD by treatment group at the various 6-month timepoints. Notably, change from baseline BMD of the lumbar spine and femoral neck was significantly higher (and positive) in the risedronate 5mg group at all follow-up timepoints relative to the placebo group, and at all timepoints except 6 months for the femoral trochanter measurements. Regarding adverse events, there was no difference in the incidence of upper GI adverse events between the two groups. GI complaints “were the most common adverse events associated with study discontinuance,” and GI events led to 42% of placebo withdrawals but only 36% of the 5mg risedronate withdrawals.

Oral risedronate reduces the risk of vertebral and non-vertebral fractures in patients with osteoporosis while increasing bone mineral density.

Overall, this was a large, well-designed RCT that demonstrated a concrete treatment benefit. As a result, oral bisphosphonate therapy has become the standard of care both for treatment and prevention of osteoporosis. This study, as well as others, demonstrated that such therapies are well-tolerated with relatively few side effects.

A notable strength of this study is that it did not exclude patients with GI comorbidities. One weakness is the modification of the trial protocol to eliminate the risedronate 2.5mg treatment arm after 1 year of study. Although this arm demonstrated a reduction in vertebral fracture at 1 year relative to placebo (p = 0.02), its elimination raises suspicion that the pre-specified analyses were not yielding the anticipated results during the interim analysis and thus the less-impressive treatment arm was discarded.

Further Reading/References:
1. Weekly alendronate vs. weekly risedronate
2. Comparative effectiveness of pharmacologic treatments to prevent fractures: an updated systematic review (2014)

Summary by Duncan F. Moore, MD


“Effect of carvedilol on survival in severe chronic heart failure”

by the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study Group

N Engl J Med. 2001 May 31;344(22):1651-8. [free full text]

We are all familiar with the role of beta-blockers in the management of heart failure with reduced ejection fraction. In the late 1990s, a growing body of excellent RCTs demonstrated that metoprolol succinate, bisoprolol, and carvedilol improved morbidity and mortality in patients with mild to moderate HFrEF, while the only trial of beta-blockade (with bucindolol) in patients with severe HFrEF failed to demonstrate a mortality benefit. In 2001, the COPERNICUS trial further elucidated the mortality benefit of carvedilol in patients with severe HFrEF.

Population: patients with severe CHF (NYHA class III-IV symptoms and LVEF < 25%) despite “appropriate conventional therapy”

Intervention: carvedilol with protocolized uptitration (in addition to pt’s usual meds)

Comparison: placebo with protocolized uptitration (in addition to pt’s usual meds)

Outcomes: all-cause mortality and combined risk of death or hospitalization for any cause

2289 patients were randomized before the trial was stopped early due to a higher-than-expected mortality benefit in the carvedilol arm. Mean follow-up was 10.4 months. Regarding mortality: 190 (16.8%) of placebo patients died, while only 130 (11.2%) of carvedilol patients died (p = 0.0014; NNT = 17.9). Regarding mortality or hospitalization: 507 (44.7%) of placebo patients died or were hospitalized, while only 425 (36.8%) of carvedilol patients did (NNT = 12.6). Both outcomes were of similar direction and magnitude in subgroup analyses (age, sex, LVEF < 20% or > 20%, ischemic vs. non-ischemic CHF, study site location, and no CHF hospitalization within the year preceding randomization).

In severe heart failure with reduced ejection fraction, carvedilol significantly reduces mortality and hospitalization risk.

This was a straightforward, well-designed, double-blind RCT with a compelling conclusion. In addition, the dropout rate was higher in the placebo arm than the carvedilol arm! Despite longstanding clinician fears that beta-blockade would be ineffective or even harmful in patients with already advanced (but compensated) HFrEF, this trial definitively established the role for beta-blockade in such patients.

Per the 2013 ACCF/AHA guidelines, “use of one of the three beta blockers proven to reduce mortality (e.g. bisoprolol, carvedilol, and sustained-release metoprolol succinate) is recommended for all patients with current or prior symptoms of HFrEF, unless contraindicated.”

Of note, there are two COPERNICUS trials. This is the first reported study, in NEJM from 2001, which reports only the mortality and mortality + hospitalization results, again in the context of a highly anticipated trial that was terminated early due to mortality benefit. A year later, the full results were published in Circulation, which described findings such as a decreased number of hospitalizations, fewer total hospitalization days, fewer days hospitalized for CHF, improved subjective scores, and fewer serious adverse events (e.g. sudden death, cardiogenic shock, VT) in the carvedilol arm.

Further Reading/References:
1. 2013 ACCF/AHA Guideline for the Management of Heart Failure
2. COPERNICUS, 2002 Circulation version
3. Wiki Journal Club (describes 2001 NEJM, cites 2002 Circulation)
4. 2 Minute Medicine (describes and cites 2002 Circulation)

Summary by Duncan F. Moore, MD

Week 14 – ARDSNet aka ARMA

“Ventilation with Lower Tidal Volumes as Compared with Traditional Tidal Volumes for Acute Lung Injury and the Acute Respiratory Distress Syndrome”

by the Acute Respiratory Distress Syndrome Network (ARDSNet)

N Engl J Med. 2000 May 4;342(18):1301-8. [free full text]

Acute respiratory distress syndrome (ARDS) is an inflammatory and highly morbid lung injury found in many critically ill patients. In the 1990s, it was hypothesized that overdistention of aerated lung volumes and elevated airway pressures might contribute to the severity of ARDS, and indeed some work in animal models supported this theory. Prior to the ARDSNet study, four randomized trials had been conducted investigating the possible protective effect of ventilation with lower tidal volumes, but their results were conflicting.

Population: patients with ARDS diagnosed within < 36 hrs
Intervention: initial tidal volume 6 ml/kg predicted body weight, downtitrated as necessary to maintain plateau pressure ≤ 30 cm of water
Comparison: initial tidal volume 12 ml/kg predicted body weight, downtitrated as necessary to maintain plateau pressure ≤ 50 cm of water
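Note that both arms scaled tidal volume to predicted body weight (PBW), computed from sex and height, rather than actual weight. A minimal sketch, assuming the PBW formulas used in the ARDSNet protocol (base of 50 kg for men or 45.5 kg for women, plus 0.91 kg per cm of height above 152.4 cm); verify against the original paper before any clinical use:

```python
# Illustrative sketch: predicted body weight (PBW) and the resulting
# initial tidal volume for the two ARDSNet arms (6 vs. 12 ml/kg PBW).

def predicted_body_weight_kg(height_cm, male):
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def initial_tidal_volume_ml(height_cm, male, ml_per_kg=6):
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# Example: a 175 cm man has a PBW of ~70.6 kg, so an initial tidal
# volume of ~423 ml at 6 ml/kg vs. ~847 ml at 12 ml/kg.
pbw = predicted_body_weight_kg(175, male=True)
print(round(pbw, 1), round(initial_tidal_volume_ml(175, True)))
```

Using PBW matters because lung size tracks height and sex, not actual body weight; dosing tidal volume by actual weight systematically overdistends the lungs of heavier patients.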


Primary outcomes:
1) in-hospital mortality
2) ventilator-free days within the first 28 days

Secondary outcomes:
1) number of days without organ failure
2) occurrence of barotrauma
3) reduction in IL-6 concentration from day 0 to day 3


861 patients were randomized before the trial was stopped early due to the increased mortality in the control arm noted during interim analysis. In-hospital mortality was 31.0% in the lower tidal volume group and 39.8% in the traditional tidal volume group (p = 0.007, NNT = 11.4). Ventilator-free days were 12±11 in the lower tidal volume group vs. 10±11 in the traditional group (p = 0.007). The lower tidal volume group had more days without organ failure (15±11 vs. 12±11, p = 0.006). There was no difference in rates of barotrauma between the two groups. IL-6 concentration decrease between days 0 and 3 was greater in the low tidal volume group (p < 0.001), and IL-6 concentration at day 3 was lower in the low tidal volume group (p = 0.002).

Low tidal volume ventilation decreases mortality in ARDS relative to “traditional” tidal volumes.

The authors felt that this study confirmed the results of prior animal models and conclusively answered the question of whether or not low tidal volume ventilation provided a mortality benefit. In fact, in the years following, low tidal volume ventilation became the standard of care, and a robust body of literature followed this study to further delineate a “lung protective strategy.”

Critics of the study noted that at the time of the study the standard of care/“traditional” tidal volume in ARDS was less than the 12 ml/kg used in the comparison arm. (Non-enrolled patients at the participating centers were receiving a mean tidal volume of 10.3 ml/kg.) Thus not only was the trial making a comparison to a faulty control, but it was also potentially harming patients in the control arm. Here is an excellent summary of the ethical issues and debate regarding this specific issue and regarding control arms of RCTs in general.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “Low tidal volume ventilation is the standard of care in patients with ARDS (P/F < 300). Use ≤ 6 ml/kg predicted body weight, follow plateau pressures, and be cautious of mixed modes in which you set a tidal volume but the ventilator can adjust and choose a larger one.”

PulmCCM is an excellent blog, and they have a nice page reviewing this topic and summarizing some of the research and guidelines that have followed.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. PulmCCM “Mechanical Ventilation in ARDS: Research Update”
4. Georgetown Critical Care Top 40, page 6

Summary by Duncan F. Moore, MD

Week 13 – CURB-65

“Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study”

Thorax. 2003 May;58(5):377-82. [free full text]

Community-acquired pneumonia (CAP) is frequently encountered by the admitting medicine team. Ideally, the patient’s severity at presentation and risk for further decompensation should determine the appropriate setting for further care, whether as an outpatient, on an inpatient ward, or in the ICU. At the time of this 2003 study, the predominant decision aid was the 20-variable Pneumonia Severity Index. The authors of this study sought to develop a simpler decision aid for determining the appropriate level of care at presentation.

Population: adults admitted for CAP via the ED at three non-US academic medical centers

Intervention/Comparison: none

Outcome: 30-day mortality

Additional details about methodology: This study analyzed the aggregate data from three previous CAP cohort studies. 80% of the dataset was analyzed as a derivation cohort – meaning it was used to identify statistically significant, clinically relevant prognostic factors that allowed for mortality risk stratification. The resulting model was applied to the remaining 20% of the dataset (the validation cohort) in order to assess the accuracy of its predictive ability.

The following variables were integrated into the final model (CURB-65):

  1. Confusion
  2. Urea > 19mg/dL (7 mmol/L)
  3. Respiratory rate ≥ 30 breaths/min
  4. low Blood pressure (systolic BP < 90 mmHg or diastolic BP ≤ 60 mmHg)
  5. age ≥ 65
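The scoring rule above is simple enough to express directly. A minimal sketch (thresholds follow the list above; the mortality bands are the derivation-cohort figures from the paper; function names are illustrative):

```python
# Illustrative sketch of CURB-65 scoring: one point per criterion
# (Python booleans sum as 0/1).

def curb65_score(confusion, urea_mmol_per_l, resp_rate,
                 systolic_bp, diastolic_bp, age):
    score = 0
    score += bool(confusion)
    score += urea_mmol_per_l > 7            # urea > 7 mmol/L (> 19 mg/dL)
    score += resp_rate >= 30                # breaths/min
    score += systolic_bp < 90 or diastolic_bp <= 60   # mmHg
    score += age >= 65
    return score

def risk_band(score):
    if score <= 1:
        return "low (~1.5% 30-day mortality); consider home treatment"
    if score == 2:
        return "intermediate (~9.2%); hospital-supervised treatment"
    return "high (~22%); admit, consider ICU if score 4 or 5"

print(curb65_score(confusion=False, urea_mmol_per_l=8.2, resp_rate=32,
                   systolic_bp=85, diastolic_bp=50, age=72))   # → 4
```

Such a function captures the aid's appeal over the 20-variable Pneumonia Severity Index: five bedside inputs, equal weights, and a direct mapping to disposition.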

1068 patients were analyzed. 821 (77%) were in the derivation cohort. 86% of patients received IV antibiotics, 5% were admitted to the ICU, and 4% were intubated. 30-day mortality was 9%. 9 of 11 clinical features examined in univariate analysis were statistically significant (see Table 2).

Ultimately, using the above-described CURB-65 model, in which 1 point is assigned for each clinical characteristic, patients with a CURB-65 score of 0 or 1 had 1.5% mortality, patients with a score of 2 had 9.2% mortality, and patients with a score of 3 or more had 22% mortality. Similar values were demonstrated in the validation cohort. Table 5 summarizes the sensitivity, specificity, PPVs, and NPVs of each CURB-65 score for 30-day mortality in both cohorts. As we would expect from a good predictive model, the sensitivity starts out very high and decreases with increasing score, while the specificity starts out very low and increases with increasing score. For the clinical application of their model, the authors selected the cut points of 1, 2, and 3 (see Figure 2).

CURB-65 is a simple 5-variable decision aid that is helpful in the initial stratification of mortality risk in patients with CAP.

The wide range of specificities and sensitivities at different values of the CURB-65 score makes it a robust tool for risk stratification. The authors felt that patients with a score of 0-1 were “likely suitable for home treatment,” patients with a score of 2 should have “hospital-supervised treatment,” and patients with a score of ≥ 3 had “severe pneumonia” and should be admitted (with consideration of ICU admission if score of 4 or 5).

Following the publication of the CURB-65 Score, the author of the Pneumonia Severity Index (PSI) published a prospective cohort study of CAP that examined the discriminatory power (area under the receiver operating characteristic curve) of the PSI vs. CURB-65. His study found that the PSI “has a higher discriminatory power for short-term mortality, defines a greater proportion of patients at low risk, and is slightly more accurate in identifying patients at low risk” than the CURB-65 score.

Expert opinion at UpToDate prefers the PSI over the CURB-65 score based on its more robust base of confirmatory evidence. Of note, the author of the PSI is one of the authors of the relevant UpToDate article. In an important contrast from the CURB-65 authors, these experts suggest that patients with a CURB-65 score of 0 be managed as outpatients, while patients with a score of 1 and above “should generally be admitted.”

Further Reading/References:
1. Original publication of the PSI, NEJM (1997)
2. PSI vs. CURB-65 (2005)
3. Wiki Journal Club
4. 2 Minute Medicine
5. UpToDate, “CAP in adults: assessing severity and determining the appropriate level of care”

Summary by Duncan F. Moore, MD

Week 12 – Early Palliative Care in NSCLC

“Early Palliative Care for Patients with Metastatic Non-Small-Cell Lung Cancer”

N Engl J Med. 2010 Aug 19;363(8):733-42 [free full text]

Ideally, palliative care improves a patient’s quality of life while facilitating appropriate usage of healthcare resources. However, initiating palliative care late in a disease course or in the inpatient setting may limit these beneficial effects. This 2010 study by Temel et al. sought to demonstrate benefits of early integrated palliative care on patient-reported quality of life outcomes and resource utilization.

Population: outpatients with metastatic NSCLC diagnosed < 8 weeks ago and ECOG performance status 0-2

Intervention: “early palliative care” – met with palliative MD/ARNP within 3 weeks of enrollment and at least monthly afterward

Comparison: standard oncologic care


Primary – change in Trial Outcome Index (TOI) from baseline to 12 weeks

TOI = sum of the lung-cancer, physical well-being, and functional well-being subscales of the Functional Assessment of Cancer Therapy–Lung (FACT-L) scale (scale range 0-84, higher score = better function)


Secondary –
  • change in FACT-L score at 12 weeks (scale range 0-136)
  • change in lung-cancer subscale of FACT-L at 12 weeks (scale range 0-28)
  • “aggressive care,” meaning one of the following: chemo within 14 days before death, lack of hospice care, or admission to hospice ≤ 3 days before death
  • documentation of resuscitation preference in outpatient records
  • prevalence of depression at 12 weeks per HADS and PHQ-9
  • median survival

151 patients were randomized. There were no significant differences in baseline characteristics between the two groups. Palliative-care patients (n=77) had a mean TOI increase of 2.3 points, versus a 2.3-point decrease in the standard-care group (n=73) (p=0.04).

Secondary outcomes:

  • ∆ FACT-L score at 12 weeks: +4.2±13.8 in the palliative group vs. -0.4±13.8 in the standard group (p=0.09 for difference between the two groups)
  • ∆ lung-cancer subscale at 12 weeks: +0.8±3.6 in palliative vs. +0.3±4.0 in standard (p=0.50)
  • aggressive end-of-life care was received in 33% of palliative patients vs. 53% of standard patients (p=0.05)
  • resuscitation preferences were documented in 53% of palliative patients vs. 28% of standard patients (p=0.05)
  • depression at 12 weeks per PHQ-9 was 4% in palliative patients vs. 17% in standard patients (p = 0.04)
  • median survival was 11.6 months in the palliative group versus 8.9 months in the standard group (p=0.02). (See Figure 3 on page 741 for the Kaplan-Meier curve.)

Early palliative care in patients with metastatic non-small cell lung cancer improved quality of life and mood, decreased aggressive end-of-life care, and improved survival.

This is a landmark study, both for its quantification of the quality-of-life (QoL) benefits of palliative intervention and for its seemingly counterintuitive finding that early palliative care actually improved survival.

The authors hypothesized that the demonstrated QoL and mood improvements may have led to the increased survival, as prior studies had associated lower QoL and depressed mood with decreased survival. However, I find more compelling their hypotheses that “the integration of palliative care with standard oncologic care may facilitate the optimal and appropriate administration of anticancer therapy, especially during the final months of life” and earlier referral to a hospice program may result in “better management of symptoms, leading to stabilization of [the patient’s] condition and prolonged survival.”

In practice, this study and those that followed have further spurred the integration of palliative care into many standard outpatient oncology workflows, including features such as co-located palliative care teams and palliative-focused checklists/algorithms for primary oncology providers.

Limitations of this study: 1) a complex, subjective primary endpoint, 2) lack of blinding, and 3) a single-center, minimally diverse patient population.

Further Reading/References:
1. ClinicalTrials.gov
2. Wiki Journal Club
3. Profile of first author Dr. Temel
4. UpToDate, “Benefits, services, and models of subspecialty palliative care”

Summary by Duncan F. Moore, MD

Week 11 – CAST

“Mortality and Morbidity in Patients Receiving Encainide, Flecainide, or Placebo”

The Cardiac Arrhythmia Suppression Trial (CAST) [free full text]

N Engl J Med. 1991 Mar 21;324(12):781-8.

Ventricular arrhythmias are common following MI, and studies have demonstrated that PVCs and other arrhythmias such as non-sustained ventricular tachycardia (NSVT) are independent risk factors for cardiac mortality following MI. As such, by the late 1980s, many patients with PVCs post-MI were treated with antiarrhythmic drugs in an attempt to reduce mortality. The 1991 CAST trial sought to prove what predecessor trials had failed to prove – that suppression of such rhythms post-MI would improve survival.


Population: post-MI patients with ≥ 6 asymptomatic PVCs per hour, no runs of VT ≥ 15 beats, and LVEF < 55% if within 90 days of MI or LVEF < 40% if more than 90 days since MI

  • Patients were further selected by an open-label titration period in which they were assigned to treatment with encainide, flecainide, or moricizine.
  • “Responders” had at least 80% suppression of PVCs and 90% suppression of runs of VT.

Intervention: continuation of antiarrhythmic drug assigned during titration period

Comparison: transition from titration antiarrhythmic drug to placebo


Primary – death or cardiac arrest with resuscitation “either of which was due to arrhythmia”

Secondary:
1. all-cause mortality or cardiac arrest
2. cardiac death or cardiac arrest due to any cardiac cause
3. VT of ≥ 15 beats at a rate ≥ 120 bpm
4. syncope
5. permanent pacemaker implantation
6. recurrent MI
7. CHF
8. angina pectoris
9. coronary artery revascularization

The trial was terminated early due to increased mortality in the encainide and flecainide treatment groups. 1498 patients had been randomized following successful titration during the open-label period, and their results are reported in this paper. The results of the moricizine arm were reported later in a separate paper (CAST-II).

RR of death or cardiac arrest due to arrhythmia was 2.64 (95% CI 1.60–4.36). The number needed to harm was 28.2. See Figure 1 on page 783 for a striking Kaplan-Meier curve.

RR of death or cardiac arrest due to all causes was 2.38 (95% CI 1.59–3.57). The number needed to harm was 20.6. See Figure 2 on page 784 for the relevant Kaplan-Meier curve.
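For readers who want the arithmetic, relative risk and number needed to harm follow directly from the trial's event counts. The sketch below uses counts (43/755 in the treatment arm vs. 16/743 on placebo for arrhythmic death/arrest) chosen to reproduce the reported RR of 2.64 and NNH of 28.2; verify the exact counts against the original paper.

```python
def risk_metrics(events_treat, n_treat, events_control, n_control):
    """Relative risk (RR) and number needed to harm (NNH) from trial counts."""
    risk_t = events_treat / n_treat
    risk_c = events_control / n_control
    rr = risk_t / risk_c
    nnh = 1 / (risk_t - risk_c)  # reciprocal of the absolute risk increase
    return rr, nnh

# Arrhythmic death or cardiac arrest (counts chosen to match the reported figures)
rr, nnh = risk_metrics(events_treat=43, n_treat=755, events_control=16, n_control=743)
print(f"RR = {rr:.2f}, NNH = {nnh:.1f}")  # RR = 2.64, NNH = 28.2
```

The same arithmetic with the all-cause counts yields the RR of 2.38 and NNH of 20.6 quoted above.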

Regarding the other secondary outcomes, cardiac death or cardiac arrest due to any cardiac cause was similarly elevated in the treatment group, and there were no significant differences in the non-lethal endpoints between the treatment and placebo arms.

Treatment of asymptomatic ventricular arrhythmias with encainide and flecainide in patients with LV dysfunction following MI results in increased mortality.

This study is a classic example of how a treatment that is thought to make intuitive sense based on observational data (i.e. PVCs and NSVT are associated with cardiac death post-MI, thus reducing these arrhythmias will reduce death) can be easily and definitively disproven with a placebo-controlled trial with hard endpoints (e.g. death). Correlation does not equal causation.

Modern expert opinion at UpToDate notes no role for suppression of asymptomatic PVCs or NSVT in the peri-infarct period. Indeed, such suppression may increase mortality. As noted on Wiki Journal Club, modern ACC/AHA guidelines “do not comment on the use of antiarrhythmic medications in ACS care.”

Further Reading:
1. CAST-I Trial at ClinicalTrials.gov
2. CAST-II trial publication, NEJM (1992)
3. Wiki Journal Club
4. 2 Minute Medicine
5. UpToDate “Clinical features and treatment of ventricular arrhythmias during acute myocardial infarction”

Summary by Duncan F. Moore, MD

Week 10 – MELD

“A Model to Predict Survival in Patients With End-Stage Liver Disease”

Hepatology. 2001 Feb;33(2):464-70. [free full text]

Prior to the adoption of the Model for End-Stage Liver Disease (MELD) score for the allocation of liver transplants, determination of medical urgency was dependent on the Child-Pugh score. The Child-Pugh score was limited by the inclusion of two subjective variables (severity of ascites and severity of encephalopathy), limited discriminatory ability, and a ceiling effect of laboratory abnormalities. Stakeholders sought an objective, continuous, generalizable index that more accurately and reliably represented disease severity. The MELD score had originally been developed in 2000 to estimate the survival of patients undergoing TIPS. The authors of this 2001 study hypothesized that the MELD score would accurately estimate short-term survival in a wide range of severities and etiologies of liver dysfunction and thus serve as a suitable replacement measure for the Child-Pugh score in the determination of medical urgency in transplant allocation.

This study reported a series of retrospective validation cohorts for the use of MELD in prediction of mortality in advanced liver disease.



  1. cirrhotic inpatients, Mayo Clinic, 1994-1999, n = 282 (see exclusion criteria)
  2. ambulatory patients with noncholestatic cirrhosis, newly-diagnosed, single-center in Italy, 1981-1984, n = 491 consecutive patients
  3. ambulatory patients with primary biliary cirrhosis, Mayo Clinic, 1973-1984, n = 326 (92 lacked all necessary variables for calculation of MELD)
  4. cirrhotic patients, Mayo Clinic, 1984-1988, n = 1179 patients with sufficient follow-up (≥ 3 months) and laboratory data

Index MELD score was calculated for each patient. Death during follow-up was assessed by chart review.

MELD score = 3.8 × ln(serum bilirubin [mg/dL]) + 11.2 × ln(INR) + 9.6 × ln(serum creatinine [mg/dL]) + 6.4 × (etiology: 0 if cholestatic or alcoholic, 1 otherwise)
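As a sanity check on the formula, here is a minimal sketch in Python (function and variable names are mine, not the paper's; note that the later UNOS implementation adds lower bounds and caps on the lab values that are not shown here):

```python
import math

def meld_2001(bilirubin_mg_dl, inr, creatinine_mg_dl, cholestatic_or_alcoholic):
    """Original (2001) MELD score; labs in mg/dL, natural logarithms."""
    etiology = 0 if cholestatic_or_alcoholic else 1
    return (3.8 * math.log(bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine_mg_dl)
            + 6.4 * etiology)

# Hypothetical patient: bilirubin 3.0, INR 1.5, creatinine 2.0, viral cirrhosis
print(round(meld_2001(3.0, 1.5, 2.0, cholestatic_or_alcoholic=False), 1))  # 21.8
```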

Primary study outcome was the concordance c-statistic between MELD score and 3-month survival. The c-statistic is equivalent to the area under receiver operating characteristic (AUROC). Per the authors, “a c-statistic between 0.8 and 0.9 indicates excellent diagnostic accuracy and a c-statistic greater than 0.7 is generally considered as a useful test.” (See page 455 for further explanation.)
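Concretely, the c-statistic is the probability that a randomly chosen patient who died has a higher score than a randomly chosen survivor (ties counted as half). A minimal illustration with made-up scores:

```python
def c_statistic(scores_events, scores_nonevents):
    """Concordance (equivalent to AUROC): fraction of (event, non-event)
    pairs in which the event has the higher score; ties count 0.5."""
    pairs = len(scores_events) * len(scores_nonevents)
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in scores_events
        for n in scores_nonevents
    )
    return concordant / pairs

# Hypothetical MELD scores: 3 patients who died vs. 4 who survived
print(round(c_statistic([30, 25, 22], [18, 20, 24, 12]), 2))  # 0.92
```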

The study did not report a reliable comparison statistic (e.g. c-statistic of MELD vs. Child-Pugh) across all groups.



  • hospitalized Mayo patients (late 1990s): c-statistic for prediction of 3-month survival = 0.87 (95% CI 0.82-0.92)
  • ambulatory, non-cholestatic Italian patients: c-statistic for 3-month survival = 0.80 (95% CI 0.69-0.90)
  • ambulatory PBC patients at Mayo: c-statistic for 3-month survival = 0.87 (95% CI 0.83-0.99)
  • cirrhotic patients at Mayo (1980s): c-statistic for 3-month survival = 0.78 (95% CI 0.74-0.81)


  • There was minimal improvement in the c-statistics for 3-month survival with the individual addition of SBP, variceal bleed, ascites, and encephalopathy to the MELD score (see Table 4, highest increase in c-statistic was 0.03).
  • When the etiology of liver disease was excluded from the MELD score, there was minimal change in the c-statistics (see Table 5, all paired CIs overlap).
  • C-statistics for 1-week mortality ranged from 0.80 to 0.95.

The MELD score is an excellent predictor of short-term mortality in patients with end-stage liver disease of diverse etiology and severity.

Despite its retrospective nature, this study represented a significant improvement upon the Child-Pugh score in determining medical urgency in patients who require liver transplant.

In 2002, the United Network for Organ Sharing (UNOS) adopted a modified version of the MELD score for the prioritization of deceased-donor liver transplants in cirrhosis.

Concurrent with the 2001 publication of this study, Wiesner et al. performed a prospective validation of the use of MELD in the allocation of liver transplantation. When published in 2003, it demonstrated that MELD score accurately predicted 3-month mortality among patients with chronic liver disease on the waitlist.

The MELD score has also been validated in other conditions such as alcoholic hepatitis, hepatorenal syndrome, and acute liver failure (see UpToDate).

Subsequent modifications to the MELD score have been introduced over the years. In 2006, the MELD Exception Guidelines offered extra points for severe comorbidities (e.g. HCC, hepatopulmonary syndrome). In January 2016, the MELD-Na score was adopted and is now used for liver transplant prioritization.

References and Further Reading:
1. “A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts” (2000)
2. MDCalc “MELD Score”
3. Wiesner et al. “Model for end-stage liver disease (MELD) and allocation of donor livers” (2003)
4. Freeman Jr. et al. “MELD exception guidelines” (2006) 
5. 2 Minute Medicine
6. UpToDate “Model for End-stage Liver Disease (MELD)”

Summary by Duncan F. Moore, MD