Week 20 – MELD

“A Model to Predict Survival in Patients With End-Stage Liver Disease”

Hepatology. 2001 Feb;33(2):464-70. [free full text]

Prior to the adoption of the Model for End-Stage Liver Disease (MELD) score for the allocation of liver transplants, the determination of medical urgency was dependent on the Child-Pugh score. The Child-Pugh score was limited by the inclusion of two subjective variables (severity of ascites and severity of encephalopathy), limited discriminatory ability, and a ceiling effect of laboratory abnormalities. Stakeholders sought an objective, continuous, generalizable index that more accurately and reliably represented disease severity. The MELD score had originally been developed in 2000 to estimate the survival of patients undergoing TIPS. The authors of this 2001 study hypothesized that the MELD score would accurately estimate short-term survival in a wide range of severities and etiologies of liver dysfunction and thus serve as a suitable replacement measure for the Child-Pugh score in the determination of medical urgency in transplant allocation.

This study reported a series of four retrospective validation cohorts for the use of MELD in prediction of mortality in advanced liver disease. The index MELD score was calculated for each patient. Death during follow-up was assessed by chart review.

MELD score = 3.8 × ln(bilirubin [mg/dL]) + 11.2 × ln(INR) + 9.6 × ln(Cr [mg/dL]) + 6.4 × (etiology: 0 if cholestatic or alcoholic, 1 otherwise)
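For illustration only (not a clinical calculator), the original 2001 formula can be sketched in code. The function name and argument names are mine; the mg/dL units for bilirubin and creatinine follow the paper's laboratory conventions, and later UNOS modifications (dropping etiology, bounding laboratory values) are not reflected here.

```python
import math

def meld_2001(bilirubin_mg_dl, inr, creatinine_mg_dl, cholestatic_or_alcoholic):
    """Original (2001) MELD score, before later UNOS modifications.

    Etiology term: 0 if cholestatic or alcoholic liver disease, 1 otherwise.
    """
    etiology = 0 if cholestatic_or_alcoholic else 1
    return (3.8 * math.log(bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine_mg_dl)
            + 6.4 * etiology)

# Hypothetical patient: bilirubin 2.0 mg/dL, INR 1.5, Cr 1.2 mg/dL, viral etiology
score = meld_2001(2.0, 1.5, 1.2, cholestatic_or_alcoholic=False)
```

Note that each ln() term means the score rises steeply at low laboratory values, which is one reason later versions clamp inputs to a floor of 1.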

The primary study outcome was the concordance statistic (c-statistic) for MELD score prediction of 3-month survival. The c-statistic is equivalent to the area under the receiver operating characteristic curve (AUROC). Per the authors, “a c-statistic between 0.8 and 0.9 indicates excellent diagnostic accuracy and a c-statistic greater than 0.7 is generally considered as a useful test.” (See page 465 for further explanation.) There was no reliable comparison statistic (e.g. c-statistic of MELD vs. that of Child-Pugh in all groups).
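The c-statistic has an intuitive pairwise reading: for a randomly drawn pair of one patient who died and one who survived, it is the probability that the patient who died had the higher score. A minimal sketch of that concordance computation, on made-up toy scores (not data from the study):

```python
def c_statistic(scores_died, scores_survived):
    """Concordance: fraction of (died, survived) pairs in which the patient
    who died had the higher score; ties count as half-concordant."""
    concordant = 0.0
    for d in scores_died:
        for s in scores_survived:
            if d > s:
                concordant += 1.0
            elif d == s:
                concordant += 0.5
    return concordant / (len(scores_died) * len(scores_survived))

# Toy MELD-like scores at baseline, split by 3-month outcome
print(c_statistic([25, 30, 18], [10, 12, 18, 22]))  # → 0.875
```

A c-statistic of 0.5 is chance-level discrimination; 1.0 is perfect separation of deaths from survivors.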

C-statistic for 3-month survival in the four cohorts ranged from 0.78 to 0.87 (no 95% CIs exceeded 1.0). There was minimal improvement in the c-statistics for 3-month survival with the individual addition of spontaneous bacterial peritonitis, variceal bleed, ascites, and encephalopathy to the MELD score (see Table 4, highest increase in c-statistic was 0.03). When the etiology of liver disease was excluded from the MELD score, there was minimal change in the c-statistics (see Table 5, all paired CIs overlap). C-statistics for 1-week mortality ranged from 0.80 to 0.95.

In conclusion, the MELD score is an excellent predictor of short-term mortality in patients with end-stage liver disease of diverse etiology and severity. Despite the retrospective nature of this study, this study represented a significant improvement upon the Child-Pugh score in determining medical urgency in patients who require liver transplant. In 2002, the United Network for Organ Sharing (UNOS) adopted a modified version of the MELD score for the prioritization of deceased-donor liver transplants in cirrhosis. Concurrent with the 2001 publication of this study, Wiesner et al. performed a prospective validation of the use of MELD in the allocation of liver transplantation. When published in 2003, it demonstrated that MELD score accurately predicted 3-month mortality among patients with chronic liver disease on the waitlist. The MELD score has also been validated in other conditions such as alcoholic hepatitis, hepatorenal syndrome, and acute liver failure (see UpToDate). The score has been modified several times over the years. In 2006, the MELD Exception Guidelines offered extra points for severe comorbidities (e.g. HCC, hepatopulmonary syndrome). In January 2016, the MELD-Na score was adopted and is now used for liver transplant prioritization.

References and Further Reading:
1. “A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts” (2000)
2. MDCalc “MELD Score”
3. Wiesner et al. “Model for end-stage liver disease (MELD) and allocation of donor livers” (2003)
4. Freeman Jr. et al. “MELD exception guidelines” (2006)
5. MELD @ 2 Minute Medicine
6. UpToDate “Model for End-stage Liver Disease (MELD)”

Summary by Duncan F. Moore, MD

Image Credit: Ed Uthman, CC-BY-2.0, via WikiMedia Commons

Week 19 – RALES

“The effect of spironolactone on morbidity and mortality in patients with severe heart failure”

by the Randomized Aldactone Evaluation Study Investigators

N Engl J Med. 1999 Sep 2;341(10):709-17. [free full text]

Inhibition of the renin-angiotensin-aldosterone system (RAAS) is a tenet of the treatment of heart failure with reduced ejection fraction (see post from Week 12 – SOLVD). However, physiologic evidence exists that suggests ACEis only partially inhibit aldosterone production. It had been hypothesized that aldosterone receptor blockade (e.g. with spironolactone) in conjunction with ACE inhibition could synergistically improve RAAS blockade; however, there was substantial clinician concern about the risk of hyperkalemia. In 1996, the RALES investigators demonstrated that the addition of spironolactone 12.5 or 25 mg daily in combination with ACEi resulted in laboratory evidence of increased RAAS inhibition at 12 weeks with an acceptable increased risk of hyperkalemia. The 1999 RALES study was thus designed to evaluate prospectively the mortality benefit and safety of the addition of relatively low-dose aldosterone antagonist treatment to the standard HFrEF treatment regimen.

The study enrolled patients with severe HFrEF (LVEF ≤ 35% and NYHA class IV symptoms within the past 6 months and class III or IV symptoms at enrollment) currently being treated with an ACEi (if tolerated) and a loop diuretic. Patients were randomized to the addition of spironolactone 25mg PO daily or placebo. (The dose could be increased at 8 weeks to 50mg PO daily if the patient showed signs or symptoms of progression of CHF without evidence of hyperkalemia.) The primary outcome was all-cause mortality. Secondary outcomes included death from cardiac causes, hospitalization for cardiac causes, change in NYHA functional class, and incidence of hyperkalemia.

1663 patients were randomized. The trial was stopped early (mean follow-up of 24 months) due to the marked improvement in mortality in the spironolactone group. In the placebo group, 386 patients (46%) died, whereas only 284 (35%) in the spironolactone group died (RR 0.70, 95% CI 0.60 to 0.82, p < 0.001; NNT = 8.8). See the dramatic Kaplan-Meier curve in Figure 1. Relative to placebo, spironolactone treatment reduced deaths secondary to cardiac causes by 31% and hospitalizations for cardiac causes by 30% (p < 0.001 for both). Among placebo patients, NYHA class improved in 33%, was unchanged in 18%, and worsened in 48%; among spironolactone patients, NYHA class improved in 41%, was unchanged in 21%, and worsened in 38% (p < 0.001 for the group difference by Wilcoxon test). “Serious hyperkalemia” occurred in 10 (1%) of placebo patients and 14 (2%) of spironolactone patients (p = 0.42). Treatment discontinuation rates were similar between the two groups.

Among patients with severe HFrEF, the addition of spironolactone improved mortality, reduced hospitalizations for cardiac causes, and improved symptoms without conferring an increased risk of serious hyperkalemia. The authors hypothesized that spironolactone “can prevent progressive heart failure by averting sodium retention and myocardial fibrosis” and can “prevent sudden death from cardiac causes by averting potassium loss and by increasing the myocardial uptake of norepinephrine.” Myocardial fibrosis is thought to be reduced via blocking the role aldosterone plays in collagen formation. Overall, this was a well-designed double-blind RCT that built upon the safety data of the dose-finding 1996 RALES trial and ushered in the era of routine use of aldosterone receptor blockade in severe HFrEF. In 2003, the EPHESUS trial demonstrated a mortality benefit of aldosterone antagonism (with eplerenone) among patients with LV dysfunction following acute MI, and in 2011, the EMPHASIS-HF trial demonstrated a reduction in CV death or HF hospitalization with eplerenone use among patients with EF ≤ 35% and NYHA class II symptoms (and notably among patients with a much higher prevalence of beta-blocker use than those of the mid-1990s RALES cohort). The 2014 TOPCAT trial demonstrated that, among patients with HFpEF, spironolactone does not reduce a composite endpoint of CV mortality, aborted cardiac arrest, or HF hospitalizations.

The 2013 ACCF/AHA Guideline for the Management of Heart Failure recommends the use of aldosterone receptor antagonists in patients with NYHA class II-IV symptoms with LVEF ≤ 35% and following an acute MI in patients with LVEF ≤ 40% with symptomatic HF or with a history of diabetes mellitus. Contraindications include Cr ≥ 2.5 mg/dL or K ≥ 5.0 mEq/L.

Further Reading/References:
1. “Effectiveness of spironolactone added to an angiotensin-converting enzyme inhibitor and a loop diuretic for severe chronic congestive heart failure (the Randomized Aldactone Evaluation Study [RALES]).” American Journal of Cardiology, 1996.
2. RALES @ Wiki Journal Club
3. RALES @ 2 Minute Medicine
4. EPHESUS @ Wiki Journal Club
5. EMPHASIS-HF @ Wiki Journal Club
6. TOPCAT @ Wiki Journal Club
7. 2013 ACCF/AHA Guideline for the Management of Heart Failure

Summary by Duncan F. Moore, MD

Image Credit: Spirono, CC0 1.0, via Wikimedia Commons

Week 18 – Rivers Trial

“Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock”

N Engl J Med. 2001 Nov 8;345(19):1368-77. [free full text]

Sepsis is common and, in its more severe manifestations, confers a high mortality risk. Fundamentally, sepsis is a global mismatch between oxygen demand and delivery. Around the time of this seminal study by Rivers et al., there was increasing recognition of the concept of the “golden hour” in sepsis management – “where definitive recognition and treatment provide maximal benefit in terms of outcome” (1368). Rivers and his team created a “bundle” of early sepsis interventions that targeted preload, afterload, and contractility, dubbed early goal-directed therapy (EGDT). They evaluated this bundle’s effect on mortality and end-organ dysfunction.

The “Rivers trial” randomized adults presenting to a single US academic center ED with ≥ 2 SIRS criteria and either SBP ≤ 90 after a crystalloid challenge of 20-30ml/kg over 30min or lactate > 4mmol/L to either treatment with the EGDT bundle or to the standard of care.

Intervention: early goal-directed therapy (EGDT)

      • Received a central venous catheter with continuous central venous O2 saturation (ScvO2) measurement
      • Treated according to EGDT protocol (see Figure 2, or below) in ED for at least six hours
        • 500ml bolus of crystalloid q30min to achieve CVP 8-12 mmHg
        • Vasopressors to achieve MAP ≥ 65 mmHg
        • Vasodilators to achieve MAP ≤ 90 mmHg
        • If ScvO2 < 70%, transfuse RBCs to achieve Hct ≥ 30%
        • If, after CVP, MAP, and Hct were optimized as above and ScvO2 remained < 70%, dobutamine was added and uptitrated to achieve ScvO2 ≥ 70% or until max dose 20 μg/kg/min
          • dobutamine was de-escalated if MAP < 65 mmHg or HR > 120
        • Patients in whom hemodynamics could not be optimized were intubated and sedated, in order to decrease oxygen consumption
      • Patients were transferred to inpatient ICU bed as soon as able, and upon transfer ScvO2 measurement was discontinued
      • Inpatient team was blinded to treatment group assignment
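The sequential targets above can be summarized as a decision sketch. This is a simplification of the protocol in Figure 2 for illustration only, not clinical guidance; the function and parameter names are mine:

```python
def egdt_next_step(cvp, map_mmhg, scvo2_pct, hct_pct):
    """Return the next EGDT intervention, checking targets in protocol order:
    preload (CVP), then pressure (MAP), then oxygen delivery (ScvO2)."""
    if cvp < 8:
        return "500 mL crystalloid bolus q30min (target CVP 8-12 mmHg)"
    if map_mmhg < 65:
        return "start/uptitrate vasopressors (target MAP >= 65)"
    if map_mmhg > 90:
        return "vasodilators (target MAP <= 90)"
    if scvo2_pct < 70:
        if hct_pct < 30:
            return "transfuse RBCs (target Hct >= 30%)"
        return "add/uptitrate dobutamine to ScvO2 >= 70% (max 20 ug/kg/min)"
    return "goals met; reassess"

print(egdt_next_step(cvp=5, map_mmhg=60, scvo2_pct=65, hct_pct=28))
# → 500 mL crystalloid bolus q30min (target CVP 8-12 mmHg)
```

The ordering matters: the protocol optimizes CVP, then MAP, then Hct before resorting to inotropes.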

The primary outcome was in-hospital mortality. Secondary endpoints included: resuscitation end points, organ-dysfunction scores, coagulation-related variables, administered treatments, and consumption of healthcare resources.

130 patients were randomized to EGDT, and 133 to standard therapy. There were no differences in baseline characteristics. There was no group difference in the prevalence of antibiotics given within the first 6 hours. Standard-therapy patients spent 6.3 ± 3.2 hours in the ED, whereas EGDT patients spent 8.0 ± 2.1 hours (p < 0.001).

In-hospital mortality was 46.5% in the standard-therapy group, and 30.5% in the EGDT group (p = 0.009, NNT 6.25). 28-day and 60-day mortalities were also improved in the EGDT group. See Table 3.
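The NNT quoted here is simply the reciprocal of the absolute risk reduction; a quick arithmetic check (the helper name is mine):

```python
def nnt(control_event_rate, treatment_event_rate):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    return 1.0 / (control_event_rate - treatment_event_rate)

# Rivers trial in-hospital mortality: 46.5% standard therapy vs. 30.5% EGDT
print(round(nnt(0.465, 0.305), 2))  # → 6.25
```

The same one-liner reproduces the RALES NNT of roughly 9 from its 46% vs. 35% mortality figures.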

During the initial six hours of resuscitation, there was no significant group difference in mean heart rate or CVP. MAP was higher in the EGDT group (p < 0.001), but all patients in both groups reached a MAP ≥ 65. ScvO2 ≥ 70% was met by 60.2% of standard-therapy patients and 94.9% of EGDT patients (p < 0.001). A combination endpoint of achievement of CVP, MAP, and UOP (≥ 0.5cc/kg/hr) goals was met by 86.1% of standard-therapy patients and 99.2% of EGDT patients (p < 0.001). Standard-therapy patients had lower ScvO2 and greater base deficit, while lactate and pH values were similar in both groups.

During the period of 7 to 72 hours, the organ-dysfunction scores of APACHE II, SAPS II, and MODS were higher in the standard-therapy group (see Table 2). The prothrombin time, fibrin-split products concentration, and d-dimer concentrations were higher in the standard-therapy group, while PTT, fibrinogen concentration, and platelet counts were similar.

During the initial six hours, EGDT patients received significantly more fluids, pRBCs, and inotropic support than standard-therapy patients. Rates of vasopressor use and mechanical ventilation were similar. During the period of 7 to 72 hours, standard-therapy patients received more fluids, pRBCs, and vasopressors than the EGDT group, and they were more likely to be intubated and to have pulmonary-artery catheterization. Rates of inotrope use were similar. Overall, during the first 72 hrs, standard-therapy patients were more likely to receive vasopressors, be intubated, and undergo pulmonary-artery catheterization. EGDT patients were more likely to receive pRBC transfusion. There was no group difference in total volume of fluid administration or inotrope use. Regarding utilization, there were no group differences in mean duration of vasopressor therapy, mechanical ventilation, or length of stay. Among patients who survived to discharge, standard-therapy patients spent longer in the hospital than EGDT patients (18.4 ± 15.0 vs. 14.6 ± 14.5 days, respectively, p = 0.04).

In conclusion, early goal-directed therapy reduced in-hospital mortality in patients presenting to the ED with severe sepsis or septic shock when compared with usual care. In their discussion, the authors note that “when early therapy is not comprehensive, the progression to severe disease may be well under way at the time of admission to the intensive care unit” (1376).

The Rivers trial has been cited over 11,000 times. It has been widely discussed and dissected for decades. Most importantly, it helped catalyze a then-ongoing paradigm shift in what “usual care” in sepsis is. As noted by our own Drs. Sonti and Vinayak in their Georgetown Critical Care Top 40: “Though we do not use the ‘Rivers protocol’ as written, concepts (timely resuscitation) have certainly infiltrated our ‘standard of care’ approach.” The Rivers trial evaluated the effect of a bundle (multiple interventions). It was a relatively complex protocol, and it has been recognized that the transfusion of blood to Hgb > 10 g/dL may have caused significant harm. In aggregate, the most critical elements of the modern initial resuscitation in sepsis are early administration of antibiotics (notably not protocolized by Rivers) within the first hour and the aggressive administration of IV fluids (now usually 30cc/kg of crystalloid within the first 3 hours of presentation).

More recently, there have been three large RCTs of EGDT versus usual care and/or protocols that used some of the EGDT targets: ProCESS (2014, USA), ARISE (2014, Australia), and ProMISe (2015, UK). In general terms, EGDT provided no mortality benefit compared to usual care. Prospectively, the authors of these three trials planned a meta-analysis – the 2017 PRISM study – which concluded that “EGDT did not result in better outcomes than usual care and was associated with higher hospitalization costs across a broad range of patient and hospital characteristics.” Despite patients in the Rivers trial being sicker than those of ProCESS/ARISE/ProMISe, it was not found in the subgroup analysis of PRISM that EGDT was more beneficial in sicker patients. Overall, the PRISM authors noted that “it remains possible that general advances in the provision of care for sepsis and septic shock, to the benefit of all patients, explain part or all of the difference in findings between the trial by Rivers et al. and the more recent trials.”

Further Reading/References:
1. Rivers trial @ Wiki Journal Club
2. Rivers trial @ 2 Minute Medicine
3. “Early Goal Directed Therapy in Septic Shock” @ Life in The Fast Lane
4. Georgetown Critical Care Top 40
5. “A randomized trial of protocol-based care for early septic shock” (ProCESS). NEJM 2014.
6. “Goal-directed resuscitation for patients with early septic shock” (ARISE). NEJM 2014.
7. “Trial of early, goal-directed resuscitation for septic shock” (ProMISe). NEJM 2015.
8. “Early, Goal-Directed Therapy for Septic Shock – A Patient-level Meta-Analysis” PRISM. NEJM 2017.
9. “Hour-1 Bundle,” Surviving Sepsis Campaign
10. UpToDate, “Evaluation and management of suspected sepsis and septic shock in adults”

Summary by Duncan F. Moore, MD

Image Credit: Clinical_Cases, CC BY-SA 2.5, via Wikimedia Commons

Week 17 – CURB-65

“Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study”

Thorax. 2003 May;58(5):377-82. [free full text]

Community-acquired pneumonia (CAP) is frequently encountered by the admitting medicine team. Ideally, the patient’s severity at presentation and risk for further decompensation should determine the appropriate setting for further care, whether as an outpatient, on an inpatient ward, or in the ICU. At the time of this 2003 study, the predominant decision aid was the 20-variable Pneumonia Severity Index. The authors of this study sought to develop a simpler decision aid for determining the appropriate level of care at presentation.

The study examined the 30-day mortality rates of adults admitted for CAP via the ED at three non-US academic medical centers (data from three previous CAP cohort studies). 80% of the dataset was analyzed as a derivation cohort – meaning it was used to identify statistically significant, clinically relevant prognostic factors that allowed for mortality risk stratification. The resulting model was applied to the remaining 20% of the dataset (the validation cohort) in order to assess the accuracy of its predictive ability.

The following variables were integrated into the final model (CURB-65):

      1. Confusion
      2. Urea > 19 mg/dL (7 mmol/L)
      3. Respiratory rate ≥ 30 breaths/min
      4. low Blood pressure (systolic BP < 90 mmHg or diastolic BP ≤ 60 mmHg)
      5. age ≥ 65
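The five criteria above map directly to a 0-5 score, one point each. A minimal sketch (function and argument names are mine; thresholds follow the list above):

```python
def curb65(confusion, urea_mg_dl, resp_rate, sbp, dbp, age):
    """CURB-65 score: one point per criterion present (0-5)."""
    return sum([
        confusion,                 # C: new-onset confusion
        urea_mg_dl > 19,           # U: urea > 19 mg/dL (~7 mmol/L)
        resp_rate >= 30,           # R: respiratory rate >= 30/min
        sbp < 90 or dbp <= 60,     # B: low blood pressure
        age >= 65,                 # 65: age >= 65
    ])

# Hypothetical patient: tachypneic, hypotensive, uremic 70-year-old, not confused
print(curb65(confusion=False, urea_mg_dl=25, resp_rate=32, sbp=85, dbp=55, age=70))
# → 4
```

Python booleans sum as 0/1, which makes the point-per-criterion structure explicit.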

1068 patients were analyzed. 821 (77%) were in the derivation cohort. 86% of patients received IV antibiotics, 5% were admitted to the ICU, and 4% were intubated. 30-day mortality was 9%. 9 of 11 clinical features examined in univariate analysis were statistically significant (see Table 2).

Ultimately, using the above-described CURB-65 model, in which 1 point is assigned for each clinical characteristic, patients with a CURB-65 score of 0 or 1 had 1.5% mortality, patients with a score of 2 had 9.2% mortality, and patients with a score of 3 or more had 22% mortality. Similar values were demonstrated in the validation cohort. Table 5 summarizes the sensitivity, specificity, PPVs, and NPVs of each CURB-65 score for 30-day mortality in both cohorts. As we would expect from a good predictive model, the sensitivity starts out very high and decreases with increasing score, while the specificity starts out very low and increases with increasing score. For the clinical application of their model, the authors selected the cut points of 1, 2, and 3 (see Figure 2).

In conclusion, CURB-65 is a simple 5-variable decision aid that is helpful in the initial stratification of mortality risk in patients with CAP.

The wide range of specificities and sensitivities at different values of the CURB-65 score makes it a robust tool for risk stratification. The authors felt that patients with a score of 0-1 were “likely suitable for home treatment,” patients with a score of 2 should have “hospital-supervised treatment,” and patients with a score of ≥ 3 had “severe pneumonia” and should be admitted (with consideration of ICU admission if score of 4 or 5).

Following the publication of the CURB-65 Score, the creator of the Pneumonia Severity Index (PSI) published a prospective cohort study of CAP that examined the discriminatory power (area under the receiver operating characteristic curve) of the PSI vs. CURB-65. His study found that the PSI “has a higher discriminatory power for short-term mortality, defines a greater proportion of patients at low risk, and is slightly more accurate in identifying patients at low risk” than the CURB-65 score.

Expert opinion at UpToDate prefers the PSI over the CURB-65 score based on its more robust base of confirmatory evidence. Of note, the author of the PSI is one of the authors of the relevant UpToDate article. In an important contrast to the CURB-65 authors, these experts suggest that patients with a CURB-65 score of 0 be managed as outpatients, while patients with a score of 1 and above “should generally be admitted.”

Further Reading/References:
1. Original publication of the PSI, NEJM (1997)
2. PSI vs. CURB-65 (2005)
3. CURB-65 @ Wiki Journal Club
4. CURB-65 @ 2 Minute Medicine
5. UpToDate, “CAP in adults: assessing severity and determining the appropriate level of care”

Summary by Duncan F. Moore, MD

Week 16 – National Lung Screening Trial (NLST)

“Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”

by the National Lung Screening Trial (NLST) Research Team

N Engl J Med. 2011 Aug 4;365(5):395-409 [free full text]

Despite a reduction in smoking rates in the United States, lung cancer remains the number one cause of cancer death in the United States as well as worldwide. Earlier studies of plain chest radiography for lung cancer screening demonstrated no benefit, and in 2002 the National Lung Screening Trial (NLST) was undertaken to determine whether then-recent advances in CT technology could lead to an effective lung cancer screening method.

The study enrolled adults age 55-74 with 30+ pack-years of smoking (if former smokers, they must have quit within the past 15 years). Patients were randomized to either the intervention of three annual screenings for lung cancer with low-dose CT or to the comparator/control group to receive three annual screenings for lung cancer with PA chest radiograph. The primary outcome was mortality from lung cancer. Notable secondary outcomes were all-cause mortality and the incidence of lung cancer.

53,454 patients were randomized, and both groups had similar baseline characteristics. The low-dose CT group sustained 247 deaths from lung cancer per 100,000 person-years, whereas the radiography group sustained 309 deaths per 100,000 person-years. A relative reduction in rate of death by 20.0% was seen in the CT group (95% CI 6.8 – 26.7%, p = 0.004). The number needed to screen with CT to prevent one lung cancer death was 320. There were 1877 deaths from any cause in the CT group and 2000 deaths in the radiography group, so CT screening demonstrated a risk reduction of death from any cause of 6.7% (95% CI 1.2% – 13.6%, p = 0.02). Incidence of lung cancer in the CT group was 645 per 100,000 person-years and 941 per 100,000 person-years in the radiography group (RR 1.13, 95% CI 1.03 – 1.23).
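The headline relative reduction can be sanity-checked from the two event rates. The paper's 20.0% figure (95% CI 6.8-26.7%) comes from its formal rate-ratio analysis; this crude ratio of the rounded rates lands within rounding of it:

```python
# Lung-cancer deaths per 100,000 person-years in each arm (values quoted above)
ct_rate, cxr_rate = 247.0, 309.0
rel_reduction = 1 - ct_rate / cxr_rate
print(f"relative reduction ≈ {rel_reduction:.1%}")  # → relative reduction ≈ 20.1%
```

The all-cause mortality reduction is computed the same way from the 1877 vs. 2000 death counts, since the arms were nearly equal in size.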

Lung cancer screening with low-dose CT scan in high-risk patients provides a significant mortality benefit. This trial was stopped early because the mortality benefit was so high. The benefit was driven by the reduction in deaths attributed to lung cancer, and when deaths from lung cancer were excluded from the overall mortality analysis, there was no significant difference between the two arms. Largely on the basis of this study, the 2013 USPSTF guidelines for lung cancer screening recommend annual low-dose CT scan in patients who meet NLST inclusion criteria. However, it must be noted that, even in the “ideal” circumstances of this trial performed at experienced centers, 96% of abnormal CT screening results in this trial were actually false positives. Of all positive results, 11% led to invasive studies.

Per UpToDate, since NLST, there have been several European low-dose CT screening trials published. However, all but one (NELSON) appear to be underpowered to demonstrate a possible mortality reduction. Meta-analysis of all such RCTs could allow for further refinement in risk stratification, frequency of screening, and management of positive screening findings.

No randomized trial has ever demonstrated a mortality benefit of plain chest radiography for lung cancer screening. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial tested this modality vs. “community care,” and because the PLCO trial was ongoing at the time the NLST was designed, the NLST authors decided to compare their intervention (CT) to plain chest radiography in case the results of plain chest radiography in PLCO were positive. Ultimately, they were not.

Further Reading:
1. USPSTF Guidelines for Lung Cancer Screening (2013)
2. NLST @ ClinicalTrials.gov
3. NLST @ Wiki Journal Club
4. NLST @ 2 Minute Medicine
5. UpToDate, “Screening for lung cancer”

Summary by Duncan F. Moore, MD

Image Credit: Yale Rosen, CC BY-SA 2.0, via Wikimedia Commons

Week 15 – COPERNICUS

“Effect of carvedilol on survival in severe chronic heart failure”

by the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study Group

N Engl J Med. 2001 May 31;344(22):1651-8. [free full text]

We are all familiar with the role of beta-blockers in the management of heart failure with reduced ejection fraction. In the late 1990s, a growing body of excellent RCTs demonstrated that metoprolol succinate, bisoprolol, and carvedilol improved morbidity and mortality in patients with mild to moderate HFrEF. However, the only trial of beta-blockade (with bucindolol) in patients with severe HFrEF failed to demonstrate a mortality benefit. In 2001, the COPERNICUS trial further elucidated the mortality benefit of carvedilol in patients with severe HFrEF.

The study enrolled patients with severe CHF (NYHA class III-IV symptoms and LVEF < 25%) despite “appropriate conventional therapy” and randomized them to treatment with carvedilol or placebo, each with protocolized uptitration in addition to the patient's usual medications. The major outcomes measured were all-cause mortality and the combined risk of death or hospitalization for any cause.

2289 patients were randomized before the trial was stopped early due to higher than expected survival benefit in the carvedilol arm. Mean follow-up was 10.4 months. Regarding mortality, 190 (16.8%) of placebo patients died, while only 130 (11.2%) of carvedilol patients died (p = 0.0014) (NNT = 17.9). Regarding mortality or hospitalization, 507 (44.7%) of placebo patients died or were hospitalized, but only 425 (36.8%) of carvedilol patients died or were hospitalized (NNT = 12.6). Both outcomes were found to be of similar directions and magnitudes in subgroup analyses (age, sex, LVEF < 20% or >20%, ischemic vs. non-ischemic CHF, study site location, and no CHF hospitalization within year preceding randomization).

Implication/Discussion:
In severe HFrEF, carvedilol significantly reduces mortality and hospitalization risk.

This was a straightforward, well-designed, double-blind RCT with a compelling conclusion. In addition, the dropout rate was higher in the placebo arm than the carvedilol arm! Despite longstanding clinician fears that beta-blockade would be ineffective or even harmful in patients with already advanced (but compensated) HFrEF, this trial definitively established the role for beta-blockade in such patients.

Per the 2013 ACCF/AHA guidelines, “use of one of the three beta blockers proven to reduce mortality (e.g. bisoprolol, carvedilol, and sustained-release metoprolol succinate) is recommended for all patients with current or prior symptoms of HFrEF, unless contraindicated.”

Please note that there are two COPERNICUS reports. This first report (NEJM 2001) presents only the mortality and mortality + hospitalization results, in the context of a highly anticipated trial that was terminated early due to mortality benefit. A year later, the full results were published in Circulation, which described findings such as a decreased number of hospitalizations, fewer total hospitalization days, fewer days hospitalized for CHF, improved subjective scores, and fewer serious adverse events (e.g. sudden death, cardiogenic shock, VT) in the carvedilol arm.

Further Reading/References:
1. 2013 ACCF/AHA Guideline for the Management of Heart Failure
2. 2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure
3. COPERNICUS, 2002 Circulation version
4. Wiki Journal Club (describes 2001 NEJM, cites 2002 Circulation)
5. 2 Minute Medicine (describes and cites 2002 Circulation)

Summary by Duncan F. Moore, MD

Week 14 – IDNT

“Renoprotective Effect of the Angiotensin-Receptor Antagonist Irbesartan in Patients with Nephropathy Due to Type 2 Diabetes”

aka the Irbesartan Diabetic Nephropathy Trial (IDNT)

N Engl J Med. 2001 Sep 20;345(12):851-60. [free full text]

Diabetes mellitus is the most common cause of ESRD in the US. In 1993, a landmark study in NEJM demonstrated that captopril (vs. placebo) slowed the deterioration in renal function in patients with T1DM. However, prior to this 2001 study, no study had addressed definitively whether a similar improvement in renal outcomes could be achieved with RAAS blockade in patients with T2DM. Irbesartan (Avapro) is an angiotensin II receptor blocker that was first approved in 1997 for the treatment of hypertension. Its marketer, Bristol-Myers Squibb, sponsored this trial in hopes of broadening the market for its relatively new drug.

This trial randomized patients with T2DM, hypertension, and nephropathy (per proteinuria and elevated Cr) to treatment with either irbesartan, amlodipine, or placebo. The drug in each arm was titrated to achieve a target SBP ≤ 135, and all patients were allowed non-ACEi/non-ARB/non-CCB drugs as needed. The primary outcome was a composite of the doubling of serum Cr, onset of ESRD, or all-cause mortality. Secondary outcomes included individual components of the primary outcome and a composite cardiovascular outcome.

1715 patients were randomized. The mean blood pressure after the baseline visit was 140/77 in the irbesartan group, 141/77 in the amlodipine group, and 144/80 in the placebo group (p = 0.001 for pairwise comparisons of MAP between irbesartan or amlodipine and placebo). Regarding the primary composite renal endpoint, the unadjusted relative risk was 0.80 (95% CI 0.66-0.97, p = 0.02) for irbesartan vs. placebo, 1.04 (95% CI 0.86-1.25, p = 0.69) for amlodipine vs. placebo, and 0.77 (0.63-0.93, p = 0.006) for irbesartan vs. amlodipine. The groups also differed with respect to individual components of the primary outcome. The unadjusted relative risk of creatinine doubling was 33% lower among irbesartan patients than among placebo patients (p = 0.003) and was 37% lower than among amlodipine patients (p < 0.001). The relative risks of ESRD and all-cause mortality did not differ significantly among the groups. There were no significant group differences with respect to the composite cardiovascular outcome. Importantly, a sensitivity analysis was performed which demonstrated that the conclusions of the primary analysis were not impacted significantly by adjustment for mean arterial pressure achieved during follow-up.

In summary, irbesartan treatment in T2DM resulted in superior renal outcomes when compared to both placebo and amlodipine. This beneficial effect was independent of blood pressure lowering. This was a well-designed, double-blind, randomized, controlled trial. However, it was industry-sponsored, and in retrospect, its choice of study drug seems quaint. The direct conclusion of this trial is that irbesartan is renoprotective in T2DM. In the discussion of IDNT, the authors hypothesize that “the mechanism of renoprotection by agents that block the action of angiotensin II may be complex, involving hemodynamic factors that lower the intraglomerular pressure, the beneficial effects of diminished proteinuria, and decreased collagen formation that may be related to decreased stimulation of transforming growth factor beta by angiotensin II.” In September 2002, on the basis of this trial, the FDA broadened the official indication of irbesartan to include the treatment of type 2 diabetic nephropathy. This trial was published concurrently in NEJM with the RENAAL trial [https://www.wikijournalclub.org/wiki/RENAAL]. RENAAL was a similar trial of losartan vs. placebo in T2DM and demonstrated a similar reduction in the doubling of serum creatinine as well as a 28% reduction in progression to ESRD. In conjunction with the original 1993 ACEi in T1DM study, these two 2001 ARB in T2DM studies led to the overall notion of a renoprotective class effect of ACEis/ARBs in diabetes. Enalapril and lisinopril’s patents expired in 2000 and 2002, respectively. Shortly afterward, generic, once-daily ACE inhibitors entered the US market. Ultimately, such drugs ended up commandeering much of the diabetic-nephropathy-in-T2DM market share for which irbesartan’s owners had hoped.

Further Reading/References:
1. “The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group.” NEJM 1993.
2. CSG Captopril Trial @ Wiki Journal Club
3. IDNT @ Wiki Journal Club
4. IDNT @ 2 Minute Medicine
5. US Food and Drug Administration, New Drug Application #020757
6. RENAAL @ Wiki Journal Club
7. RENAAL @ 2 Minute Medicine

Summary by Duncan F. Moore, MD

Image Credit: Skirtick, CC BY-SA 4.0, via Wikimedia Commons

Week 13 – VERT

“Effects of Risedronate Treatment on Vertebral and Nonvertebral Fractures in Women With Postmenopausal Osteoporosis”

by the Vertebral Efficacy with Risedronate Therapy (VERT) Study Group

JAMA. 1999 Oct 13;282(14):1344-52. [free full text]

Bisphosphonates are a highly effective and relatively safe class of medications for the prevention of fractures in patients with osteoporosis. The VERT trial published in 1999 was a landmark trial that demonstrated this protective effect with the daily oral bisphosphonate risedronate.

The trial enrolled post-menopausal women with either 2 or more vertebral fractures per radiography or 1 vertebral fracture with decreased lumbar spine bone mineral density. Patients were randomized to a treatment arm (risedronate 2.5mg PO daily or risedronate 5mg PO daily) or to the daily PO placebo control arm. Measured outcomes included: 1) the prevalence of new vertebral fracture at 3 years follow-up, per annual imaging, 2) the prevalence of new non-vertebral fracture at 3 years follow-up, per annual imaging, and 3) change in bone mineral density, per DEXA q6 months.

2458 patients were randomized. During the course of the study, “data from other trials indicated that the 2.5mg risedronate dose was less effective than the 5mg dose,” and thus the authors discontinued further data collection on the 2.5mg treatment arm at 1 year into the study. All treatment groups had similar baseline characteristics. 55% of the placebo group and 60% of the 5mg risedronate group completed 3 years of treatment. The prevalence of new vertebral fracture within 3 years was 11.3% in the 5mg risedronate group and 16.3% in the placebo group (RR 0.59, 95% CI 0.43-0.82, p = 0.003; NNT = 20). The prevalence of new non-vertebral fractures at 3 years was 5.2% in the treatment arm and 8.4% in the placebo arm (RR 0.6, 95% CI 0.39-0.94, p = 0.02; NNT = 31). Regarding bone mineral density (BMD), see Figure 4 for a visual depiction of the changes in BMD by treatment group at the various 6-month timepoints. Notably, change from baseline BMD of the lumbar spine and femoral neck was significantly higher (and positive) in the risedronate 5mg group at all follow-up timepoints relative to the placebo group and at all timepoints except 6 months for the femoral trochanter measurements. Regarding adverse events, there was no difference in the incidence of upper GI adverse events between the two groups. GI complaints “were the most common adverse events associated with study discontinuance,” and GI events led to 42% of placebo withdrawals but only 36% of the 5mg risedronate withdrawals.
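As a quick illustrative sketch (not from the paper itself), the NNT figures above follow directly from the reported event rates: the number needed to treat is the reciprocal of the absolute risk reduction.

```python
def nnt(control_rate, treatment_rate):
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    arr = control_rate - treatment_rate
    return round(1 / arr)

# Vertebral fractures: 16.3% placebo vs. 11.3% risedronate 5mg
print(nnt(0.163, 0.113))  # → 20

# Non-vertebral fractures: 8.4% placebo vs. 5.2% risedronate 5mg
print(nnt(0.084, 0.052))  # → 31
```

In other words, roughly 20 patients must be treated for 3 years to prevent one new vertebral fracture, and roughly 31 to prevent one non-vertebral fracture.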

Oral risedronate reduces the risk of vertebral and non-vertebral fractures in patients with osteoporosis while increasing bone mineral density. Overall, this was a large, well-designed RCT that demonstrated a concrete treatment benefit. As a result, oral bisphosphonate therapy has become the standard of care for both the treatment and prevention of osteoporosis. This study, as well as others, demonstrated that such therapies are well-tolerated with relatively few side effects. A notable strength of this study is that it did not exclude patients with GI comorbidities. One weakness is the modification of the trial protocol to eliminate the risedronate 2.5mg treatment arm after 1 year of study. Although this arm demonstrated a reduction in vertebral fracture at 1 year relative to placebo (p = 0.02), its elimination raises suspicion that the pre-specified analyses were not yielding the anticipated results during the interim analysis and thus the less-impressive treatment arm was discarded.

Further Reading/References:
1. Weekly alendronate vs. weekly risedronate
2. Comparative effectiveness of pharmacologic treatments to prevent fractures: an updated systematic review (2014)

Summary by Duncan F. Moore, MD

Image Credit: Nick Smith, CC BY-SA 3.0, via Wikimedia Commons

Week 12 – SOLVD

“Effect of Enalapril on Survival in Patients with Reduced Left Ventricular Ejection Fractions and Congestive Heart Failure”

by the Studies of Left Ventricular Dysfunction (SOLVD) Investigators

N Engl J Med. 1991 Aug 1;325(5):293-302. [free full text]

Heart failure with reduced ejection fraction (HFrEF) is a very common and highly morbid condition. We now know that blockade of the renin-angiotensin-aldosterone system (RAAS) with an ACEi or ARB is a cornerstone of modern HFrEF treatment. The 1991 SOLVD trial played an integral part in demonstrating the benefit of and broadening the indication for RAAS blockade in HFrEF.

The trial enrolled patients with HFrEF (LVEF ≤ 35%) who were already on treatment (but not on an ACEi) and had Cr ≤ 2.0, and randomized them to treatment with enalapril BID (starting at 2.5mg and uptitrated as tolerated to 20mg BID) or matching placebo BID (again, starting at 2.5mg and uptitrated as tolerated to 20mg BID). Of note, all patients first received a single-blind run-in period with enalapril, followed by a single-blind placebo run-in period; only then was the patient randomized to his/her actual study drug in a double-blind fashion. The primary outcomes were all-cause mortality and death from or hospitalization for CHF. Secondary outcomes included hospitalization for CHF, all-cause hospitalization, cardiovascular mortality, and CHF-related mortality.

2569 patients were randomized. Follow-up duration ranged from 22 to 55 months. 510 (39.7%) placebo patients died during follow-up compared to 452 (35.2%) enalapril patients (relative risk reduction of 16% per log-rank test, 95% CI 5-26%, p = 0.0036). See Figure 1 for the relevant Kaplan-Meier curves. 736 (57.3%) placebo patients died or were hospitalized for CHF during follow-up compared to 613 (47.7%) enalapril patients (relative risk reduction 26%, 95% CI 18-34, p < 0.0001). Hospitalizations for heart failure, all-cause hospitalizations, cardiovascular deaths, and deaths due to heart failure were all significantly reduced in the enalapril group. 320 placebo patients discontinued the study drug versus only 182 patients in the enalapril group. Enalapril patients were significantly more likely to report dizziness, fainting, and cough. There was no difference in the prevalence of angioedema.

Treatment of HFrEF with enalapril significantly reduced mortality and hospitalizations for heart failure. The authors note that for every 1000 study patients treated with enalapril, approximately 50 premature deaths and 350 heart failure hospitalizations were averted. The mortality benefit of enalapril appears to be immediate and increases for approximately 24 months. Per the authors, “reductions in deaths and rates of hospitalization from worsening heart failure may be related to improvements in ejection fraction and exercise capacity, to a decrease in signs and symptoms of congestion, and also to the known mechanism of action of the agent – i.e., a decrease in preload and afterload when the conversion of angiotensin I to angiotensin II is blocked.” Strengths of this study include its double-blind, randomized design, large sample size, and long follow-up. A major limitation is that the run-in period allowed patients who did not immediately tolerate enalapril to be excluded prior to randomization.
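As a rough sanity check (my own arithmetic, not the paper's), the "approximately 50 premature deaths averted per 1000 patients" figure can be approximated from the crude mortality rates above; the hospitalization figure cannot be derived the same way, since a single patient can be hospitalized more than once.

```python
# Crude check of the deaths-averted figure using the reported event rates
placebo_death_rate = 0.397    # 510 deaths among placebo patients
enalapril_death_rate = 0.352  # 452 deaths among enalapril patients

# Absolute risk reduction, scaled to 1000 treated patients
arr = placebo_death_rate - enalapril_death_rate
deaths_averted_per_1000 = round(arr * 1000)

print(deaths_averted_per_1000)  # → 45, i.e. "approximately 50"
```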

Prior to SOLVD, studies of ACEi in HFrEF had focused on patients with severe symptoms. The 1987 CONSENSUS trial was limited to patients with NYHA class IV symptoms. SOLVD broadened the indication of ACEi treatment to a wider range of symptom severities and ejection fractions. Per the current 2013 ACCF/AHA guidelines for the management of heart failure, ACEi/ARB therapy is a Class I recommendation in all patients with HFrEF in order to reduce morbidity and mortality.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Effects of enalapril on mortality in severe congestive heart failure – Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). 1987.
4. 2013 ACCF/AHA guideline for the management of heart failure: executive summary

Summary by Duncan F. Moore, MD

Week 11 – Varenicline vs. Bupropion and Placebo for Smoking Cessation

“Varenicline, an α4β2 Nicotinic Acetylcholine Receptor Partial Agonist, vs Sustained-Release Bupropion and Placebo for Smoking Cessation”

JAMA. 2006 Jul 5;296(1):47-55. [free full text]

Assisting our patients in smoking cessation is a fundamental aspect of outpatient internal medicine. At the time of this trial, the only approved pharmacotherapies for smoking cessation were nicotine replacement therapy and bupropion. As the α4β2 nicotinic acetylcholine receptor (nAChR) was thought to be crucial to the reinforcing effects of nicotine, it was hypothesized that a partial agonist for this receptor could yield sufficient effect to satiate cravings and minimize withdrawal symptoms but also limit the reinforcing effects of exogenous nicotine. Thus Pfizer designed this large phase 3 trial to test the efficacy of its new α4β2 nAChR partial agonist varenicline (Chantix) against the only other non-nicotine pharmacotherapy at the time (bupropion) as well as placebo.

The trial enrolled adult smokers (10+ cigarettes per day) with fewer than three months of smoking abstinence in the past year (notable exclusion criteria included numerous psychiatric and substance use comorbidities). Patients were randomized to 12 weeks of treatment with either varenicline uptitrated by day 8 to 1mg BID, bupropion SR uptitrated by day 4 to 150mg BID, or placebo BID. Patients were also given a smoking cessation self-help booklet at the index visit and encouraged to set a quit date of day 8. Patients were followed at weekly clinic visits for the first 12 weeks (treatment duration) and then a mixture of clinic and phone visits for weeks 13-52. Non-smoking status during follow-up was determined by patient self-report combined with exhaled carbon monoxide < 10ppm. The primary endpoint was the 4-week continuous abstinence rate for study weeks 9-12 (as confirmed by exhaled CO level). Secondary endpoints included the continuous abstinence rate for weeks 9-24 and for weeks 9-52.

1025 patients were randomized. Compliance was similar among the three groups and the median duration of treatment was 84 days. Loss to follow-up was similar among the three groups. CO-confirmed continuous abstinence during weeks 9-12 was 44.0% among the varenicline group vs. 17.7% among the placebo group (OR 3.85, 95% CI 2.70–5.50, p < 0.001) vs. 29.5% among the bupropion group (OR vs. varenicline group 1.93, 95% CI 1.40–2.68, p < 0.001). (OR for bupropion vs. placebo was 2.00, 95% CI 1.38–2.89, p < 0.001.) Continuous abstinence for weeks 9-24 was 29.5% among the varenicline group vs. 10.5% among the placebo group (p < 0.001) vs. 20.7% among the bupropion group (p = 0.007). Continuous abstinence rates weeks 9-52 were 21.9% among the varenicline group vs. 8.4% among placebo group (p < 0.001) vs. 16.1% among the bupropion group (p = 0.057). Subgroup analysis of the primary outcome by sex did not yield significant differences in drug efficacy by sex.
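As an illustrative aside (not taken from the paper), the odds ratios above can be roughly reconstructed from the abstinence rates themselves; the published ORs come from the authors' models, so the crude values differ slightly.

```python
def odds_ratio(p1, p0):
    """Crude odds ratio comparing abstinence rate p1 vs. p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Weeks 9-12 continuous abstinence: varenicline 44.0%, bupropion 29.5%, placebo 17.7%
print(odds_ratio(0.440, 0.177))  # crude ≈ 3.65 (paper reports 3.85)
print(odds_ratio(0.440, 0.295))  # crude ≈ 1.88 (paper reports 1.93)
print(odds_ratio(0.295, 0.177))  # crude ≈ 1.95 (paper reports 2.00)
```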

This study demonstrated that varenicline was superior to both placebo and bupropion in facilitating smoking cessation at up to 24 weeks. At greater than 24 weeks, varenicline remained superior to placebo but was similarly efficacious as bupropion. This was a well-designed and executed large, double-blind, placebo- and active-treatment-controlled multicenter US trial. The trial was completed in April 2005 and a new drug application for varenicline (Chantix) was submitted to the FDA in November 2005. Of note, an “identically designed” (per this study’s authors), manufacturer-sponsored phase 3 trial was performed in parallel and reported very similar results in the same July 2006 issue of JAMA (PMID: 16820547) as the above study by Gonzales et al. These robust, positive-outcome pre-approval trials of varenicline helped the drug rapidly obtain approval in May 2006.

Per expert opinion at UpToDate, varenicline remains a preferred first-line pharmacotherapy for smoking cessation. Bupropion is a suitable, though generally less efficacious, alternative, particularly when the patient has comorbid depression. Per UpToDate, the recent (2016) EAGLES trial demonstrated that “in contrast to earlier concerns, varenicline and bupropion have no higher risk of associated adverse psychiatric effects than [nicotine replacement therapy] in smokers with comorbid psychiatric disorders.”

Further Reading/References:
1. This trial @ ClinicalTrials.gov
2. Sister trial: “Efficacy of varenicline, an alpha4beta2 nicotinic acetylcholine receptor partial agonist, vs placebo or sustained-release bupropion for smoking cessation: a randomized controlled trial.” JAMA. 2006 Jul 5;296(1):56-63.
3. Chantix FDA Approval Letter 5/10/2006
4. Rigotti NA. Pharmacotherapy for smoking cessation in adults. Post TW, ed. UpToDate. Waltham, MA: UpToDate Inc.
5. “Neuropsychiatric safety and efficacy of varenicline, bupropion, and nicotine patch in smokers with and without psychiatric disorders (EAGLES): a double-blind, randomised, placebo-controlled clinical trial.” Lancet. 2016 Jun 18;387(10037):2507-20.
6. 2 Minute Medicine: “Varenicline and bupropion more effective than varenicline alone for tobacco abstinence”
7. 2 Minute Medicine: “Varenicline safe for smoking cessation in patients with stable major depressive disorder”

Summary by Duncan F. Moore, MD

Image Credit: Сергей Фатеев, CC BY-SA 3.0, via Wikimedia Commons