Week 35 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

Population: men ages 55-74 enrolled at 10 US academic centers

exclusion criteria – hx of prostate, lung, or colorectal cancer, current cancer tx, and > 1 PSA test in past 3 years

Intervention: annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years

Comparison: usual care

Primary – prostate-cancer-attributable death rate
Secondary – incidence of prostate cancer

Subgroup analyses of primary outcome:

  • patients with no more than 1 PSA test prior to enrollment
  • patients with 2+ PSA tests prior to enrollment

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the control group were between 40% and 50%.

By seven years, there was no significant difference in rates of death attributable to prostate cancer. There were 50 deaths in the screening group and only 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75–1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83–1.50).

By seven years, there was a higher rate of prostate cancer detection in the screening group. 2820 patients were diagnosed in the screening group, but only 2322 were diagnosed in the usual-care group (rate ratio 1.22, 95% CI 1.16–1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11–1.22).
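As a rough sanity check on these figures, note that the two arms were nearly identical in size (38,343 vs. 38,350), so the simple ratio of raw event counts closely approximates the reported rate ratios. (This is only a sketch; the published ratios are based on person-years at risk, so small discrepancies are expected.)

```python
# Ratio of raw event counts, screening vs. usual care.
# Approximates the person-year rate ratios because the arms were nearly equal in size.
def count_ratio(screening_events, usual_care_events):
    return screening_events / usual_care_events

# Prostate-cancer deaths
print(round(count_ratio(50, 44), 2))      # 1.14 (reported rate ratio 1.13 at 7 years)
print(round(count_ratio(92, 82), 2))      # 1.12 (reported 1.11 at 10 years)

# Prostate-cancer diagnoses
print(round(count_ratio(2820, 2322), 2))  # 1.21 (reported 1.22 at 7 years)
print(round(count_ratio(3452, 2974), 2))  # 1.16 (reported 1.17 at 10 years)
```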

Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.

Yearly PSA screening increased the prostate cancer diagnosis rate but did not impact prostate-cancer mortality when compared to the standard of care.

However, there were relatively high rates of PSA testing in the usual-care group (40-50%). The authors cite this finding as a probable major contributor to the lack of mortality difference. Other factors that may have biased toward a null result were prior PSA testing and advances in treatments for prostate cancer during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment. Prior PSA testing likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

  • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
  • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
  • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, PSA screening was performed every 4 years. The authors found an increased rate of prostate cancer detection, but, more importantly, they found that screening decreased prostate-cancer mortality (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving 1.7 screening visits over 9 years). Like PLCO, this study did not report treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the pitiful mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

However, this guideline is under active reconsideration as of March 2018. See https://screeningforprostatecancer.org/. The draft recommendations encourage men ages 55-69 to have an informed discussion with their physician about potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in patients over 70 years old.

Screening for prostate cancer remains a complex and controversial topic. While we await further guidelines, we should continue to provide our patients with the aforementioned informed discussion. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, Screening for Prostate Cancer

Summary by Duncan F. Moore, MD

Week 29 – ALLHAT

“Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic”

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)

JAMA. 2002 Dec 18;288(23):2981-97. [free full text]

Hypertension is a ubiquitous disease, and the cardiovascular and mortality benefits of BP control have been well described. However, as the number of available antihypertensive classes proliferated in the past several decades, a head-to-head comparison of different antihypertensive regimens was necessary to determine the optimal first-step therapy. The 2002 ALLHAT trial was a landmark trial in this effort.

33,357 patients aged 55 years or older with hypertension and at least one other coronary heart disease (CHD) risk factor (previous MI or stroke, LVH by ECG or echo, T2DM, current cigarette smoking, HDL < 35 mg/dL, or documentation of other atherosclerotic cardiovascular disease (CVD)). Notable exclusion criteria: history of hospitalization for CHF, history of treated symptomatic CHF, or known LVEF < 35%.

Prior antihypertensives were discontinued upon initiation of the study drug. Patients were randomized to one of three study drugs in a double-blind fashion. Study drugs and additional drugs were added in a step-wise fashion to achieve a goal BP <140/90 mmHg.

Step 1: titrate assigned study drug

  • chlorthalidone: 12.5 –> (sham titration) –> 25 mg/day
  • amlodipine: 2.5 –> 5 –> 10 mg/day
  • lisinopril: 10 –> 20 –> 40 mg/day

Step 2: add open-label agents at treating physician’s discretion (atenolol, clonidine, or reserpine)

  • atenolol: 25 to 100 mg/day
  • reserpine: 0.05 to 0.2 mg/day
  • clonidine: 0.1 to 0.3 mg BID

Step 3: add hydralazine 25 to 100 mg BID

Outcomes were assessed via pairwise comparisons of chlorthalidone vs. either amlodipine or lisinopril. A doxazosin arm existed initially, but it was terminated early due to an excess of CV events, primarily driven by CHF.


Primary – combined fatal CAD or nonfatal MI

Secondary –

  • all-cause mortality
  • fatal and nonfatal stroke
  • combined CHD (primary outcome, PCI, or hospitalized angina)
  • combined CVD (CHD, stroke, non-hospitalized treated angina, CHF [fatal, hospitalized, or treated non-hospitalized], and PAD)

Over a mean follow-up period of 4.9 years, there was no difference between the groups in either the primary outcome or all-cause mortality.

When compared with chlorthalidone at 5 years, the amlodipine and lisinopril groups had significantly higher systolic blood pressures (by 0.8 mmHg and 2 mmHg, respectively). The amlodipine group had a lower diastolic blood pressure when compared to the chlorthalidone group (0.8 mmHg).

When comparing amlodipine to chlorthalidone for the pre-specified secondary outcomes, amlodipine was associated with an increased risk of heart failure (RR 1.38; 95% CI 1.25-1.52).

When comparing lisinopril to chlorthalidone for the pre-specified secondary outcomes, lisinopril was associated with an increased risk of stroke (RR 1.15; 95% CI 1.02-1.30), combined CVD (RR 1.10; 95% CI 1.05-1.16), and heart failure (RR 1.20; 95% CI 1.09-1.34). The increased risk of stroke was mostly driven by 3 subgroups: women (RR 1.22; 95% CI 1.01-1.46), blacks (RR 1.40; 95% CI 1.17-1.68), and non-diabetics (RR 1.23; 95% CI 1.05-1.44). The increased risk of CVD was statistically significant in all subgroups except in patients aged less than 65. The increased risk of heart failure was statistically significant in all subgroups.

In patients with hypertension and one risk factor for CAD, chlorthalidone, lisinopril, and amlodipine performed similarly in reducing the risks of fatal CAD and nonfatal MI.

The study has several strengths: a large and diverse study population, a randomized, double-blind structure, and the rigorous evaluation of three of the most commonly prescribed “newer” classes of antihypertensives. Unfortunately, neither an ARB nor an aldosterone antagonist was included in the study. Additionally, the step-up therapies were not reflective of contemporary practice. (Instead, patients would likely be prescribed one or more of the primary study drugs.)

The ALLHAT study is one of the hallmark studies of hypertension and has played an important role in hypertension guidelines since it was published. Following the publication of ALLHAT, thiazide diuretics became widely used as first-line drugs in the treatment of hypertension. The low cost of thiazides and their limited side-effect profile are particularly attractive class features. While ALLHAT looked specifically at chlorthalidone, in practice the positive findings were attributed to HCTZ, which has been more often prescribed. The authors of ALLHAT argued that the superiority of thiazides was likely a class effect, but according to the analysis at Wiki Journal Club, “there is little direct evidence that HCTZ specifically reduces the incidence of CVD among hypertensive individuals.” Furthermore, a 2006 study noted that HCTZ has worse 24-hour BP control than chlorthalidone due to a shorter half-life. The ALLHAT authors note that “since a large proportion of participants required more than 1 drug to control their BP, it is reasonable to infer that a diuretic be included in all multi-drug regimens, if possible.” The 2017 ACC/AHA High Blood Pressure Guidelines state that, of the four thiazide diuretics on the market, chlorthalidone is preferred because of a prolonged half-life and trial-proven reduction of CVD (via the ALLHAT study).

Further Reading / References:
1. 2017 ACC Hypertension Guidelines
2. Wiki Journal Club
3. 2 Minute Medicine
4. Ernst et al, “Comparative antihypertensive effects of hydrochlorothiazide and chlorthalidone on ambulatory and office blood pressure.” (2006)
5. Gillis Pharmaceuticals: https://www.youtube.com/watch?v=HOxuAtehumc
6. Concepts in Hypertension, Volume 2 Issue 6

Summary by Ryan Commins, MD

Week 27 – UPLIFT

“A 4-Year Trial of Tiotropium in Chronic Obstructive Pulmonary Disease”

by the Understanding Potential Impacts on Function with Tiotropium (UPLIFT) investigators

N Engl J Med. 2008 October 9; 359(15):1543-1554 [free full text]

The 2008 UPLIFT trial was a four-year, randomized, double-blind, prospective study investigating whether tiotropium could reduce the rate of decline of FEV1 (a common metric for COPD progression). A previous retrospective study had shown a reduced rate of FEV1 decline at one year with daily tiotropium. However, this finding had not been shown in any prospective study. As of 2008, smoking cessation was the only intervention demonstrated prospectively to decrease the rate of decline in FEV1.

Population:  Patients were selected from 490 investigational centers in 37 countries

Inclusion: COPD, age ≥ 40, ≥ 10 pack-year smoking history, post-bronchodilator FEV1 ≤70% of predicted value, and FEV1/FVC ≤70%

Exclusion: history of asthma, COPD exacerbation or respiratory infection within the past 4 weeks, history of pulmonary resection, or use of supplemental O2 for more than 12 hours per day

Intervention: daily tiotropium 18 mcg + usual respiratory medications

Control: daily placebo + usual respiratory medications

(Of note, in both arms, the usual respiratory medications could not include an anticholinergic.)



Primary outcomes:

  • Rate of decline in mean FEV1 before bronchodilation
  • Rate of decline in mean FEV1 after bronchodilation


Secondary outcomes:

  • Rate of decline in FVC
  • Quality of life as measured by St. George’s Respiratory Questionnaire (SGRQ, ranges 0-100 with lower scores indicating improved quality)
  • Rate of COPD exacerbations
  • All-cause mortality

2987 patients were assigned to receive tiotropium, and 3006 were assigned to receive placebo. Baseline characteristics were similar between the two groups. 44.6% of placebo and 36.2% of tiotropium patients did not complete at least 45 months of treatment.

The primary outcomes of decline in mean FEV1 either before or after bronchodilation were not significantly different between the two groups. Before bronchodilation, the difference in mean decline was 0 ml/year (p=0.95). After bronchodilation, the mean decline with tiotropium was 2 ml/year less than with placebo (p=0.21).

Regarding secondary outcomes:
There was no significant difference in rate of decline of FVC. The SGRQ was significantly lower (better) at all time points in the tiotropium group and, on average, was 2.7 points lower than in the placebo group (95% CI 2.0-3.3, p<0.001). The number of COPD exacerbations per year in the tiotropium group was 0.73 vs. 0.85 in the placebo group (RR 0.86, 95% CI 0.81-0.91; p<0.001), and the median time to first exacerbation was longer in the tiotropium group (16.7 months vs. 12.5 months; 95% CI 11.5–13.8). All-cause mortality was not significantly different between the two groups (14.9% vs. 16.5%, HR 0.89; 95% CI 0.79-1.02; p=0.09). Respiratory failure developed in 88 patients in the tiotropium group vs. 120 in the placebo group (RR 0.67, 95% CI 0.51 to 0.89).
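The reported exacerbation rate ratio follows directly from the per-patient-year rates above (a quick arithmetic check; the published confidence interval of course requires the underlying patient-level data):

```python
# Ratio of annualized COPD exacerbation rates, tiotropium vs. placebo.
exacerbations_tiotropium = 0.73  # exacerbations per patient-year
exacerbations_placebo = 0.85     # exacerbations per patient-year

rate_ratio = exacerbations_tiotropium / exacerbations_placebo
print(round(rate_ratio, 2))  # 0.86, matching the reported RR (95% CI 0.81-0.91)
```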

The UPLIFT study demonstrated no significant change in rate of decline in FEV1 with tiotropium therapy compared to placebo. However, tiotropium therapy improved quality of life and reduced the frequency of COPD exacerbations and respiratory failure. Overall, this study is an excellent example of how a well-designed prospective study can overturn the results of prior retrospective analyses.

The authors offered three potential reasons for the lack of difference in rate of FEV1 decline among the groups. First, tiotropium may not actually alter the decline of lung function in COPD. Second, since both groups were permitted any respiratory medications other than another anticholinergic, there may have been a “ceiling effect” reached by the alternative medications, and thus no additional benefit offered by tiotropium therapy. Third, the authors noted the placebo group dropouts tended to have more severe COPD, and so the remaining “healthy survivor” patients may have biased the group differences toward a null result.

Limitations of this study include a high dropout rate in both groups as well as a large male predominance (~75%) that limits generalizability. Finally, the limited clinical benefits of daily tiotropium use are not likely to be cost-effective. In 2010, researchers applied the treatment effects demonstrated in UPLIFT to an observational dataset of 56,321 tiotropium users in Belgium and estimated an average cost of 1.2 million euros per quality-adjusted life year (QALY) gained.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Neyt et al., “Tiotropium’s cost-effectiveness for the treatment of COPD: a cost-utility analysis under real-world conditions” (2010)

Summary by Gordon Pelegrin, MD


“A Controlled Trial of Renal Denervation for Resistant Hypertension”

N Engl J Med. 2014 Apr 10;370(15):1393-401. [free full text]

Approximately 10% of patients with hypertension have resistant hypertension (SBP > 140 despite adherence to three maximally tolerated doses of antihypertensives, including a diuretic). Evidence suggests that the sympathetic nervous system plays a large role in such cases, so catheter-based radiofrequency ablation of the renal arteries (renal denervation therapy) was developed as a potential treatment for resistant HTN. The 2010 SYMPLICITY HTN-2 trial was a small (n=106), non-blinded, randomized trial of renal denervation vs. continued care with oral antihypertensives that demonstrated a remarkable 30 mmHg greater decrease in SBP with renal denervation. Thus the 2014 SYMPLICITY HTN-3 trial was designed to evaluate the efficacy of renal denervation in a single-blinded trial with a sham-procedure control group.

Population: adults with resistant HTN with SBP ≥ 160 despite adherence to 3+ maximized antihypertensive drug classes, including a diuretic

pertinent exclusion criteria: 2º HTN, renal artery stenosis > 50%, prior renal artery intervention
(Note – all patients received angiography prior to randomization.)

Intervention: renal denervation with the Symplicity (Medtronic) radioablation catheter
Comparison: renal angiography only (sham procedure)

1º – mean change in office systolic BP from baseline at 6 months (examiner blinded to intervention)

2º – change in mean 24hr ambulatory SBP at 6 months

primary safety endpoint – composite of death, ESRD, embolic event with end-organ damage, renal artery or other vascular complication, hypertensive crisis within 30 days, or new renal artery stenosis of > 70%


535 patients were randomized. There were no differences in baseline characteristics between the two groups. On average, patients were receiving five antihypertensive medications.

There was no significant difference in reduction of SBP between the two groups at 6 months. ∆SBP was -14.13 ± 23.93 mmHg in the denervation group vs. -11.74 ± 25.94 mmHg in the sham-procedure group, for a between-group difference of -2.39 mmHg (95% CI -6.89 to 2.12, p = 0.26 with a superiority margin of 5 mmHg). The change in 24hr ambulatory SBP at 6 months was -6.75 ± 15.11 mmHg in the denervation group vs. -4.79 ± 17.25 mmHg in the sham-procedure group, for a between-group difference of -1.96 mmHg (95% CI -4.97 to 1.06, p = 0.98 with a superiority margin of 2 mmHg). There was no significant difference in the prevalence of the composite safety endpoint at 6 months with 4.0% of the denervation group and 5.8% of the sham-procedure group reaching the endpoint (percentage-point difference of -1.9, 95% CI -6.0 to 2.2).
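As a quick arithmetic check, the between-group differences follow directly from the reported within-group changes, and neither approaches its pre-specified superiority margin:

```python
# Between-group difference = change in denervation arm minus change in sham arm.
office_diff = -14.13 - (-11.74)      # office SBP change at 6 months, mmHg
ambulatory_diff = -6.75 - (-4.79)    # 24hr ambulatory SBP change at 6 months, mmHg

print(round(office_diff, 2))      # -2.39 mmHg (superiority margin was 5 mmHg)
print(round(ambulatory_diff, 2))  # -1.96 mmHg (superiority margin was 2 mmHg)
```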

In patients with resistant hypertension, renal denervation therapy provided no reduction in SBP at 6-month follow-up relative to a sham procedure.

This trial was an astounding failure for Medtronic and its Symplicity renal denervation radioablation catheter. The magnitude of the difference in results between the non-blinded, no-sham-procedure SYMPLICITY HTN-2 trial and this patient-blinded, sham-procedure-controlled trial is likely a product of 1) a marked placebo effect of procedural intervention, 2) Hawthorne effect in the non-blinded trial, and 3) regression toward the mean (patients were enrolled based on unusually high BP readings that over the course of the trial declined to reflect a lower true baseline).

Currently, there is no role for renal denervation therapy in the treatment of HTN (resistant or otherwise). However, despite the results of SYMPLICITY HTN-3, other companies and research groups are assessing the role of different radioablation catheters in patients with low-risk essential HTN and with resistant HTN (for example, see https://www.ncbi.nlm.nih.gov/pubmed/29224639).

Further Reading/References:
1. UpToDate, “Treatment of resistant hypertension,” heading “Catheter-based radiofrequency ablation of sympathetic nerves”

Summary by Duncan F. Moore, MD

Week 18 – VERT

“Effects of Risedronate Treatment on Vertebral and Nonvertebral Fractures in Women With Postmenopausal Osteoporosis”

by the Vertebral Efficacy with Risedronate Therapy (VERT) Study Group

JAMA. 1999 Oct 13;282(14):1344-52. [free full text]

Bisphosphonates are a highly effective and relatively safe class of medications for the prevention of fractures in patients with osteoporosis. The VERT trial published in 1999 was a landmark trial that demonstrated this protective effect with the daily oral bisphosphonate risedronate.

Population: post-menopausal women with either 2 or more vertebral fractures per radiography or 1 vertebral fracture with decreased lumbar spine bone mineral density

Intervention: risedronate 2.5 mg PO daily or risedronate 5 mg PO daily

Comparison: placebo PO daily

1. prevalence of new vertebral fracture at 3 years follow-up, per annual imaging
2. prevalence of new non-vertebral fracture at 3 years follow-up, per annual imaging
3. change in bone mineral density, per DEXA q6 months

2458 patients were randomized. During the course of the study, “data from other trials indicated that the 2.5mg risedronate dose was less effective than the 5mg dose,” and thus the authors discontinued further data collection on the 2.5 mg treatment arm 1 year into the study. All treatment groups had similar baseline characteristics. 55% of the placebo group and 60% of the 5 mg risedronate group completed 3 years of treatment.

The prevalence of new vertebral fracture within 3 years was 11.3% in the risedronate group and 16.3% in the placebo group (RR 0.59, 95% CI 0.43–0.82, p = 0.003; NNT = 20). The prevalence of new non-vertebral fractures at 3 years was 5.2% in the treatment arm and 8.4% in the placebo arm (RR 0.6, 95% CI 0.39–0.94, p = 0.02; NNT = 31). Regarding bone mineral density (BMD), see Figure 4 for a visual depiction of the changes in BMD by treatment group at the various 6-month timepoints. Notably, change from baseline BMD of the lumbar spine and femoral neck was significantly higher (and positive) in the risedronate 5 mg group at all follow-up timepoints relative to the placebo group, and at all timepoints except 6 months for the femoral trochanter measurements.

Regarding adverse events, there was no difference in the incidence of upper GI adverse events between the two groups. GI complaints “were the most common adverse events associated with study discontinuance,” and GI events led to 42% of placebo withdrawals but only 36% of the 5 mg risedronate withdrawals.
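The NNT figures above follow from the absolute risk reduction between the arms (a quick check on the reported numbers):

```python
# Number needed to treat = 1 / absolute risk reduction.
def nnt(control_risk, treatment_risk):
    return 1 / (control_risk - treatment_risk)

# New vertebral fracture at 3 years: 16.3% placebo vs. 11.3% risedronate 5 mg
print(round(nnt(0.163, 0.113)))  # 20, as reported

# New non-vertebral fracture at 3 years: 8.4% placebo vs. 5.2% risedronate 5 mg
print(round(nnt(0.084, 0.052)))  # 31, as reported
```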

Oral risedronate reduces the risk of vertebral and non-vertebral fractures in patients with osteoporosis while increasing bone mineral density.

Overall, this was a large, well-designed RCT that demonstrated a concrete treatment benefit. As a result, oral bisphosphonate therapy has become the standard of care both for treatment and prevention of osteoporosis. This study, as well as others, demonstrated that such therapies are well-tolerated with relatively few side effects.

A notable strength of this study is that it did not exclude patients with GI comorbidities. One weakness is the modification of the trial protocol to eliminate the risedronate 2.5 mg treatment arm after 1 year of study. Although this arm demonstrated a reduction in vertebral fracture at 1 year relative to placebo (p = 0.02), its elimination raises suspicion that the pre-specified analyses were not yielding the anticipated results during the interim analysis, and thus the less-impressive treatment arm was discarded.

Further Reading/References:
1. Weekly alendronate vs. weekly risedronate
2. Comparative effectiveness of pharmacologic treatments to prevent fractures: an updated systematic review (2014)

Summary by Duncan F. Moore, MD


“Effect of carvedilol on survival in severe chronic heart failure”

by the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study Group

N Engl J Med. 2001 May 31;344(22):1651-8. [free full text]

We are all familiar with the role of beta-blockers in the management of heart failure with reduced ejection fraction. In the late 1990s, a growing body of excellent RCTs demonstrated that metoprolol succinate, bisoprolol, and carvedilol improved morbidity and mortality in patients with mild to moderate HFrEF, while the only trial of beta-blockade (with bucindolol) in patients with severe HFrEF failed to demonstrate a mortality benefit. In 2001, the COPERNICUS trial further elucidated the mortality benefit of carvedilol in patients with severe HFrEF.

Population: patients with severe CHF (NYHA class III-IV symptoms and LVEF < 25%) despite “appropriate conventional therapy”

Intervention: carvedilol with protocolized uptitration (in addition to pt’s usual meds)

Comparison: placebo with protocolized uptitration (in addition to pt’s usual meds)

Outcomes: all-cause mortality and combined risk of death or hospitalization for any cause

2289 patients were randomized before the trial was stopped early due to higher than expected mortality benefit in the carvedilol arm. Mean follow-up was 10.4 months. Regarding mortality: 190 (16.8%) of placebo patients died, while only 130 (11.2%) of carvedilol patients died (p = 0.0014) (NNT = 17.9). Regarding mortality or hospitalization: 507 (44.7%) of placebo patients died or were hospitalized, while only 425 (36.8%) of carvedilol patients died or were hospitalized (NNT = 12.6). Both outcomes were found to be of similar directions and magnitudes in subgroup analyses (age, sex, LVEF < 20% or >20%, ischemic vs. non-ischemic CHF, study site location, and no CHF hospitalization within year preceding randomization).
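The reported NNTs follow from the event percentages (a quick check; the small discrepancy in the second figure presumably reflects the paper's use of exact, unrounded group denominators):

```python
# Number needed to treat = 1 / absolute risk reduction.
def nnt(control_risk, treatment_risk):
    return 1 / (control_risk - treatment_risk)

# All-cause mortality: 16.8% placebo vs. 11.2% carvedilol
print(round(nnt(0.168, 0.112), 1))  # 17.9, as reported

# Death or hospitalization: 44.7% placebo vs. 36.8% carvedilol
print(round(nnt(0.447, 0.368), 1))  # 12.7, close to the reported 12.6
```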

In severe heart failure with reduced ejection fraction, carvedilol significantly reduces mortality and hospitalization risk.

This was a straightforward, well-designed, double-blind RCT with a compelling conclusion. In addition, the dropout rate was higher in the placebo arm than the carvedilol arm! Despite longstanding clinician fears that beta-blockade would be ineffective or even harmful in patients with already advanced (but compensated) HFrEF, this trial definitively established the role for beta-blockade in such patients.

Per the 2013 ACCF/AHA guidelines, “use of one of the three beta blockers proven to reduce mortality (e.g. bisoprolol, carvedilol, and sustained-release metoprolol succinate) is recommended for all patients with current or prior symptoms of HFrEF, unless contraindicated.”

Of note, there are two COPERNICUS trials. This is the first reported study, in NEJM from 2001, which reports only the mortality and mortality + hospitalization results, again in the context of a highly anticipated trial that was terminated early due to mortality benefit. A year later, the full results were published in Circulation, which described findings such as a decreased number of hospitalizations, fewer total hospitalization days, fewer days hospitalized for CHF, improved subjective scores, and fewer serious adverse events (e.g. sudden death, cardiogenic shock, VT) in the carvedilol arm.

Further Reading/References:
1. 2013 ACCF/AHA Guideline for the Management of Heart Failure
2. COPERNICUS, 2002 Circulation version
3. Wiki Journal Club (describes 2001 NEJM, cites 2002 Circulation)
4. 2 Minute Medicine (describes and cites 2002 Circulation)

Summary by Duncan F. Moore, MD

Week 12 – Early Palliative Care in NSCLC

“Early Palliative Care for Patients with Metastatic Non-Small-Cell Lung Cancer”

N Engl J Med. 2010 Aug 19;363(8):733-42 [free full text]

Ideally, palliative care improves a patient’s quality of life while facilitating appropriate usage of healthcare resources. However, initiating palliative care late in a disease course or in the inpatient setting may limit these beneficial effects. This 2010 study by Temel et al. sought to demonstrate benefits of early integrated palliative care on patient-reported quality of life outcomes and resource utilization.

Population: outpatients with metastatic NSCLC diagnosed < 8 weeks ago and ECOG performance status 0-2

Intervention: “early palliative care” – met with palliative MD/ARNP within 3 weeks of enrollment and at least monthly afterward

Comparison: standard oncologic care


Primary – change in Trial Outcome Index (TOI) from baseline to 12 weeks

TOI = sum of the lung-cancer, physical well-being, and functional well-being subscales of the Functional Assessment of Cancer Therapy–Lung (FACT-L) scale (scale range 0-84, higher score = better function)


Secondary –

  • change in FACT-L score at 12 weeks (scale range 0-136)
  • change in lung-cancer subscale of FACT-L at 12 weeks (scale range 0-28)
  • “aggressive care,” meaning one of the following: chemo within 14 days before death, lack of hospice care, or admission to hospice ≤ 3 days before death
  • documentation of resuscitation preference in outpatient records
  • prevalence of depression at 12 weeks per HADS and PHQ-9
  • median survival

151 patients were randomized. There was no significant difference in baseline characteristics between the two groups. Palliative-care patients (n=77) had a mean TOI increase of 2.3 points, versus a 2.3-point decrease in the standard-care group (n=73) (p=0.04).

Secondary outcomes:

  • ∆ FACT-L score at 12 weeks: +4.2 ± 13.8 in the palliative group vs. -0.4 ± 13.8 in the standard group (p=0.09 for difference between the two groups)
  • ∆ lung-cancer subscale at 12 weeks: +0.8±3.6 in palliative vs. +0.3±4.0 in standard (p=0.50)
  • aggressive end-of-life care was received in 33% of palliative patients vs. 53% of standard patients (p=0.05)
  • resuscitation preferences were documented in 53% of palliative patients vs. 28% of standard patients (p=0.05)
  • depression at 12 weeks per PHQ-9 was 4% in palliative patients vs. 17% in standard patients (p = 0.04)
  • median survival was 11.6 months in the palliative group versus 8.9 months in the standard group (p=0.02). (See Figure 3 on page 741 for the Kaplan-Meier curve.)

Early palliative care in patients with metastatic non-small cell lung cancer improved quality of life and mood, decreased aggressive end-of-life care, and improved survival.

This is a landmark study, both for its quantification of the quality-of-life (QoL) benefits of palliative intervention and for its seemingly counterintuitive finding that early palliative care actually improved survival.

The authors hypothesized that the demonstrated QoL and mood improvements may have led to the increased survival, as prior studies had associated lower QoL and depressed mood with decreased survival. However, I find more compelling their hypotheses that “the integration of palliative care with standard oncologic care may facilitate the optimal and appropriate administration of anticancer therapy, especially during the final months of life” and earlier referral to a hospice program may result in “better management of symptoms, leading to stabilization of [the patient’s] condition and prolonged survival.”

In practice, this study and those that followed have further spurred the integration of palliative care into many standard outpatient oncology workflows, including features such as co-located palliative care teams and palliative-focused checklists/algorithms for primary oncology providers.

Limitations of this study: 1) a complex subjective primary endpoint, 2) non-blinded, 3) single-center, minimally diverse patient population.

Further Reading/References:
1. ClinicalTrials.gov
2. Wiki Journal Club
3. Profile of first author Dr. Temel
4. UpToDate, “Benefits, services, and models of subspecialty palliative care”

Summary by Duncan F. Moore, MD

Week 6 – COURAGE

“Optimal Medical Therapy with or without PCI for Stable Coronary Disease”

by the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) Trial Research Group

N Engl J Med. 2007 Apr 12;356(15):1503-16 [free full text]

The optimal medical management of stable coronary artery disease has been well-described. However, prior to the 2007 COURAGE trial, the role of percutaneous coronary intervention (PCI) in the initial management of stable coronary artery disease was unclear. It was known that PCI improved angina symptoms and short-term exercise performance in stable disease, but its mortality benefit and reduction of future myocardial infarction and ACS were unknown.

Population: US and Canadian patients with stable coronary artery disease
(See paper for inclusion/exclusion criteria. Disease had to be sufficiently and objectively severe, but not too severe, and symptoms could not be sustained at the highest CCS grade.)
Intervention: optimal medical management and PCI
(Optimal medical management included antiplatelet, anti-anginal, ACEi/ARB, and cholesterol-lowering therapy.)
Comparison: optimal medical management alone
1º: composite of all-cause mortality and non-fatal MI
2º: composite of all-cause mortality, non-fatal MI, and stroke; hospitalization for unstable angina

2287 patients were randomized. Both groups had similar baseline characteristics with the exception of a higher prevalence of proximal LAD disease in the medical-therapy group. Median duration of follow-up was 4.6 years in both groups. Death or non-fatal MI occurred in 18.4% of the PCI group and in 17.8% of the medical-therapy group (p=0.62). Death, non-fatal MI, or stroke occurred in 20.0% of the PCI group and 19.5% of the medical-therapy group (p=0.62). Hospitalization for ACS occurred in 12.4% of the PCI group and 11.8% of the medical-therapy group (p=0.56). Revascularization during follow-up was performed in 21.1% of the PCI group but in 32.6% of the medical-therapy group (HR 0.60, 95% CI 0.51–0.71, p<0.001). Finally, 66% of PCI patients were free of angina at 1 year of follow-up compared with 58% of medical-therapy patients (p<0.001); rates were 72% and 67% at 3 years (p=0.02) and 72% and 74% at 5 years (not significant).
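As a back-of-envelope illustration (my own arithmetic, not a calculation reported in the paper), the 1-year angina-free proportions imply a modest number needed to treat for symptom benefit:

```python
# Back-of-envelope check using the 1-year angina-free proportions quoted
# above (66% with PCI vs. 58% with medical therapy alone). This is the
# summarizer's arithmetic, not an analysis from the COURAGE paper.
pci_angina_free = 0.66
med_angina_free = 0.58

# Absolute difference in the proportion of patients free of angina at 1 year
ard = pci_angina_free - med_angina_free

# Number needed to treat with PCI for one additional patient to be
# angina-free at 1 year (~12.5, i.e. roughly 13 patients)
nnt = 1 / ard
```

Given that this symptomatic advantage disappeared by 5 years, the NNT frames just how limited the benefit of up-front PCI was in this population.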

In the initial management of stable coronary artery disease, PCI in addition to optimal medical management provided no mortality benefit over optimal medical management alone.

However, initial management with PCI did provide a time-limited improvement in angina symptoms.

As the authors of COURAGE nicely summarize on page 1512, the atherosclerotic plaques of ACS and stable CAD are different. Vulnerable, ACS-prone plaques have thin caps and spread outward along the wall of the coronary artery, whereas the plaques of stable CAD have thick fibrous caps and are associated with inward-directed remodeling that narrows the arterial lumen (and thus causes reproducible angina symptoms and visible luminal narrowing on coronary angiography).

Notable limitations in this study: 1) the population was largely male and white, and 42% of patients came from VA hospitals, limiting the generalizability of the study; 2) drug-eluting stents were not clinically available until the last 6 months of the study, so most stents placed were bare metal.

Later meta-analyses were weakly suggestive of an association of PCI with improved all-cause mortality. It is thought that there may be a subset of patients with stable CAD who achieve a mortality benefit from PCI. Per UpToDate, there are ongoing RCTs investigating this possibility.

It is important to note that all of the above discussions assume that the patient does not have specific coronary artery anatomy in which initial CABG would provide a mortality benefit (e.g. left main disease, multi-vessel disease with decreased LVEF). Finally, PCI should be considered in patients whose physical activity is limited by angina symptoms despite optimal medical therapy.

Further Reading:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Canadian Cardiovascular Society grading of angina pectoris
4. https://www.uptodate.com/contents/stable-ischemic-heart-disease-indications-for-revascularization

Summary by Duncan F. Moore, MD

Week 4 – NLST

“Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”

by the National Lung Screening Trial Research Team

N Engl J Med. 2011 Aug 4;365(5):395-409 [NEJM free full text]

Despite declining smoking rates, lung cancer remains the leading cause of cancer death both in the United States and worldwide. Earlier studies of plain chest radiography for lung cancer screening demonstrated no benefit, and thus in 2002 the National Lung Screening Trial (NLST) was undertaken to determine whether then-recent advances in CT technology could yield an effective lung cancer screening method.

Population: adults age 55-74 with 30+ pack-years of smoking (if former smokers, they must have quit within the past 15 years)
Intervention: three annual screenings for lung cancer with low-dose CT
Comparison: three annual screenings for lung cancer with PA chest radiograph
Outcome: 1º = mortality from lung cancer, 2º = mortality from any cause and incidence of lung cancer

53,454 patients were randomized, and both groups had similar baseline characteristics. There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group versus 309 per 100,000 person-years in the radiography group, a 20.0% relative reduction in the rate of death from lung cancer with CT screening (95% CI 6.8–26.7%, p = 0.004). The number needed to screen with CT to prevent one lung cancer death was 320. There were 1877 deaths from any cause in the CT group and 2000 deaths in the radiography group; thus CT screening demonstrated a 6.7% relative reduction in the rate of death from any cause (95% CI 1.2–13.6%, p = 0.02). Incidence of lung cancer was 645 per 100,000 person-years in the CT group and 572 per 100,000 person-years in the radiography group (RR 1.13, 95% CI 1.03–1.23).
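As a quick sanity check (my own arithmetic, not the paper's analysis, which used actual death counts and person-time), the headline relative reduction can be approximately reproduced from the quoted lung-cancer death rates:

```python
# Reproduce the ~20% relative rate reduction from the per-100,000-person-year
# lung-cancer death rates quoted above. Summarizer's arithmetic only; the
# NLST investigators computed this from raw death counts and person-time.
ct_death_rate = 247 / 100_000   # low-dose CT group
cxr_death_rate = 309 / 100_000  # chest radiography group

# Relative reduction in the lung-cancer death rate with CT screening
rrr = 1 - ct_death_rate / cxr_death_rate  # ~0.20
```

The small discrepancy from the published 20.0% reflects rounding of the per-100,000 rates.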

Lung cancer screening with low-dose CT scan in high-risk patients provides a significant mortality benefit.

This trial was stopped early because of the magnitude of the mortality benefit. The benefit was driven by the reduction in deaths attributed to lung cancer; when lung-cancer deaths were excluded from the overall mortality analysis, there was no significant difference between the two arms. Largely on the basis of this study, the 2013 USPSTF guidelines for lung cancer screening recommend annual low-dose CT scans in patients who meet the NLST inclusion criteria.

Per UpToDate, there are seven low-dose CT screening trials in progress in Europe. It is hoped that meta-analysis of all such RCTs will allow for further refinement in risk stratification, frequency of screening, and management of positive screening findings.

Of note, no randomized trial has ever demonstrated a mortality benefit of chest radiography for lung cancer screening. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial tested this modality vs. “community care,” and because the PLCO trial was ongoing at the time the NLST was designed, the NLST authors decided to compare their intervention (CT) with chest radiography in case the PLCO chest radiography results were positive (ultimately, they were not).

Further Reading:
1. USPSTF Guidelines for Lung Cancer Screening (2013)
2. ClinicalTrials.gov
3. Wiki Journal Club
4. 2 Minute Medicine

Summary by Duncan F. Moore, MD