Week 40 – Early Palliative Care in NSCLC

“Early Palliative Care for Patients with Metastatic Non-Small-Cell Lung Cancer”

N Engl J Med. 2010 Aug 19;363(8):733-42. [free full text]

Ideally, palliative care improves a patient’s quality of life while facilitating appropriate usage of healthcare resources. However, initiating palliative care late in a disease course or in the inpatient setting may limit these beneficial effects. This 2010 study by Temel et al. sought to demonstrate benefits of early integrated palliative care on patient-reported quality-of-life (QoL) outcomes and resource utilization.

The study enrolled outpatients with metastatic NSCLC diagnosed < 8 weeks prior and ECOG performance status 0-2 and randomized them to either “early palliative care” (met with palliative MD/ARNP within 3 weeks of enrollment and at least monthly afterward) or to standard oncologic care. The primary outcome was the change in Trial Outcome Index (TOI) from baseline to 12 weeks.

TOI = sum of the lung cancer, physical well-being, and functional well-being subscales of the Functional Assessment of Cancer Therapy–Lung (FACT-L) scale (scale range 0-84; higher score = better function)
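For concreteness, the TOI arithmetic can be sketched as a small helper (illustrative only; each of the three subscales is assumed to span 0-28, consistent with the 0-84 total):

```python
def trial_outcome_index(lung_cancer, physical_wb, functional_wb):
    """TOI = sum of the lung cancer, physical well-being, and functional
    well-being subscales of the FACT-L. Each subscale is assumed to span
    0-28, giving the 0-84 total range (higher = better function)."""
    for score in (lung_cancer, physical_wb, functional_wb):
        if not 0 <= score <= 28:
            raise ValueError("each subscale score must be within 0-28")
    return lung_cancer + physical_wb + functional_wb
```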

Secondary outcomes included:

      • change in FACT-L score at 12 weeks (scale range 0-136)
      • change in lung cancer subscale of FACT-L at 12 weeks (scale range 0-28)
      • “aggressive care,” meaning one of the following: chemo within 14 days before death, lack of hospice care, or admission to hospice ≤ 3 days before death
      • documentation of resuscitation preference in outpatient records
      • prevalence of depression at 12 weeks per HADS and PHQ-9
      • median survival

151 patients were randomized. Palliative-care patients (n = 77) had a mean TOI increase of 2.3 points vs. a 2.3-point decrease in the standard-care group (n = 74) (p = 0.04). Median survival was 11.6 months in the palliative group vs. 8.9 months in the standard group (p = 0.02). (See Figure 3 on page 741 for the Kaplan-Meier curve.) Prevalence of depression at 12 weeks per PHQ-9 was 4% in palliative patients vs. 17% in standard patients (p = 0.04). Aggressive end-of-life care was received by 33% of palliative patients vs. 53% of standard patients (p = 0.05). Resuscitation preferences were documented in 53% of palliative patients vs. 28% of standard patients (p = 0.05). There was no significant change in FACT-L score or lung cancer subscale score at 12 weeks.

Early palliative care in patients with metastatic non-small cell lung cancer improved quality of life and mood, decreased aggressive end-of-life care, and improved survival. This is a landmark study, both for its quantification of the QoL benefits of palliative intervention and for its seemingly counterintuitive finding that early palliative care actually improved survival.

The authors hypothesized that the demonstrated QoL and mood improvements may have led to the increased survival, as prior studies had associated lower QoL and depressed mood with decreased survival. However, I find more compelling their hypotheses that “the integration of palliative care with standard oncologic care may facilitate the optimal and appropriate administration of anticancer therapy, especially during the final months of life” and earlier referral to a hospice program may result in “better management of symptoms, leading to stabilization of [the patient’s] condition and prolonged survival.”

In practice, this study and those that followed have further spurred the integration of palliative care into many standard outpatient oncology workflows, including features such as co-located palliative care teams and palliative-focused checklists/algorithms for primary oncology providers. Of note, in the inpatient setting, a recent meta-analysis concluded that early hospital palliative care consultation was associated with a $3200 reduction in direct hospital costs ($4250 in the subgroup of patients with cancer).

Further Reading/References:
1. ClinicalTrials.gov
2. Wiki Journal Club
3. Profile of first author Dr. Temel
4. “Economics of Palliative Care for Hospitalized Adults with Serious Illness: A Meta-analysis” JAMA Internal Medicine (2018)
5. UpToDate, “Benefits, services, and models of subspecialty palliative care”

Summary by Duncan F. Moore, MD

Week 39 – Early TIPS in Cirrhosis with Variceal Bleeding

“Early Use of TIPS in Patients with Cirrhosis and Variceal Bleeding”

N Engl J Med. 2010 Jun 24;362(25):2370-9. [free full text]

Variceal bleeding is a major cause of morbidity and mortality in decompensated cirrhosis. The standard of care for an acute variceal bleed includes a combination of vasoactive drugs, prophylactic antibiotics, and endoscopic techniques (e.g. banding). Transjugular intrahepatic portosystemic shunt (TIPS) can be used to treat refractory bleeding. This 2010 trial sought to determine the utility of early TIPS during the initial bleed in high-risk patients when compared to standard therapy.

The trial enrolled cirrhotic patients (Child-Pugh class B or C with score ≤ 13) with acute esophageal variceal bleeding. All patients received endoscopic band ligation (EBL) or endoscopic injection sclerotherapy (EIS) at the time of diagnostic endoscopy. All patients also received vasoactive drugs (terlipressin, somatostatin, or octreotide). Patients were randomized to either TIPS performed within 72 hours after diagnostic endoscopy or to “standard therapy,” consisting of 1) vasoactive drugs with transition to a nonselective beta blocker once patients were free of bleeding, 2) addition of isosorbide mononitrate titrated to the maximum tolerated dose, and 3) a second session of EBL 7-14 days after the initial session (repeated q10-14 days until variceal eradication was achieved). The primary outcome was a composite of failure to control acute bleeding or failure to prevent “clinically significant” variceal rebleeding (requiring hospital admission or transfusion) at 1 year after enrollment. Selected secondary outcomes included 1-year mortality, development of hepatic encephalopathy (HE), ICU days, and hospital LOS.

359 patients were screened for inclusion, but ultimately only 63 were randomized. Baseline characteristics were similar between the two groups, except that the early TIPS group had a higher rate of previous hepatic encephalopathy. The primary composite endpoint of failure to control acute bleeding or rebleeding within 1 year occurred in 14 of 31 (45%) patients in the pharmacotherapy-EBL group vs. 1 of 32 (3%) patients in the early TIPS group (p = 0.001). The 1-year actuarial probability of remaining free of the primary outcome was 97% in the early TIPS group vs. 50% in the pharmacotherapy-EBL group (ARR 47 percentage points, 95% CI 25-69 percentage points, NNT 2.1). Regarding mortality, at one year, 12 of 31 (39%) patients in the pharmacotherapy-EBL group had died, versus 4 of 32 (13%) in the early TIPS group (p = 0.001, NNT = 4.0). There were no group differences in prevalence of HE at one year (28% in the early TIPS group vs. 40% in the pharmacotherapy-EBL group, p = 0.13), in the 1-year actuarial probability of new or worsening ascites, in length of ICU stay, or in hospitalization duration.
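The ARR and NNT figures above follow directly from the reported rates; a quick sketch of the arithmetic (event rates taken from the numbers in this summary):

```python
def arr_nnt(control_event_rate, treatment_event_rate):
    """Absolute risk reduction and number needed to treat."""
    arr = control_event_rate - treatment_event_rate
    return arr, 1.0 / arr

# Primary composite, from the 1-year actuarial probabilities of
# remaining event-free (50% standard vs. 97% early TIPS):
arr, nnt = arr_nnt(1 - 0.50, 1 - 0.97)
print(f"ARR {arr:.0%}, NNT {nnt:.1f}")  # ARR 47%, NNT 2.1

# 1-year mortality: 12/31 deaths (standard) vs. 4/32 (early TIPS);
# the summary's NNT of 4.0 presumably reflects rounding of these rates
arr, nnt = arr_nnt(12 / 31, 4 / 32)
print(f"ARR {arr:.0%}, NNT {nnt:.1f}")
```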

Early TIPS in acute esophageal variceal bleeding, when compared to standard pharmacotherapy and endoscopic band ligation, improved control of index bleeding, reduced recurrent variceal bleeding at 1 year, and reduced all-cause mortality. Prior studies had demonstrated that TIPS reduced the rebleeding rate but increased the rate of hepatic encephalopathy without improving survival. As such, TIPS had only been recommended as a rescue therapy. In contrast, this study presents compelling data that challenge these paradigms. The authors note that in “patients with Child-Pugh class C or in class B with active variceal bleeding, failure to initially control the bleeding or early rebleeding contributes to further deterioration in liver function, which in turn worsens the prognosis and may preclude the use of rescue TIPS.” Despite this, today, TIPS remains primarily a salvage therapy for use in cases of recurrent bleeding despite standard pharmacotherapy and EBL. There may be a subset of patients in whom early TIPS is the ideal strategy, but further trials will be required to identify this subset.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Prevention of recurrent variceal hemorrhage in patients with cirrhosis”

Summary by Duncan F. Moore, MD

Week 38 – ACCORD

“Effects of Intensive Glucose Lowering in Type 2 Diabetes”

by the Action to Control Cardiovascular Risk in Diabetes (ACCORD) Study Group

N Engl J Med. 2008 Jun 12;358(24):2545-59. [free full text]

We all treat type 2 diabetes mellitus (T2DM) on a daily basis, and we understand that untreated T2DM places patients at increased risk for adverse micro- and macrovascular outcomes. Prior to the 2008 ACCORD study, prospective epidemiological studies had noted a direct correlation between increased hemoglobin A1c values and increased risk of cardiovascular events. This correlation implied that treating T2DM to lower A1c levels would result in the reduction of cardiovascular risk. The ACCORD trial was the first large RCT to evaluate this specific hypothesis through comparison of events in two treatment groups – aggressive and less aggressive glucose management.

The trial enrolled patients with T2DM with A1c ≥ 7.5% and either age 40-79 with prior cardiovascular disease or age 55-79 with “anatomical evidence of significant atherosclerosis,” albuminuria, LVH, or ≥ 2 additional risk factors for cardiovascular disease (dyslipidemia, HTN, current smoker, or obesity). Notable exclusion criteria included “frequent or recent serious hypoglycemic events,” an unwillingness to inject insulin, BMI > 45, Cr > 1.5, or “other serious illness.” Patients were randomized to either intensive therapy targeting A1c < 6.0% or to standard therapy targeting A1c 7.0-7.9%. The primary outcome was a composite of first nonfatal MI, nonfatal stroke, or death from cardiovascular causes. Reported secondary outcomes included all-cause mortality, severe hypoglycemia, heart failure, motor vehicle accidents in which the patient was the driver, fluid retention, and weight gain.

10,251 patients were randomized. The average age was 62, the average duration of T2DM was 10 years, and the average A1c was 8.1%. Both groups lowered their median A1c quickly, and median A1c values of the two groups separated rapidly within the first four months. (See Figure 1.) The intensive-therapy group had more exposure to antihyperglycemics of all classes. (See Table 2.) Drugs were more frequently added, removed, or titrated in the intensive-therapy group (4.4 times per year versus 2.0 times per year in the standard-therapy group). At one year, the intensive-therapy group had a median A1c of 6.4% versus 7.5% in the standard-therapy group.

The primary outcome of MI/stroke/cardiovascular death occurred in 352 (6.9%) intensive-therapy patients versus 371 (7.2%) standard-therapy patients (HR 0.90, 95% CI 0.78-1.04, p = 0.16). The trial was stopped early at a mean follow-up of 3.5 years due to increased all-cause mortality in the intensive-therapy group. 257 (5.0%) of the intensive-therapy patients died, but only 203 (4.0%) of the standard-therapy patients died (HR 1.22, 95% CI 1.01-1.46, p = 0.04). For every 95 patients treated with intensive therapy for 3.5 years, one extra patient died. Death from cardiovascular causes was also increased in the intensive-therapy group (HR 1.35, 95% CI 1.04-1.76, p = 0.02). Regarding additional secondary outcomes, the intensive-therapy group had higher rates of hypoglycemia, weight gain, and fluid retention than the standard-therapy group. (See Table 3.) There were no group differences in rates of heart failure or motor vehicle accidents in which the patient was the driver.
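The “one extra death per 95 patients” figure is a number needed to harm (NNH), reproducible from the death counts above. Note that the per-group denominators used below (~5,128 and ~5,123 of the 10,251 randomized) are my assumption for illustration, not stated in this summary:

```python
# Number needed to harm (NNH) for all-cause death with intensive therapy.
# Deaths are as reported above; the per-group denominators are assumed
# (approximate split of the 10,251 randomized patients).
intensive_deaths, intensive_n = 257, 5128
standard_deaths, standard_n = 203, 5123

ari = intensive_deaths / intensive_n - standard_deaths / standard_n  # absolute risk increase
nnh = 1 / ari
print(f"ARI {ari:.2%}, NNH {nnh:.0f}")  # ARI 1.05%, NNH 95
```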

Intensive glucose control of T2DM increased all-cause mortality and did not alter the risk of cardiovascular events. This harm was previously unrecognized. The authors performed sensitivity analyses, including non-prespecified analyses such as group differences in use of drugs like rosiglitazone, and were unable to find an explanation for this increased mortality.

The target A1c level in T2DM remains a nuanced, patient-specific goal. Aggressive management may lead to improved microvascular outcomes, but it must be weighed against the risk of hypoglycemia. As summarized by UpToDate, while long-term data from the UKPDS suggest there may be a macrovascular benefit to aggressive glucose management early in the course of T2DM, the data from ACCORD suggest strongly that, in patients with longstanding T2DM and additional risk factors for cardiovascular disease, such management increases mortality.

The 2019 American Diabetes Association guidelines suggest that “a reasonable A1c goal for many nonpregnant adults is < 7%.” More stringent goals (< 6.5%) may be appropriate if they can be achieved without significant hypoglycemia or polypharmacy, and less stringent goals (< 8%) may be appropriate for patients “with a history of severe hypoglycemia, limited life expectancy, advanced microvascular or macrovascular complications…”

Of note, ACCORD also simultaneously cross-enrolled its patients in studies of intensive blood pressure management and adjunctive lipid management with fenofibrate. See this 2010 NIH press release and the links below for more information.

Further Reading/References:
1. ACCORD @ Wiki Journal Club
2. ACCORD @ 2 Minute Medicine
3. American Diabetes Association – “Glycemic Targets.” Diabetes Care (2019).
4. “Effect of intensive treatment of hyperglycaemia on microvascular outcomes in type 2 diabetes: an analysis of the ACCORD randomised trial.” Lancet (2010).

Summary by Duncan F. Moore, MD

Image Credit: Omstaal, CC BY-SA 4.0, via Wikimedia Commons

Week 37 – AFFIRM

“A Comparison of Rate Control and Rhythm Control in Patients with Atrial Fibrillation”

by the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) Investigators

N Engl J Med. 2002 Dec 5;347(23):1825-33. [free full text]

It seems like the majority of patients with atrial fibrillation that we encounter today in the inpatient setting are being treated with a rate-control strategy, as opposed to a rhythm-control strategy. There was a time when both approaches were considered acceptable, and perhaps rhythm control was even the preferred initial strategy. The AFFIRM trial was the landmark study to address this debate.

The trial randomized patients with atrial fibrillation (judged “likely to be recurrent”) aged 65 or older “or who had other risk factors for stroke or death” to either 1) a rhythm-control strategy with one or more drugs from a pre-specified list and/or cardioversion to achieve sinus rhythm or 2) a rate-control strategy with beta-blockers, CCBs, and/or digoxin to a target resting HR ≤ 80 and a six-minute walk test HR ≤ 110. The primary endpoint was death during follow-up. The major secondary endpoint was a composite of death, disabling stroke, disabling anoxic encephalopathy, major bleeding, and cardiac arrest.

4060 patients were randomized. Death occurred in 26.7% of rhythm-control patients versus 25.9% of rate-control patients (HR 1.15, 95% CI 0.99-1.34, p = 0.08). The composite secondary endpoint occurred in 32.0% of rhythm-control patients versus 32.7% of rate-control patients (p = 0.33). The rhythm-control strategy was associated with a higher risk of death among patients older than 65 and patients with CAD (see Figure 2). Additionally, rhythm-control patients were more likely to be hospitalized during follow-up (80.1% vs. 73.0%, p < 0.001) and to develop torsades de pointes (0.8% vs. 0.2%, p = 0.007).

This trial demonstrated that a rhythm-control strategy in atrial fibrillation offers no mortality benefit over a rate-control strategy. At the time of publication, the authors wrote that rate control was an “accepted, though often secondary alternative” to rhythm control. Their study clearly demonstrated that there was no significant mortality benefit to either strategy and that hospitalizations were more frequent in the rhythm-control group. In subgroup analysis, rhythm control led to higher mortality among the elderly and those with CAD. Notably, 37.5% of rhythm-control patients had crossed over to a rate-control strategy by 5 years of follow-up, whereas only 14.9% of rate-control patients had switched to rhythm control.

But what does this study mean for our practice today? Generally speaking, rate control is preferred in most patients, particularly the elderly and patients with CHF, whereas rhythm control may be pursued in patients with persistent symptoms despite rate control, patients unable to achieve rate control on AV nodal agents alone, and patients younger than 65. Both the AHA/ACC (2014) and the European Society of Cardiology (2016) guidelines have extensive recommendations that detail specific patient scenarios.

Further Reading / References:
1. Cardiologytrials.org
2. AFFIRM @ Wiki Journal Club
3. AFFIRM @ 2 Minute Medicine
4. Visual abstract @ Visualmed

Summary by Duncan F. Moore, MD

Image Credit: Drj, CC BY-SA 3.0, via Wikimedia Commons

Week 36 – CORTICUS

“Hydrocortisone Therapy for Patients with Septic Shock”

N Engl J Med. 2008 Jan 10;358(2):111-24. [free full text]

Steroid therapy in septic shock has been a hotly debated topic since the 1980s. The Annane trial in 2002 suggested a mortality benefit to early steroid therapy, and so for almost a decade this was standard of care. In 2008, the CORTICUS trial suggested otherwise.

The trial enrolled ICU patients with septic shock onset within the past 72 hrs (defined as SBP < 90 despite fluids or need for vasopressors, plus hypoperfusion or organ dysfunction from sepsis). Exclusion criteria included an “underlying disease with a poor prognosis,” life expectancy < 24 hrs, immunosuppression, and recent corticosteroid use. Patients were randomized to hydrocortisone 50 mg IV q6h x5 days plus taper or to placebo injections q6h x5 days plus taper. The primary outcome was 28-day mortality among patients who did not have a response to an ACTH stim test (cortisol rise ≤ 9 mcg/dL). Secondary outcomes included 28-day mortality in patients who had a positive response to the ACTH stim test, 28-day mortality in all patients, reversal of shock (defined as SBP ≥ 90 for at least 24 hrs without vasopressors) in all patients, and time to reversal of shock in all patients.

In ACTH non-responders (n = 233), intervention vs. control 28-day mortality was 39.2% vs. 36.1%, respectively (p = 0.69). In ACTH responders (n = 254), intervention vs. control 28-day mortality was 28.8% vs. 28.7%, respectively (p = 1.00). Reversal of shock occurred in 84.7% vs. 76.5% (p = 0.13). Among all patients, intervention vs. control 28-day mortality was 34.3% vs. 31.5% (p = 0.51), and reversal of shock occurred in 79.7% vs. 74.2% (p = 0.18). The time to reversal of shock was significantly shorter among patients receiving hydrocortisone (per Kaplan-Meier analysis, p < 0.001; see Figure 2), with median time to reversal of 3.3 days vs. 5.8 days (95% CI 5.2-6.9).

In conclusion, the CORTICUS trial demonstrated no mortality benefit of steroid therapy in septic shock regardless of a patient’s response to ACTH. Despite the lack of mortality benefit, it demonstrated an earlier resolution of shock with steroids. This lack of mortality benefit sharply contrasted with the previous Annane 2002 study. Several reasons have been posited for this difference including poor powering of the CORTICUS study (which did not reach the desired n = 800), inclusion starting within 72 hrs of septic shock vs. Annane starting within 8 hrs, and the overall sicker nature of Annane patients (who were all mechanically ventilated). Subsequent meta-analyses disagree about the mortality benefit of steroids, but meta-regression analyses suggest benefit among the sickest patients. All studies agree about the improvement in shock reversal. The 2016 Surviving Sepsis Campaign guidelines recommend IV hydrocortisone in septic shock in patients who continue to be hemodynamically unstable despite adequate fluid resuscitation and vasopressor therapy.

Per Drs. Sonti and Vinayak of the GUH MICU (excerpted from their excellent Georgetown Critical Care Top 40): “Practically, we use steroids when reaching for a second pressor or if there is multiorgan system dysfunction. Our liver patients may have deficient cortisol production due to inadequate precursor lipid production; use of corticosteroids in these patients represents physiologic replacement rather than adjunct supplement.”

The ANZICS collaborative group published the ADRENAL trial in NEJM in 2018 – which demonstrated that “among patients with septic shock undergoing mechanical ventilation, a continuous infusion of hydrocortisone did not result in lower 90-day mortality than placebo.” The authors did note “a more rapid resolution of shock and a lower incidence of blood transfusion” among patients receiving hydrocortisone. The folks at EmCrit argued that this was essentially a negative study, and thus in the existing context of CORTICUS, the results of the ADRENAL trial do not change our management of refractory septic shock.

Finally, the 2018 APPROCCHSS trial (also by Annane) evaluated the survival benefit of hydrocortisone plus fludrocortisone vs. placebo in patients with septic shock and found that this intervention reduced 90-day all-cause mortality. At this time, it is difficult to truly discern the added information of this trial given its timeframe, sample size, and severity of underlying illness. See the excellent discussion in the following links: Wiki Journal Club, PulmCrit, PulmCCM, and UpToDate.

References / Additional Reading:
1. CORTICUS @ Wiki Journal Club
2. CORTICUS @ 2 Minute Medicine
3. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock (2016), section “Corticosteroids”
4. Annane trial (2002) full text
5. PulmCCM, “Corticosteroids do help in sepsis: ADRENAL trial”
6. UpToDate, “Glucocorticoid therapy in septic shock”

Post by Gordon Pelegrin, MD

Image Credit: LHcheM, CC BY-SA 3.0, via Wikimedia Commons

Week 35 – POISE

“Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery: a randomised controlled trial”

aka the PeriOperative Ischemic Evaluation (POISE) trial

Lancet. 2008 May 31;371(9627):1839-47. [free full text]

Non-cardiac surgery is commonly associated with major cardiovascular complications. It has been hypothesized that perioperative beta blockade would reduce such events by attenuating the effects of the intraoperative increases in catecholamine levels. Prior to the 2008 POISE trial, small- and moderate-sized trials had revealed inconsistent results, alternately demonstrating benefit and non-benefit with perioperative beta blockade. The POISE trial was a large RCT designed to assess the benefit of extended-release metoprolol succinate (vs. placebo) in reducing major cardiovascular events in patients of elevated cardiovascular risk.

The trial enrolled patients age 45+ undergoing non-cardiac surgery with estimated LOS 24+ hrs and elevated risk of cardiac disease, meaning either: 1) hx of CAD, 2) peripheral vascular disease, 3) hospitalization for CHF within the past 3 years, 4) undergoing major vascular surgery, or 5) any three of the following seven risk criteria: undergoing intrathoracic or intraperitoneal surgery, hx CHF, hx TIA, hx DM, Cr > 2.0, age 70+, or undergoing urgent/emergent surgery.

Notable exclusion criteria: HR < 50, 2nd or 3rd degree heart block, asthma, already on beta blocker, prior intolerance of beta blocker, hx CABG within 5 years and no cardiac ischemia since

Intervention: metoprolol succinate (extended-release) 100mg PO starting 2-4 hrs before surgery, additional 100mg at 6-12 hrs postoperatively, followed by 200mg daily for 30 days.

Patients unable to take PO meds postoperatively were given metoprolol infusion.


Comparison: placebo PO / IV at same frequency as metoprolol arm

Primary – composite of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest at 30 days

Secondary (at 30 days)

      • cardiovascular death
      • non-fatal MI
      • non-fatal cardiac arrest
      • all-cause mortality
      • non-cardiovascular death
      • MI
      • cardiac revascularization
      • stroke
      • non-fatal stroke
      • CHF
      • new, clinically significant atrial fibrillation
      • clinically significant hypotension
      • clinically significant bradycardia


Pre-specified subgroup analyses of the primary outcome included RCRI, sex, type of surgery, and anesthesia type.

9298 patients were randomized. However, fraudulent activity was detected at participating sites in Iran and Colombia, and thus 947 patients from these sites were excluded from the final analyses. Ultimately, 4174 were randomized to the metoprolol group, and 4177 were randomized to the placebo group. There were no significant differences in baseline characteristics, pre-operative cardiac medications, surgery type, or anesthesia type between the two groups (see Table 1).

Regarding the primary outcome, metoprolol patients were less likely than placebo patients to experience the primary composite endpoint of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest (HR 0.84, 95% CI 0.70-0.99, p = 0.0399). See Figure 2A for the relevant Kaplan-Meier curve. Note that the curves separate distinctly within the first several days.

Regarding selected secondary outcomes (see Table 3 for full list), metoprolol patients were more likely to die from any cause (HR 1.33, 95% CI 1.03-1.74, p = 0.0317). See Figure 2D for the Kaplan-Meier curve for all-cause mortality. Note that the curves start to separate around day 10. Cause of death was analyzed, and the only group difference in attributable cause was an increased number of deaths due to sepsis or infection in the metoprolol group (data not shown). Metoprolol patients were more likely to sustain a stroke (HR 2.17, 95% CI 1.26-3.74, p = 0.0053) or a non-fatal stroke (HR 1.94, 95% CI 1.01-3.69, p = 0.0450). Of all patients who sustained a non-fatal stroke, only 15-20% made a full recovery. Metoprolol patients were less likely to sustain new-onset atrial fibrillation (HR 0.76, 95% CI 0.58-0.99, p = 0.0435) and less likely to sustain a non-fatal MI (HR 0.70, 95% CI 0.57-0.86, p = 0.0008). There were no group differences in risk of cardiovascular death or non-fatal cardiac arrest. Metoprolol patients were more likely to sustain clinically significant hypotension (HR 1.55, 95% CI 1.38-1.74, p < 0.0001) and clinically significant bradycardia (HR 2.74, 95% CI 2.19-3.43, p < 0.0001).

Subgroup analysis did not reveal any significant interaction with the primary outcome by RCRI, sex, type of surgery, or anesthesia type.

In patients with cardiovascular risk factors undergoing non-cardiac surgery, the perioperative initiation of beta blockade decreased the composite risk of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest and increased the overall mortality risk and risk of stroke.

This study affirms its central hypothesis – that blunting the catecholamine surge of surgery is beneficial from a cardiac standpoint. (Most patients in this study had an RCRI of 1 or 2.) However, the attendant increase in all-cause mortality is dramatic. The increased mortality is thought to result from delayed recognition of sepsis due to masking of tachycardia. Beta blockade may also limit the physiologic hemodynamic response necessary to successfully fight a serious infection. In retrospective analyses mentioned in the discussion, the investigators state that they cannot fully explain the increased risk of stroke in the metoprolol group. However, hypotension attributable to beta blockade explains about half of the increased number of strokes.

Overall, the authors conclude that “patients are unlikely to accept the risks associated with perioperative extended-release metoprolol.”

A major limitation of this study is the fact that 10% of enrolled patients were discarded in analysis due to fraudulent activity at selected investigation sites. In terms of generalizability, it is important to remember that POISE excluded patients who were already on beta blockers.

POISE is an important piece of evidence underpinning the 2014 ACC/AHA Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery, which includes the following recommendations regarding beta blockers:

      • Beta blocker therapy should not be started on the day of surgery (Class III – Harm, Level B)
      • Continue beta blockers in patients who are on beta blockers chronically (Class I, Level B)
      • In patients with intermediate- or high-risk preoperative tests, it may be reasonable to begin beta blockers
      • In patients with ≥ 3 RCRI risk factors, it may be reasonable to begin beta blockers before surgery
      • Initiating beta blockers in the perioperative setting as an approach to reduce perioperative risk is of uncertain benefit in those with a long-term indication but no other RCRI risk factors
      • It may be reasonable to begin perioperative beta blockers long enough in advance to assess safety and tolerability, preferably > 1 day before surgery

Further Reading/References:
1. POISE @ Wiki Journal Club
2. POISE @ 2 Minute Medicine
3. UpToDate, “Management of cardiac risk for noncardiac surgery”
4. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.

Image Credit: Mark Oniffrey, CC BY-SA 4.0, via Wikimedia Commons

Summary by Duncan F. Moore, MD

Week 34 – HACA

“Mild Therapeutic Hypothermia to Improve the Neurologic Outcome After Cardiac Arrest”

by the Hypothermia After Cardiac Arrest Study Group

N Engl J Med. 2002 Feb 21;346(8):549-56. [free full text]

Neurologic injury after cardiac arrest is a significant source of morbidity and mortality. It is hypothesized that brain reperfusion injury (via the generation of free radicals and other inflammatory mediators) following ischemic time is the primary pathophysiologic basis. Animal models and limited human studies have demonstrated that patients treated with mild hypothermia following cardiac arrest have improved neurologic outcome. The 2002 HACA study sought to evaluate prospectively the utility of therapeutic hypothermia in reducing neurologic sequelae and mortality post-arrest.

Population: European patients who achieved return of spontaneous circulation (ROSC) after presenting to the ED in cardiac arrest

inclusion criteria: witnessed arrest, ventricular fibrillation or non-perfusing ventricular tachycardia as initial rhythm, estimated interval 5 to 15 min from collapse to first resuscitation attempt, no more than 60 min from collapse to ROSC, age 18-75

pertinent exclusion criteria: pt already < 30°C on admission, comatose state prior to arrest due to CNS drugs, response to commands following ROSC

Intervention: Cooling to target temperature 32-34°C with maintenance for 24 hrs followed by passive rewarming. Patients received pancuronium for neuromuscular blockade to prevent shivering.

Comparison: Standard intensive care


Primary: a “favorable neurologic outcome” at 6 months defined as Pittsburgh cerebral-performance scale category 1 (good recovery) or 2 (moderate disability). (Of note, the examiner was blinded to treatment group allocation.)


Secondary:

        • all-cause mortality at 6 months
        • specific complications within the first 7 days: bleeding “of any severity,” pneumonia, sepsis, pancreatitis, renal failure, pulmonary edema, seizures, arrhythmias, and pressure sores

3,551 consecutive patients were assessed for enrollment, and ultimately 275 met inclusion criteria and were randomized. The normothermia group had higher baseline rates of DM and CAD and was more likely to have received bystander BLS prior to arrival in the ED.

Regarding neurologic outcome at 6 months, 75 of 136 patients (55%) in the hypothermia group had a favorable neurologic outcome, versus 54 of 137 (39%) in the normothermia group (RR 1.40, 95% CI 1.08-1.81, p = 0.009; NNT = 6). After adjustment for baseline characteristics, the RR increased slightly to 1.47 (95% CI 1.09-1.82).

Regarding death at 6 months, 41% of the hypothermia group had died, versus 55% of the normothermia group (RR 0.74, 95% CI 0.58-0.95, p = 0.02; NNT = 7). After adjustment for baseline characteristics, RR = 0.62 (95% CI 0.36-0.95). There was no difference between the two groups in the rate of any individual complication or in the total number of complications during the first 7 days.
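The headline effect sizes above can be spot-checked from the raw counts. Below is a minimal illustrative sketch in Python (unadjusted point estimates only; the trial's published NNT of 6 reflects rounding of the ~6.4 computed here):

```python
def rr_and_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Relative risk and number needed to treat from 2x2 trial counts."""
    risk_tx = events_tx / n_tx          # absolute risk, treatment arm
    risk_ctrl = events_ctrl / n_ctrl    # absolute risk, control arm
    rr = risk_tx / risk_ctrl
    nnt = 1 / abs(risk_tx - risk_ctrl)  # reciprocal of absolute risk difference
    return rr, nnt

# Favorable neurologic outcome at 6 months: 75/136 hypothermia vs. 54/137 normothermia
rr, nnt = rr_and_nnt(75, 136, 54, 137)
print(f"RR = {rr:.2f}, NNT = {nnt:.1f}")  # RR = 1.40, NNT = 6.4
```

The NNT is simply the reciprocal of the absolute risk difference between arms; rounding conventions (up vs. to the nearest integer) account for small discrepancies with published values.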

In ED patients with Vfib or pulseless VT arrest who did not have meaningful response to commands after ROSC, immediate therapeutic hypothermia reduced the rate of neurologic sequelae and mortality at 6 months.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “If after ROSC your patient remains unresponsive and does not have refractory hypoxemia/hypotension/coagulopathy, you should initiate therapeutic hypothermia even if the arrest was PEA. The benefit seen was substantial and any proposed biologic mechanism would seemingly apply to all causes of cardiac arrest. The investigators used pancuronium to prevent shivering; [at MGUH] there is a ‘shivering’ protocol in place and if refractory, paralytics can be used.”

This trial, as well as a concurrent publication by Bernard et al., ushered in a new paradigm of therapeutic hypothermia or “targeted temperature management” (TTM) following cardiac arrest. Numerous trials in related populations and with modified interventions (e.g. target temperature 36°C) were performed over the following decade and ultimately led to the current standard of practice.

Per UpToDate, the collective trial data suggest that “active control of the post-cardiac arrest patient’s core temperature, with a target between 32 and 36°C, followed by active avoidance of fever, is the optimal strategy to promote patient survival.” TTM should be undertaken in all patients who do not follow commands or have purposeful movements following ROSC. Expert opinion at UpToDate recommends maintaining temperature control for at least 48 hours.

Further Reading/References:
1. HACA @ 2 Minute Medicine
2. HACA @ Wiki Journal Club
3. Georgetown Critical Care Top 40, page 23 (Jan. 2016)
4. PulmCCM.org, “Hypothermia did not help after out-of-hospital cardiac arrest, in largest study yet”
5. Cochrane Review, “Hypothermia for neuroprotection in adults after cardiopulmonary resuscitation”
6. The NNT, “Mild Therapeutic Hypothermia for Neuroprotection Following CPR”
7. UpToDate, “Post-cardiac arrest management in adults”

Summary by Duncan F. Moore, MD

Image Credit: Sergey Pesterev, CC BY-SA 4.0, via Wikimedia Commons

Week 33 – ALLHAT

“Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic”

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)

JAMA. 2002 Dec 18;288(23):2981-97. [free full text]

Hypertension is a ubiquitous disease, and the cardiovascular and mortality benefits of BP control have been well described. However, as the number of available antihypertensive classes proliferated in the past several decades, a head-to-head comparison of different antihypertensive regimens was necessary to determine the optimal first-step therapy. The 2002 ALLHAT trial was a landmark trial in this effort.

The trial randomized 33,357 patients aged 55 years or older with hypertension and at least one additional coronary heart disease (CHD) risk factor: previous MI or stroke, LVH by ECG or echo, T2DM, current cigarette smoking, HDL < 35 mg/dL, or documentation of other atherosclerotic cardiovascular disease (CVD). Notable exclusion criteria: history of hospitalization for CHF, history of treated symptomatic CHF, or known LVEF < 35%.

Prior antihypertensives were discontinued upon initiation of the study drug. Patients were randomized to one of three study drugs in a double-blind fashion. Study drugs and additional drugs were added in a step-wise fashion to achieve a goal BP < 140/90 mmHg.

Step 1: titrate assigned study drug

      • chlorthalidone: 12.5 → 12.5 (sham titration) → 25 mg/day
      • amlodipine: 2.5 → 5 → 10 mg/day
      • lisinopril: 10 → 20 → 40 mg/day

Step 2: add open-label agents at treating physician’s discretion (atenolol, clonidine, or reserpine)

      • atenolol: 25 to 100 mg/day
      • reserpine: 0.05 to 0.2 mg/day
      • clonidine: 0.1 to 0.3 mg BID

Step 3: add hydralazine 25 to 100 mg BID

Outcomes were assessed via pairwise comparisons: chlorthalidone vs. amlodipine and chlorthalidone vs. lisinopril. A doxazosin arm existed initially, but it was terminated early due to an excess of CV events, primarily driven by CHF.

Primary – combined fatal CAD or nonfatal MI


Secondary outcomes included:

      • all-cause mortality
      • fatal and nonfatal stroke
      • combined CHD (primary outcome, PCI, or hospitalized angina)
      • combined CVD (CHD, stroke, non-hospitalized treated angina, CHF [fatal, hospitalized, or treated non-hospitalized], and PAD)

Over a mean follow-up period of 4.9 years, there was no difference between the groups in either the primary outcome or all-cause mortality.

When compared with chlorthalidone at 5 years, the amlodipine and lisinopril groups had significantly higher systolic blood pressures (by 0.8 mmHg and 2 mmHg, respectively), while the amlodipine group had a lower diastolic blood pressure than the chlorthalidone group (by 0.8 mmHg).

When comparing amlodipine to chlorthalidone for the pre-specified secondary outcomes, amlodipine was associated with an increased risk of heart failure (RR 1.38; 95% CI 1.25-1.52).

When comparing lisinopril to chlorthalidone for the pre-specified secondary outcomes, lisinopril was associated with an increased risk of stroke (RR 1.15; 95% CI 1.02-1.30), combined CVD (RR 1.10; 95% CI 1.05-1.16), and heart failure (RR 1.20; 95% CI 1.09-1.34). The increased risk of stroke was mostly driven by 3 subgroups: women (RR 1.22; 95% CI 1.01-1.46), blacks (RR 1.40; 95% CI 1.17-1.68), and non-diabetics (RR 1.23; 95% CI 1.05-1.44). The increased risk of CVD was statistically significant in all subgroups except in patients aged less than 65. The increased risk of heart failure was statistically significant in all subgroups.

In patients with hypertension and one risk factor for CAD, chlorthalidone, lisinopril, and amlodipine performed similarly in reducing the risks of fatal CAD and nonfatal MI.

The study has several strengths: a large and diverse study population, a randomized, double-blind structure, and the rigorous evaluation of three of the most commonly prescribed “newer” classes of antihypertensives. Unfortunately, neither an ARB nor an aldosterone antagonist was included in the study. Additionally, the step-up therapies were not reflective of contemporary practice. (Instead, patients would likely be prescribed one or more of the primary study drugs.)

The ALLHAT study is one of the hallmark studies of hypertension and has played an important role in hypertension guidelines since it was published. Following the publication of ALLHAT, thiazide diuretics became widely used as first-line drugs in the treatment of hypertension. The low cost of thiazides and their limited side-effect profile are particularly attractive class features. While ALLHAT looked specifically at chlorthalidone, in practice the positive findings were attributed to HCTZ, which has been more often prescribed. The authors of ALLHAT argued that the superiority of thiazides was likely a class effect, but according to the analysis at Wiki Journal Club, “there is little direct evidence that HCTZ specifically reduces the incidence of CVD among hypertensive individuals.” Furthermore, a 2006 study noted that HCTZ has worse 24-hour BP control than chlorthalidone due to a shorter half-life. The ALLHAT authors note that “since a large proportion of participants required more than 1 drug to control their BP, it is reasonable to infer that a diuretic be included in all multi-drug regimens, if possible.” The 2017 ACC/AHA High Blood Pressure Guidelines state that, of the four thiazide diuretics on the market, chlorthalidone is preferred because of its prolonged half-life and trial-proven reduction of CVD (via the ALLHAT study).

Further Reading / References:
1. 2017 ACC Hypertension Guidelines
2. ALLHAT @ Wiki Journal Club
3. 2 Minute Medicine
4. Ernst et al, “Comparative antihypertensive effects of hydrochlorothiazide and chlorthalidone on ambulatory and office blood pressure.” (2006)
5. Gillis Pharmaceuticals
6. Concepts in Hypertension, Volume 2 Issue 6

Summary by Ryan Commins MD

Image Credit: Kimivanil, CC BY-SA 4.0, via Wikimedia Commons

Week 32 – PneumA

“Comparison of 8 vs 15 Days of Antibiotic Therapy for Ventilator-Associated Pneumonia in Adults”

JAMA. 2003 Nov 19;290(19):2588-2598. [free full text]

Ventilator-associated pneumonia (VAP) is a frequent complication of mechanical ventilation and, prior to this study, few trials had addressed the optimal duration of antibiotic therapy in VAP. Thus, patients frequently received 14- to 21-day antibiotic courses. As antibiotic stewardship efforts increased and awareness grew of the association between prolonged antibiotic courses and the development of multidrug resistant (MDR) infections, more data were needed to clarify the optimal VAP treatment duration.

This 2003 trial by the PneumA Trial Group was the first large randomized trial to compare shorter (8-day) versus longer (15-day) treatment courses for VAP.

The noninferiority study, carried out in 51 French ICUs, enrolled intubated patients with clinical suspicion for VAP and randomized them to either 8 or 15 days of antimicrobials, with regimens chosen by the treating clinician. Of the 401 patients who met eligibility criteria, 197 were randomized to the 8-day regimen and 204 to the 15-day regimen. Study participants were blinded to randomization assignment until day 8, and analysis was performed using an intention-to-treat model. The primary outcomes were death from any cause at 28 days, antibiotic-free days, and microbiologically documented pulmonary infection recurrence.

Study findings demonstrated a similar 28-day mortality in both groups (18.8% mortality in 8-day group vs. 17.2% in 15-day group, group difference 90% CI -3.7% to 6.9%). The 8-day group did not develop more recurrent infections (28.9% in 8-day group vs. 26.0% in 15-day group, group difference 90% CI -3.2% to 9.1%). The 8-day group did have more antibiotic-free days when measured at the 28-day point (13.1 in 8-day group vs. 8.7 in 15-day group, p<0.001). A subgroup analysis did show that more 8-day-group patients who had an initial infection with lactose-nonfermenting GNRs developed a recurrent pulmonary infection, so noninferiority was not established in this specific subgroup (40.6% recurrent GNR infection in 8-day group vs. 25.4% in 15-day group, group difference 90% CI 3.9% to 26.6%).
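For intuition on how these noninferiority comparisons work, here is a rough sketch of a 90% confidence interval for the difference in recurrence rates, using a simple Wald approximation with the percentages and group sizes above. This is an illustration under stated assumptions, not a reproduction of the trial's analysis — the published interval (−3.2% to 9.1%) came from the trial's own prespecified method, so the numbers differ slightly:

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.645):
    """Wald confidence interval for a difference in proportions (z=1.645 -> 90% CI)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # standard error of the difference
    return diff, (diff - z * se, diff + z * se)

# Recurrent pulmonary infection: 28.9% of 197 (8-day) vs. 26.0% of 204 (15-day)
diff, (lo, hi) = diff_ci(0.289, 197, 0.260, 204)
print(f"difference = {diff:.1%}, 90% CI ({lo:.1%} to {hi:.1%})")
# difference = 2.9%, 90% CI (-4.4% to 10.2%)
```

The decision rule compares the interval's upper bound against a prespecified noninferiority margin; the sketch makes no claim about the margin the trial itself used.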

There is no benefit to prolonging VAP treatment to 15 days (except perhaps when Pseudomonas aeruginosa is suspected based on gram stain/culture data). Shorter courses of antibiotics for VAP treatment allow for less antibiotic exposure without increasing rates of recurrent infection or mortality.

The 2016 IDSA guidelines on VAP treatment recommend a 7-day course of antimicrobials for treatment of VAP (as opposed to a longer treatment course such as 8-15 days). These guidelines are based on the IDSA’s own large meta-analysis (of 10 randomized trials, including PneumA, as well as an observational study) which demonstrated that shorter courses of antibiotics (7 days) reduce antibiotic exposure and recurrent pneumonia due to MDR organisms without affecting clinical outcomes, such as mortality. Of note, this 7-day course recommendation also applies to treatment of lactose-nonfermenting GNRs, such as Pseudomonas.

When considering the PneumA trial within the context of the newest IDSA guidelines, we see that we now have over 15 years of evidence supporting the use of shorter VAP treatment courses.

Further Reading/References:
1. 2016 IDSA Guidelines for the Management of HAP/VAP
2. PneumA @ Wiki Journal Club
3. PulmCCM “IDSA Guidelines 2016: HAP, VAP & It’s the End of HCAP as We Know It (And I Feel Fine)”
4. PulmCrit “The siren’s call: Double-coverage for ventilator associated PNA”

Summary by Liz Novick, MD

Week 31 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific-antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

The trial enrolled men ages 55-74 (excluded if history of prostate, lung, or colorectal cancer, current cancer treatment, or > 1 PSA test in the past 3 years). Patients were randomized to annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years or to usual care. The primary outcome was the prostate-cancer-attributable death rate, and the secondary outcome was the incidence of prostate cancer.

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the usual-care group were between 40-50%. By seven years, there was no significant difference in rates of death attributable to prostate cancer. There were 50 deaths in the screening group and 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75–1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83–1.50). By seven years, there was a higher rate of prostate cancer detection in the screening group. 2820 patients were diagnosed in the screening group, but only 2322 were diagnosed in the usual-care group (rate ratio 1.22, 95% CI 1.16–1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11–1.22). Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.

In summary, yearly PSA screening increased the prostate cancer diagnosis rate but did not impact prostate-cancer mortality when compared to the standard of care. However, there were relatively high rates of PSA testing in the usual-care group (40-50%), and the authors cite this finding as a probable major contributor to the lack of mortality difference. Other factors that may have biased the trial toward a null result include prior PSA testing and advances in prostate cancer treatment during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment; this prior testing likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

      • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
      • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
      • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, PSA was screened every 4 years. The authors found an increased rate of detection of prostate cancer, but, more importantly, they found that screening decreased prostate cancer mortality (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving 1.7 screening visits over 9 years). Like PLCO, this study did not report treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the pitiful mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

In the following years, the pendulum has swung back partially toward screening. In May 2018, the USPSTF released new recommendations that encourage men ages 55-69 to have an informed discussion with their physician about potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in patients over 70 years old (Grade D).

Screening for prostate cancer remains a complex and controversial topic. Guidelines from the American Cancer Society, American Urological Association, and USPSTF vary, but ultimately all recommend shared decision-making. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, Screening for Prostate Cancer

Summary by Duncan F. Moore, MD

Image Credit: Otis Brawley, Public Domain, NIH National Cancer Institute Visuals Online