Week 44 – National Lung Screening Trial (NLST)

“Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”

by the National Lung Screening Trial (NLST) Research Team

N Engl J Med. 2011 Aug 4;365(5):395-409 [free full text]

Despite a reduction in smoking rates, lung cancer remains the leading cause of cancer death in the United States and worldwide. Earlier studies of plain chest radiography for lung cancer screening demonstrated no benefit, and in 2002 the National Lung Screening Trial (NLST) was undertaken to determine whether then-recent advances in CT technology could yield an effective lung cancer screening method.

The study enrolled adults age 55-74 with 30+ pack-years of smoking (former smokers must have quit within the past 15 years). Patients were randomized either to the intervention of three annual screenings for lung cancer with low-dose CT or to the control of three annual screenings with PA chest radiograph. The primary outcome was mortality from lung cancer. Notable secondary outcomes were all-cause mortality and the incidence of lung cancer.

53,454 patients were randomized, and both groups had similar baseline characteristics. The low-dose CT group sustained 247 deaths from lung cancer per 100,000 person-years, whereas the radiography group sustained 309 deaths per 100,000 person-years. This represents a 20.0% relative reduction in the rate of death from lung cancer in the CT group (95% CI 6.8-26.7%, p = 0.004). The number needed to screen with CT to prevent one lung cancer death was 320. There were 1877 deaths from any cause in the CT group and 2000 deaths in the radiography group, yielding a 6.7% relative reduction in death from any cause with CT screening (95% CI 1.2-13.6%, p = 0.02). The incidence of lung cancer was 645 per 100,000 person-years in the CT group and 572 per 100,000 person-years in the radiography group (RR 1.13, 95% CI 1.03-1.23).
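For readers who want to check the arithmetic, a minimal Python sketch of the relative-reduction calculation is below. This is my own illustration from the rates quoted above, not a reproduction of the trial's statistical methods.

```python
# Relative reduction in lung-cancer mortality from the reported rates
# (deaths per 100,000 person-years in each arm).
ct_rate = 247.0   # low-dose CT arm
cxr_rate = 309.0  # chest radiography arm

relative_reduction = (cxr_rate - ct_rate) / cxr_rate
print(f"Relative reduction in lung-cancer mortality: {relative_reduction:.0%}")
# -> 20%, matching the reported 20.0% (95% CI 6.8-26.7%).
# Note: the number needed to screen (320) is derived from cumulative deaths
# per participant over the trial's follow-up, not directly from these rates.
```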

Lung cancer screening with low-dose CT in high-risk patients provides a significant mortality benefit. The trial was stopped early because the mortality benefit was so pronounced. The benefit was driven by the reduction in deaths attributed to lung cancer, and when deaths from lung cancer were excluded from the overall mortality analysis, there was no significant difference between the two arms. Largely on the basis of this study, the 2013 USPSTF guidelines for lung cancer screening recommend annual low-dose CT in patients who meet NLST inclusion criteria. However, it must be noted that, even in the “ideal” circumstances of this trial performed at experienced centers, 96% of abnormal CT screening results were false positives. Of all positive results, 11% led to invasive studies.

Per UpToDate, since NLST, several European low-dose CT screening trials have been published. However, all but one (NELSON) appear to be underpowered to demonstrate a possible mortality reduction. Meta-analysis of all such RCTs could allow for further refinement in risk stratification, frequency of screening, and management of positive screening findings.

No randomized trial has ever demonstrated a mortality benefit of plain chest radiography for lung cancer screening. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial tested this modality vs. “community care,” and because the PLCO trial was ongoing at the time the NLST was designed, the NLST authors chose to compare their intervention (CT) to plain chest radiography in case the PLCO results for chest radiography turned out to be positive. Ultimately, they were not.

Further Reading:
1. USPSTF Guidelines for Lung Cancer Screening (2013)
2. NLST @ ClinicalTrials.gov
3. NLST @ Wiki Journal Club
4. NLST @ 2 Minute Medicine
5. UpToDate, “Screening for lung cancer”

Summary by Duncan F. Moore, MD

Image Credit: Yale Rosen, CC BY-SA 2.0, via Wikimedia Commons

Week 43 – FREEDOM

“Strategies for Multivessel Revascularization in Patients with Diabetes”

by the FREEDOM (Future Revascularization Evaluation in Patients with Diabetes Mellitus: Optimal Management of Multivessel Disease) Trial investigators

N Engl J Med. 2012 Dec 20;367(25):2375-84. [free full text]

Previous studies, such as the 1996 BARI trial, demonstrated that patients with multivessel coronary artery disease (CAD) and diabetes mellitus (DM) who received coronary artery bypass grafting (CABG) lived longer than patients undergoing balloon angioplasty. However, since that publication, percutaneous coronary intervention (PCI) technology has advanced significantly. Prior to the publication of FREEDOM in 2012, there had only been small, underpowered studies comparing PCI with drug-eluting stents (DES) to CABG. FREEDOM was appropriately powered to detect superiority of one revascularization strategy (PCI with DES vs. CABG) over the other in patients with DM and multivessel CAD.

Population:

Inclusion criteria:

      • 18 years or older
      • Diabetes mellitus – defined by American Diabetes Association
      • Multivessel Coronary Artery Disease
        • > 70% stenosis (angiographically confirmed)
        • 2 or more epicardial vessels
        • 2 or more coronary-artery territories

Selected exclusion criteria:

      • NYHA Class III-IV heart failure
      • Prior CABG, valve surgery, or PCI (< 6 months)
      • Prior significant bleed (< 6 months)
      • Left main stenosis ≥ 50%

 

Design:
Patients meeting criteria were randomized 1:1 to PCI or CABG. In the PCI group, 51% received a first-generation paclitaxel-eluting stent and 43% a sirolimus-eluting stent, and patients were placed on aspirin and clopidogrel for dual antiplatelet therapy (DAPT) for at least 12 months. In the CABG group, arterial revascularization was encouraged. The mean SYNTAX score (a tool used to grade the complexity of CAD) was 26.2 and did not differ significantly between groups. Guideline-driven targets for medical risk-factor control were used: LDL < 70 mg/dL, BP < 130/80 mmHg, HgbA1c < 7%. Minimum follow-up was 2 years.


Outcomes:

Primary: Composite of death from any cause, non-fatal myocardial infarction (MI), and non-fatal stroke

Secondary

      1. Rate of major adverse cardiovascular and cerebrovascular events at 30 days and 12 months
      2. Repeat revascularization
      3. Annual all-cause mortality
      4. Annual cardiovascular mortality


Results:
953 patients and 947 patients were randomized to the PCI and CABG groups, respectively. At 5 years, the primary outcome (composite of death, MI, or stroke) occurred in 200 patients in the PCI group and 146 in the CABG group (26.6% vs. 18.7%, p = 0.005). The curves began to diverge at 2 years. All-cause mortality was higher in the PCI group than in the CABG group (16.3% vs. 10.9%, p = 0.049). Regarding secondary outcomes, 13.9% of patients in the PCI group had an MI versus 6.0% in the CABG group (p < 0.001). There were fewer strokes in the PCI group than in the CABG group (2.4% vs. 5.2%, p = 0.03). There was no statistically significant difference between groups in cardiovascular death (10.9% vs. 6.8%, p = 0.12).
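The absolute risk reduction and number needed to treat implied by the 5-year primary-outcome rates are not stated above, so here is a brief derived calculation (my arithmetic, not a figure reported by the trial).

```python
# Derived from the 5-year primary-outcome rates quoted above (26.6% PCI vs.
# 18.7% CABG). The ARR and NNT are my arithmetic, not values from the paper.
pci_rate, cabg_rate = 0.266, 0.187
arr = pci_rate - cabg_rate   # absolute risk reduction favoring CABG
nnt = 1 / arr                # patients treated with CABG (vs. PCI) to prevent
                             # one death/MI/stroke at 5 years
print(f"ARR = {arr:.1%}, NNT ≈ {round(nnt)}")   # ARR = 7.9%, NNT ≈ 13
```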

At 5 years, the analysis of outcomes according to category of SYNTAX score (≤ 22, 23 to 32, ≥ 33) showed no significant subgroup interaction (p = 0.58).

Regarding safety, major bleeding at 30 days was 0.02% for PCI vs. 0.04% for CABG (p = 0.13). Acute renal failure requiring hemodialysis occurred in one patient in the PCI group and eight patients in the CABG group (p = 0.02).

Implication/Discussion:
The BARI trial (1996) was the first to show that patients with DM and multivessel CAD derive a mortality benefit from bypass grafting over PCI with balloon angioplasty. Furthermore, the BARI 2D trial (2009) demonstrated this benefit of bypass grafting over PCI with bare-metal stents (BMS). At the time of the FREEDOM trial, there had been no adequately powered randomized comparison of CABG versus PCI with first-generation paclitaxel- or sirolimus-eluting stents. In this study, CABG showed a 5.3% absolute reduction in all-cause mortality over PCI, as well as decreased rates of MI and repeat revascularization. CABG was associated with a mild absolute increase in stroke (2.8%); however, this mild increase in stroke risk is consistent with most other comparative trials of the two treatment strategies. There was no statistical difference in major bleeding between the two groups.

CABG is likely better than PCI for several reasons. Diabetic arteries tend to have more diffuse and extensive atherosclerotic disease than those of patients without diabetes, so the likelihood of successful PCI alone is lower. Many suspected that with advancements in PCI (e.g. DES) the BARI data would become irrelevant. However, CABG continued to show benefit despite the technological advancements of drug-eluting stents and PCI. Improvements in surgical technique as well as the use of arterial revascularization (e.g. the internal mammary artery) helped maintain superior outcomes with CABG compared to PCI.

The study was limited by the fact that, owing to low numbers, the subgroup analyses (e.g. by SYNTAX score) were not adequately powered. Further, the study was not blinded, and patients may have been treated differently on the basis of their assigned procedure. Also, there was variability in SYNTAX scores between the study groups, but this circumstance was thought to reflect real-world heterogeneity.

Bottom Line:
CABG was superior to PCI with DES in patients with DM and multivessel CAD in that it significantly reduced rates of death and MI despite a small increased risk of stroke.

Further Reading/References:
1. BARI Trial @ NEJM
2. BARI 2D Trial @ NEJM
3. ACCF/AHA 2011 Guideline for Coronary Artery Bypass Graft Surgery
4. FREEDOM @ Wiki Journal Club
5. FREEDOM @ 2 Minute Medicine
6. FREEDOM @ Visualmed

Summary by Patrick Miller, MD.

Image Credit: Jerry Hecht, US Public Domain, via Wikimedia Commons

Week 42 – BeSt

“Clinical and Radiographic Outcomes of Four Different Treatment Strategies in Patients with Early Rheumatoid Arthritis (the BeSt Study).”

Arthritis & Rheumatism. 2005 Nov;52(11):3381-3390. [free full text]

Rheumatoid arthritis (RA) is among the most prevalent of the rheumatic diseases with a lifetime prevalence of 3.6% in women and 1.7% in men [1]. It is a chronic, systemic, inflammatory autoimmune disease of variable clinical course that can severely impact physical functional status and even mortality. Over the past 30 years, as the armamentarium of therapies for RA has exploded, there has been increased debate about the ideal initial therapy. The BeSt (Dutch: Behandel-Strategieën “treatment strategies”) trial was designed to compare, according to the authors, four of “the most frequently used and discussed strategies.” Regimens incorporating traditional disease-modifying antirheumatic drugs (DMARDs), such as methotrexate, and newer therapies, such as TNF-alpha inhibitors, were compared directly.

The trial enrolled 508 DMARD-naïve patients with early rheumatoid arthritis. Pertinent exclusion criteria included history of cancer and pre-existing laboratory abnormalities or comorbidities (e.g. elevated creatinine or ALT, alcohol abuse, pregnancy or desire to conceive, etc.) that would preclude the use of various DMARDs. Patients were randomized to one of four treatment groups. Within each regimen, the Disease Activity Score in 44 joints (DAS-44) was assessed q3 months, and, if > 2.4, the medication regimen was uptitrated to the next step within the treatment group.

Four Treatment Groups

  1. Sequential monotherapy: methotrexate (MTX) 15mg/week, uptitrated PRN to 25-30mg/week. If insufficient control, the following sequence was pursued: sulfasalazine (SSZ) monotherapy, leflunomide monotherapy, MTX + infliximab, gold with methylprednisolone, MTX + cyclosporin A (CSA) + prednisone
  2. Step-up combination therapy: MTX 15mg/week, uptitrated PRN to 25-30mg/week. If insufficient control, SSZ was added, followed by hydroxychloroquine (HCQ), followed by prednisone. If patients failed to respond to those four drugs, they were switched to MTX + infliximab, then MTX + CSA + prednisone, and finally to leflunomide.
  3. Initial combination therapy with tapered high-dose prednisone: MTX 7.5 mg/week + SSZ 2000 mg/day + prednisone 60mg/day (tapered in 7 weeks to 7.5 mg/day). If insufficient control, MTX was uptitrated to 25-30 mg/week. Next, combination would be switched to MTX + CSA + prednisone, then MTX + infliximab, then leflunomide monotherapy, gold with methylprednisolone, and finally azathioprine with prednisone.
  4. Initial combination therapy with infliximab: MTX 25-30 mg/week + infliximab 3 mg/kg at weeks 0, 2, 6, and q8 weeks thereafter. There was a protocol for infliximab-dose uptitration starting at 3 months. If insufficient control on MTX and infliximab 10 mg/kg, patients were switched to SSZ, then leflunomide, then MTX + CSA + prednisone, then gold + methylprednisolone, and finally AZA with prednisone.

Once clinical response was adequate for at least 6 months, there was a protocol for tapering the drug regimen.
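As an illustration of how this treat-to-target protocol operates, here is a simplified Python sketch using group 2 (step-up combination therapy). The step sequence is taken from the list above and the escalation threshold (DAS-44 > 2.4, assessed every 3 months) and 6-month response requirement come from the preceding paragraphs; the taper is modeled crudely as stepping back down the sequence, which simplifies the trial's actual tapering protocol.

```python
# Simplified sketch of the BeSt treat-to-target loop for group 2 (step-up
# combination therapy). DAS-44 was measured every 3 months; therapy was
# escalated while DAS-44 > 2.4 and tapered after >= 6 months of adequate
# response. The taper here is a crude simplification of the trial protocol.
STEPS_GROUP_2 = [
    "MTX 15 mg/week",
    "MTX 25-30 mg/week",
    "MTX + SSZ",
    "MTX + SSZ + HCQ",
    "MTX + SSZ + HCQ + prednisone",
    "MTX + infliximab",
    "MTX + CSA + prednisone",
    "leflunomide",
]

def next_regimen(step: int, das44: float, months_of_response: int) -> int:
    """Return the index of the regimen to use for the next 3-month interval."""
    if das44 > 2.4:                               # insufficient control: escalate
        return min(step + 1, len(STEPS_GROUP_2) - 1)
    if months_of_response >= 6:                   # sustained response: taper
        return max(step - 1, 0)
    return step                                   # adequate control: continue

print(next_regimen(step=1, das44=3.1, months_of_response=0))  # 2 -> "MTX + SSZ"
```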

The primary endpoints were: 1) functional ability per the Dutch version of the Health Assessment Questionnaire (D-HAQ), collected by a blinded research nurse q3 months and 2) radiographic joint damage per the modified Sharp/Van der Heijde score (SHS). Pertinent secondary outcomes included DAS-44 score and laboratory evidence of treatment toxicity.

At randomization, enrolled RA patients had a median duration of symptoms of 23 weeks and median duration since diagnosis of RA of 2 weeks. Mean DAS-44 was 4.4 ± 0.9. 72% of patients had erosive disease. Mean D-HAQ score at 3 months was 1.0 in groups 1 and 2 and 0.6 in groups 3 and 4 (p < 0.001 for groups 1 and 2 vs. groups 3 and 4; other pairwise comparisons not significant). Mean D-HAQ at 1 year was 0.7 in groups 1 and 2 and 0.5 in groups 3 and 4 (p = 0.010 for group 1 vs. group 3, p = 0.003 for group 1 vs. group 4; other pairwise comparisons not significant). At 1 year, patients in group 3 or 4 had less radiographic progression in joint damage per SHS than patients in group 1 or 2. Median increases in SHS were 2.0, 2.5, 1.0, and 0.5 in groups 1-4, respectively (p = 0.003 for group 1 vs. group 3, p < 0.001 for group 1 vs. group 4, p = 0.007 for group 2 vs. group 3, p < 0.001 for group 2 vs. group 4). Regarding DAS-44 score: low disease activity (DAS-44 ≤ 2.4) at 1 year was reached in 53%, 64%, 71%, and 74% of groups 1-4, respectively (p = 0.004 for group 1 vs. group 3, p = 0.001 for group 1 vs. group 4, p not significant for other comparisons). There were no group differences in prevalence of adverse effects.

Overall, among patients with early RA, initial combination therapy that included either prednisone (group 3) or infliximab (group 4) resulted in better functional and radiographic improvement than did initial therapy with sequential monotherapy (group 1) or step-up combination therapy (group 2). In the discussion, the authors note that given the treatment group differences in radiographic progression of disease, “starting therapy with a single DMARD would be a missed opportunity in a considerable number of patients.” Contemporary commentary by Weisman notes that “the authors describe both an argument and a counterargument arising from their observations: aggressive treatment with combinations of expensive drugs would ‘overtreat’ a large proportion of patients, yet early suppression of disease activity may have an important influence on subsequent long‐term disability and damage.”

Fourteen years later, it is a bit difficult to place the specific results of this trial in our current practice. Its trial design is absolutely byzantine and compares the 1-year experience of a variety of complex protocols that theoretically have substantial eventual overlap. Furthermore, it is difficult to assess whether the relatively small group differences in symptom (D-HAQ) and radiographic (SHS) scales were truly clinically significant even if they were statistically significant. The American College of Rheumatology 2015 Guideline for the Treatment of Rheumatoid Arthritis synthesized the immense body of literature that came before and after the BeSt study and ultimately gave a variety of conditional statements about the “best practice” treatment of symptomatic early RA. (See Table 2 on page 8.) The recommendations emphasized DMARD monotherapy as the initial strategy but in the specific setting of a treat-to-target strategy. They also recommended escalation to combination DMARDs or biologics in patients with moderate or high disease activity despite DMARD monotherapy.

References / Additional Reading:
1. “The lifetime risk of adult-onset rheumatoid arthritis and other inflammatory autoimmune rheumatic diseases.” Arthritis Rheum. 2011 Mar;63(3):633-9. [https://www.ncbi.nlm.nih.gov/pubmed/21360492]
2. BeSt @ Wiki Journal Club
3. “Progress toward the cure of rheumatoid arthritis? The BeSt study.” Arthritis Rheum. 2005 Nov;52(11):3326-32.
4. “Review: treat to target in rheumatoid arthritis: fact, fiction, or hypothesis?” Arthritis Rheumatol. 2014 Apr;66(4):775-82. [https://www.ncbi.nlm.nih.gov/pubmed/24757129]
5. “2015 American College of Rheumatology Guideline for the Treatment of Rheumatoid Arthritis” Arthritis Rheumatol. 2016 Jan;68(1):1-26
6. RheumDAS calculator

Summary by Duncan F. Moore, MD

Image Credit: Braegel, CC BY 3.0, via Wikimedia Commons

Week 41 – Transfusion Strategies for Upper GI Bleeding

“Transfusion Strategies for Acute Upper Gastrointestinal Bleeding”

N Engl J Med. 2013 Jan 3;368(1):11-21. [free full text]

A restrictive transfusion threshold of 7 gm/dL was established following the previously discussed 1999 TRICC trial. Notably, both TRICC and its derivative study TRISS excluded patients with active bleeding. In 2013, Villanueva et al. performed a study to establish whether a restrictive transfusion strategy benefits patients with acute upper GI bleeding.

The study enrolled consecutive adults presenting to a single center in Spain with hematemesis (or bloody nasogastric aspirate), melena, or both. Notable exclusion criteria included: a clinical Rockall score* of 0 with a hemoglobin level higher than 12 gm/dL, massive exsanguinating bleeding, lower GIB, patient refusal of blood transfusion, ACS, stroke/TIA, transfusion within 90 days, and recent trauma or surgery.

*The Rockall score is a system to assess risk for further bleeding or death on a scale from 0-11. Higher scores (3-11) indicate higher risk. Of the 648 patients excluded, the most common reason for exclusion (n = 329) was low risk of bleeding.

Intervention: restrictive transfusion strategy (transfusion threshold Hgb = 7.0 gm/dL) [n = 444]

Comparison: liberal transfusion strategy (transfusion threshold Hgb = 9.0 gm/dL) [n = 445]

During randomization, patients were stratified by presence or absence of cirrhosis.

As part of the study design, all patients underwent emergent EGD within 6 hours and received relevant hemostatic intervention depending on the cause of bleeding.
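A minimal sketch of the two arms' transfusion triggers follows. The thresholds come from the study arms above; the "transfuse, then recheck hemoglobin" behavior noted in the comments is my assumption for illustration, not a verbatim restatement of the study procedures.

```python
# Minimal sketch of the two arms' transfusion triggers (thresholds from the
# study description above). In practice each transfusion decision would be
# followed by a hemoglobin recheck; that loop is assumed, not quoted.
THRESHOLD_GM_PER_DL = {"restrictive": 7.0, "liberal": 9.0}

def should_transfuse(strategy: str, hgb_gm_per_dl: float) -> bool:
    """Return True if a unit of packed red cells is indicated right now."""
    return hgb_gm_per_dl < THRESHOLD_GM_PER_DL[strategy]

# Example: with Hgb 8.2 gm/dL, the liberal arm transfuses while the
# restrictive arm observes.
print(should_transfuse("restrictive", 8.2), should_transfuse("liberal", 8.2))
```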

 

Outcome:
Primary outcome: 45-day mortality

Secondary outcomes, selected:

      • Incidence of further bleeding associated with hemodynamic instability or hemoglobin drop > 2 gm/dL in 6 hours
      • Incidence and number of RBC transfusions
      • Other products and fluids transfused
      • Hgb level at nadir, discharge, and 45 days

Subgroup analyses: Patients were stratified by presence of cirrhosis and corresponding Child-Pugh class, variceal bleeding, and peptic ulcer bleeding. An additional subgroup analysis was performed to evaluate changes in hepatic venous pressure gradient between the two strategies.

Results:
The primary outcome of 45-day mortality was lower in the restrictive strategy (5% vs. 9%; HR 0.55, 95% CI 0.33-0.92; p = 0.02; NNT = 24.8). In subgroup analysis, this finding remained consistent for patients who had Child-Pugh class A or B but was not statistically significant among patients who had Class C. Further stratification for variceal bleeding and peptic ulcer disease did not make a difference in mortality.

Secondary outcomes:
Rates of further bleeding events and RBC transfusion, as well as the number of products transfused, were lower with the restrictive strategy. Subgroup analysis demonstrated that rates of re-bleeding were lower in Child-Pugh class A and B but not in class C. As expected, the restrictive strategy also resulted in lower hemoglobin levels at 24 hours. Hemoglobin levels in the restrictive-strategy group were lower at discharge but did not differ significantly from the liberal-strategy group at 45 days. There was no group difference in the amount of non-RBC blood products or colloid/crystalloid transfused. Patients in the restrictive-strategy group experienced fewer adverse events, particularly transfusion reactions such as transfusion-associated circulatory overload and cardiac complications. Patients in the liberal-strategy group had significant increases in mean hepatic venous pressure gradient following transfusion; such increases were not seen in the restrictive-strategy patients.

Implication/Discussion:
In patients with acute upper GI bleeds, a restrictive strategy with a transfusion threshold 7 gm/dL reduces 45-day mortality, the rate and frequency of transfusions, and the rate of adverse reactions, relative to a liberal strategy with a transfusion threshold of 9 gm/dL.

In their discussion, the authors hypothesize that the “harmful effects of transfusion may be related to an impairment of hemostasis. Transfusion may counteract the splanchnic vasoconstrictive response caused by hypovolemia, inducing an increase in splanchnic blood flow and pressure that may impair the formation of clots. Transfusion may also induce abnormalities in coagulation properties.”

Subgroup analysis suggests that the benefit of the restrictive strategy is less pronounced in patients with more severe hepatic dysfunction. These findings align with prior studies in transfusion thresholds for critically ill patients. However, the authors note that the results conflict with studies in other clinical circumstances, specifically in the pediatric ICU and in hip surgery for high-risk patients.

There are several limitations to this study. First, its exclusion criteria limit its generalizability. Excluding patients with massive exsanguination is understandable given the lack of clinical equipoise; however, this choice leaves considerable discretion in defining a massive bleed. (Note that those excluded due to exsanguination comprised only 39 of the 648 excluded patients.) Lack of blinding was a second limitation, though potential bias was mitigated by well-defined transfusion protocols. Additionally, there was a higher incidence of transfusion-protocol violations in the restrictive group, which probably biased results toward the null. Overall, deviations from the protocol occurred in fewer than 10% of cases.

Further Reading/References:
1. Transfusion Strategies for Acute Upper GI Bleeding @ Wiki Journal Club
2. Transfusion Strategies for Acute Upper GI Bleeding @ 2 Minute Medicine
3. TRISS @ Wiki Journal Club

Summary by Gordon Pelegrin, MD

Image Credit: Jeremias, CC BY-SA 3.0, via Wikimedia Commons

Week 40 – PROSEVA

Prone Positioning in Severe Acute Respiratory Distress Syndrome
by the PROSEVA Study Group

N Engl J Med. 2013 June 6; 368(23):2159-2168 [free full text]

Prone positioning had been used for many years in ICU patients with ARDS in order to improve oxygenation. Per Dr. Sonti’s Georgetown Critical Care Top 40, the physiologic basis for the benefit of proning is that atelectasis typically occurs in the most dependent regions of the lungs in ARDS, with hyperinflation affecting the remaining lung. Periodically reversing these regions by moving the patient between supine and prone positions ensures that no one region of the lung has extended exposure to either atelectasis or overdistention. Although the oxygenation benefits had long been noted, the PROSEVA trial established a mortality benefit.

Study patients were selected from 26 ICUs in France and 1 in Spain, all of which had used prone positioning in daily practice for at least 5 years. Inclusion criteria: patients intubated and ventilated < 36 hours with severe ARDS (defined as PaO2:FiO2 ratio < 150, PEEP > 5, and TV of about 6 ml/kg of predicted body weight). (NB: by the Berlin definition, severe ARDS is defined as a PaO2:FiO2 ratio < 100.) Patients were randomized either to the intervention of proning within 36 hours of mechanical ventilation for at least 16 consecutive hours (n = 237) or to the control of being left in a semirecumbent (supine) position (n = 229). The primary outcome was mortality at day 28. Secondary outcomes included mortality at day 90, rate of successful extubation (no reintubation or use of noninvasive ventilation for 48 hours), time to successful extubation, length of stay in the ICU, complications, use of noninvasive ventilation, tracheotomy rate, number of days free from organ dysfunction, ventilator settings, ABG measurements, and respiratory-system mechanics during the first week after randomization.
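As a compact restatement of the entry criteria just described, here is an illustrative eligibility check. The parameter names are mine, and the actual protocol included confirmation steps and co-interventions not captured here.

```python
# Illustrative check of PROSEVA-style severe-ARDS eligibility using the
# thresholds quoted above. Parameter names are invented for illustration;
# the actual protocol included additional confirmation steps.
def proseva_eligible(pf_ratio: float, peep_cm_h2o: float,
                     tidal_volume_ml_per_kg_pbw: float,
                     hours_on_ventilator: float) -> bool:
    return (pf_ratio < 150
            and peep_cm_h2o > 5
            and abs(tidal_volume_ml_per_kg_pbw - 6.0) <= 0.5  # "about 6 ml/kg"
            and hours_on_ventilator < 36)

print(proseva_eligible(pf_ratio=120, peep_cm_h2o=8,
                       tidal_volume_ml_per_kg_pbw=6.0,
                       hours_on_ventilator=20))   # True
```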

At randomization, most characteristics were similar between the two groups, although the authors noted differences in the SOFA score and in the use of neuromuscular blockers and vasopressors. The supine group had a higher baseline SOFA score, indicating more severe organ failure, and a higher rate of vasopressor use. The prone group had a higher rate of neuromuscular blockade use. The primary outcome of 28-day mortality was significantly lower in the prone group than in the supine group (16.0% vs. 32.8%, p < 0.001, NNT = 6.0). This mortality reduction remained statistically significant when adjusted for the SOFA score. Secondary outcomes were notable for a significantly higher rate of successful extubation in the prone group (hazard ratio 0.45, 95% CI 0.29-0.70, p < 0.001). Additionally, the PaO2:FiO2 ratio was significantly higher in the supine group, whereas the PEEP and FiO2 were significantly lower. The remainder of the secondary outcomes were statistically similar between groups.

PROSEVA showed a significant mortality benefit with early use of prone positioning in severe ARDS. This mortality benefit was considerably larger than that seen in past meta-analyses, likely because this study selected specifically for patients with severe disease and specified longer prone-positioning sessions than prior studies. Critics have noted the unexpected difference in baseline characteristics between the two arms. While these critiques are reasonable, the authors mitigated at least some of these concerns by adjusting the mortality analysis for the statistically significant differences. With such a dramatic mortality benefit, it might be surprising that more patients are not proned at our institution. One reason is that relatively few of our patients have severe ARDS. Additionally, proning places a high demand on resources and requires a coordinated effort of multiple staff members. All treatment centers in this study had specially trained staff who had performed proning on a daily basis for at least 5 years and thus were very familiar with the process. With this in mind, we consider the use of proning in patients who meet criteria for severe ARDS.

References and further reading:
1. PROSEVA @ 2 Minute Medicine
2. PROSEVA @ Wiki Journal Club
3. PROSEVA @ Georgetown Critical Care Top 40, pages 8-9
4. Life in the Fastlane, Critical Care Compendium, “Prone Position and Mechanical Ventilation”
5. PulmCCM.org, “ICU Physiology in 1000 Words: The Hemodynamics of Prone”

Summary by Gordon Pelegrin, MD

Image Credit: by James Heilman, MD, CC BY-SA 3.0, via Wikimedia Commons

Week 39 – POISE

“Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery: a randomised controlled trial”

Lancet. 2008 May 31;371(9627):1839-47. [free full text]

Non-cardiac surgery is commonly associated with major cardiovascular complications. It has been hypothesized that perioperative beta blockade would reduce such events by attenuating the effects of the intraoperative increases in catecholamine levels. Prior to the 2008 POISE trial, small- and moderate-sized trials had revealed inconsistent results, alternately demonstrating benefit and non-benefit with perioperative beta blockade. The POISE trial was a large RCT designed to assess the benefit of extended-release metoprolol succinate (vs. placebo) in reducing major cardiovascular events in patients of elevated cardiovascular risk.

The trial enrolled patients age 45+ undergoing non-cardiac surgery with an estimated length of stay of 24+ hours and elevated cardiac risk, defined as any of the following: 1) hx of CAD, 2) peripheral vascular disease, 3) hospitalization for CHF within the past 3 years, 4) undergoing major vascular surgery, or 5) any three of the following seven risk criteria: undergoing intrathoracic or intraperitoneal surgery, hx CHF, hx TIA, hx DM, Cr > 2.0, age 70+, or undergoing urgent/emergent surgery.
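The eligibility logic above reduces to "any one of four standalone criteria, or any three of seven minor criteria." A small illustrative sketch follows; the dictionary keys are invented for illustration.

```python
# Illustrative sketch of the POISE "elevated cardiac risk" entry rule above:
# any one of four standalone criteria, or any three of seven minor criteria.
# Dictionary keys are invented for illustration.
def poise_elevated_risk(standalone: dict, minor: dict) -> bool:
    # standalone, e.g.: hx_cad, peripheral_vascular_disease,
    #                   chf_hospitalization_within_3y, major_vascular_surgery
    # minor, e.g.:      intrathoracic_or_intraperitoneal_surgery, hx_chf,
    #                   hx_tia, hx_dm, cr_over_2, age_70_plus, urgent_surgery
    return any(standalone.values()) or sum(minor.values()) >= 3

print(poise_elevated_risk(
    standalone={"hx_cad": False, "peripheral_vascular_disease": False,
                "chf_hospitalization_within_3y": False,
                "major_vascular_surgery": False},
    minor={"intrathoracic_or_intraperitoneal_surgery": True, "hx_chf": False,
           "hx_tia": False, "hx_dm": True, "cr_over_2": False,
           "age_70_plus": True, "urgent_surgery": False}))   # True (3 of 7)
```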

Notable exclusion criteria: HR < 50, 2nd or 3rd degree heart block, asthma, already on beta blocker, prior intolerance of beta blocker, hx CABG within 5 years and no cardiac ischemia since

Intervention: metoprolol succinate (extended-release) 100mg PO starting 2-4 hrs before surgery, additional 100mg at 6-12 hrs postoperatively, followed by 200mg daily for 30 days. (Patients unable to take PO meds postoperatively were given metoprolol infusion.)

Comparison: placebo PO / IV at same frequency as metoprolol arm

Outcome:
Primary – composite of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest at 30 days

Secondary (at 30 days)

        • cardiovascular death
        • non-fatal MI
        • non-fatal cardiac arrest
        • all-cause mortality
        • non-cardiovascular death
        • MI
        • cardiac revascularization
        • stroke
        • non-fatal stroke
        • CHF
        • new, clinically significant atrial fibrillation
        • clinically significant hypotension
        • clinically significant bradycardia

Pre-specified subgroup analyses of the primary outcome were performed according to RCRI score, sex, type of surgery, and type of anesthesia.

Results:
9298 patients were randomized. However, fraudulent activity was detected at participating sites in Iran and Colombia, and thus 947 patients from these sites were excluded from the final analyses. Ultimately, 4174 were randomized to the metoprolol group, and 4177 were randomized to the placebo group. There were no significant differences in baseline characteristics, pre-operative cardiac medications, surgery type, or anesthesia type between the two groups (see Table 1).

Regarding the primary outcome, metoprolol patients were less likely than placebo patients to experience the primary composite endpoint of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest (HR 0.84, 95% CI 0.70-0.99, p = 0.0399). See Figure 2A for the relevant Kaplan-Meier curve. Note that the curves separate distinctly within the first several days.

Regarding selected secondary outcomes (see Table 3 for full list), metoprolol patients were more likely to die from any cause (HR 1.33, 95% CI 1.03-1.74, p = 0.0317). See Figure 2D for the Kaplan-Meier curve for all-cause mortality. Note that the curves start to separate around day 10. Cause of death was analyzed, and the only group difference in attributable cause was an increased number of deaths due to sepsis or infection in the metoprolol group (data not shown). Metoprolol patients were more likely to sustain a stroke (HR 2.17, 95% CI 1.26-3.74, p = 0.0053) or a non-fatal stroke (HR 1.94, 95% CI 1.01-3.69, p = 0.0450). Of all patients who sustained a non-fatal stroke, only 15-20% made a full recovery. Metoprolol patients were less likely to sustain new-onset atrial fibrillation (HR 0.76, 95% CI 0.58-0.99, p = 0.0435) and less likely to sustain a non-fatal MI (HR 0.70, 95% CI 0.57-0.86, p = 0.0008). There were no group differences in risk of cardiovascular death or non-fatal cardiac arrest. Metoprolol patients were more likely to sustain clinically significant hypotension (HR 1.55, 95% CI 1.38-1.74, P < 0.0001) and clinically significant bradycardia (HR 2.74, 95% CI 2.19-3.43, p < 0.0001).

Subgroup analysis did not reveal any significant interaction with the primary outcome by RCRI, sex, type of surgery, or anesthesia type.

Implication/Discussion:
In patients with cardiovascular risk factors undergoing non-cardiac surgery, the perioperative initiation of beta blockade decreased the composite risk of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest and increased the overall mortality risk and risk of stroke.

This study affirms its central hypothesis – that blunting the catecholamine surge of surgery is beneficial from a cardiac standpoint. (Most patients in this study had an RCRI of 1 or 2.) However, the attendant increase in all-cause mortality is dramatic. The increased mortality is thought to result from delayed recognition of sepsis due to masking of tachycardia. Beta blockade may also limit the physiologic hemodynamic response necessary to successfully fight a serious infection. In retrospective analyses mentioned in the discussion, the investigators state that they cannot fully explain the increased risk of stroke in the metoprolol group. However, hypotension attributable to beta blockade explains about half of the increased number of strokes.

Overall, the authors conclude that “patients are unlikely to accept the risks associated with perioperative extended-release metoprolol.”

A major limitation of this study is the fact that 10% of enrolled patients were excluded from analysis due to fraudulent activity at selected investigation sites. In terms of generalizability, it is important to remember that POISE excluded patients who were already on beta blockers.

Currently, per expert opinion at UpToDate, initiating beta blockers preoperatively in order to improve perioperative outcomes is not recommended. POISE is an important piece of evidence underpinning the 2014 ACC/AHA Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery, which includes the following recommendations regarding beta blockers:

      • Beta blocker therapy should not be started on the day of surgery (Class III – Harm, Level B)
      • Continue beta blockers in patients who are on beta blockers chronically (Class I, Level B)
      • In patients with intermediate- or high-risk preoperative tests, it may be reasonable to begin beta blockers
      • In patients with ≥ 3 RCRI risk factors, it may be reasonable to begin beta blockers before surgery
      • Initiating beta blockers in the perioperative setting as an approach to reduce perioperative risk is of uncertain benefit in those with a long-term indication but no other RCRI risk factors
      • It may be reasonable to begin perioperative beta blockers long enough in advance to assess safety and tolerability, preferably > 1 day before surgery

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Management of cardiac risk for noncardiac surgery”
4. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.

Image Credit: Mark Oniffrey, CC BY-SA 4.0, via Wikimedia Commons

Summary by Duncan F. Moore, MD

Week 38 – Effect of Early vs. Deferred Therapy for HIV (NA-ACCORD)

“Effect of Early versus Deferred Antiretroviral Therapy for HIV on Survival”

N Engl J Med. 2009 Apr 30;360(18):1815-26 [free full text]

The optimal timing of initiation of antiretroviral therapy (ART) in asymptomatic patients with HIV has been a subject of investigation since the advent of antiretrovirals. Guidelines in 1996 recommended starting ART for all HIV-infected patients with CD4 count < 500, but over time provider concerns regarding resistance, medication nonadherence, and adverse effects of medications led to more restrictive prescribing. In the mid-2000s, guidelines recommended ART initiation in asymptomatic HIV patients with CD4 < 350. However, contemporary subgroup analysis of RCT data and other limited observational data suggested that deferring initiation of ART increased rates of progression to AIDS and mortality. Thus the NA-ACCORD authors sought to retrospectively analyze their large dataset to investigate the mortality effect of early vs. deferred ART initiation.

The study examined the cases of treatment-naïve patients with HIV and no hx of AIDS-defining illness evaluated during 1996-2005. Two subpopulations were analyzed retrospectively: CD4 count 351-500 and CD4 count 500+. No intervention was undertaken. The primary outcome was, within each CD4 sub-population, mortality in patients treated with ART within 6 months after the first CD4 count within the range of interest vs. mortality in patients for whom ART was deferred until the CD4 count fell below the range of interest.

8362 eligible patients had a CD4 count of 351-500, and of these, 2084 (25%) initiated ART within 6 months, whereas 6278 (75%) patients deferred therapy until CD4 < 351. 9155 eligible patients had a CD4 count of 500+, and of these, 2220 (24%) initiated ART within 6 months, whereas 6935 (76%) patients deferred therapy until CD4 < 500. In both CD4 subpopulations, patients in the early-ART group were older, more likely to be white, more likely to be male, less likely to have HCV, and less likely to have a history of injection drug use. Cause-of-death information was obtained in only 16% of all deceased patients. The majority of these deaths in both the early- and deferred-therapy groups were from non-AIDS-defining conditions.

In the subpopulation with CD4 351-500, there were 137 deaths in the early-therapy group vs. 238 deaths in the deferred-therapy group. Relative risk of death for deferred therapy was 1.69 (95% CI 1.26-2.26, p < 0.001) per Cox regression stratified by year. After adjustment for history of injection drug use, RR = 1.28 (95% CI 0.85-1.93, p = 0.23). In an unadjusted analysis, HCV infection was a risk factor for mortality (RR 1.85, p= 0.03). After exclusion of patients with HCV infection, RR for deferred therapy = 1.52 (95% CI 1.01-2.28, p = 0.04).

In the subpopulation with CD4 500+, there were 113 deaths in the early-therapy group vs. 198 in the deferred-therapy group. Relative risk of death for deferred therapy was 1.94 (95% CI 1.37-2.79, p < 0.001). After adjustment for history of injection drug use, RR = 1.73 (95% CI 1.08-2.78, p = 0.02). Again, HCV infection was a risk factor for mortality (RR = 2.03, p < 0.001). After exclusion of patients with HCV infection, RR for deferred therapy = 1.90 (95% CI 1.14-3.18, p = 0.01).

Thus, in a large retrospective study, the deferred initiation of antiretrovirals in asymptomatic HIV infection was associated with higher mortality.

This was the first retrospective study of early initiation of ART in HIV that was large enough to power mortality as an endpoint while controlling for covariates. However, it is limited significantly by its observational, non-randomized design, which leaves room for substantial unmeasured confounding. A notable example is the absence of socioeconomic variables (e.g. insurance status): perhaps early-initiation patients were better off economically, and that advantage, rather than early ART, drove the mortality benefit. This study also made no mention of the tolerability of ART or adverse reactions to it.

In the years that followed this study, NIH and WHO consensus guidelines shifted toward earlier treatment of HIV. In 2015, the INSIGHT START trial (the first large RCT of immediate vs. deferred ART) showed a definitive benefit of immediate initiation of ART in patients with CD4 500+. Since that time, per UpToDate, the standard of care has been to treat “essentially all” HIV-infected patients with ART.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. INSIGHT START (2015), Pubmed, NEJM PDF
4. UpToDate, “When to initiate antiretroviral therapy in HIV-infected patients”

Summary by Duncan F. Moore, MD

Image Credit: Sigve, CC0 1.0, via WikiMedia Commons

Week 37 – LOTT

“A Randomized Trial of Long-Term Oxygen for COPD with Moderate Desaturation”

by the Long-Term Oxygen Treatment Trial (LOTT) Research Group

N Engl J Med. 2016 Oct 27;375(17):1617-1627. [free full text]

The long-term treatment of severe resting hypoxemia (SpO2 < 89%) in COPD with supplemental oxygen has been a cornerstone of modern outpatient COPD management since its mortality benefit was demonstrated circa 1980. Subsequently, the utility of supplemental oxygen in COPD patients with moderate resting daytime hypoxemia (SpO2 89-93%) was investigated in trials in the 1990s; however, such trials were underpowered to assess a mortality benefit. Ultimately, the LOTT trial was funded by the NIH and the Centers for Medicare and Medicaid Services (CMS) primarily to determine whether there was a mortality benefit of supplemental oxygen in COPD patients with moderate hypoxemia, as well as to analyze numerous other secondary outcomes, such as hospitalization rates and exercise performance.

The LOTT trial was originally planned to enroll 3500 patients. However, after 7 months the trial had randomized only 34 patients, and mortality had been lower than anticipated. Thus, in late 2009, the trial was redesigned with broader inclusion criteria (patients with exercise-induced hypoxemia could now qualify), and the primary endpoint was broadened from mortality to a composite of time to first hospitalization or death.

The revised LOTT trial enrolled COPD patients with moderate resting hypoxemia (SpO2 89-93%) or moderate exercise-induced desaturation during the 6-minute walk test (SpO2 ≥ 80% for ≥ 5 minutes and < 90% for ≥ 10 seconds). Patients were randomized either to supplemental oxygen (24-hour oxygen if resting SpO2 was 89-93%; otherwise, for desaturation occurring only during exercise, oxygen during sleep and exercise only) or to usual care without supplemental oxygen. The supplemental oxygen flow rate was 2 liters per minute and could be uptitrated by protocol in patients with exercise-induced hypoxemia. The primary outcome was time to the composite of first hospitalization or death. Secondary outcomes included hospitalization rates, lung function, performance on the 6-minute walk test, and quality of life.
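The oxygen prescriptions in the supplemental-oxygen arm follow a simple rule from the description above; a brief sketch (illustrative only) is below.

```python
# Sketch of the oxygen-prescription rule for the supplemental-oxygen arm as
# described above: 24-hour oxygen for moderate resting hypoxemia, otherwise
# oxygen during sleep and exercise only for isolated exercise-induced
# desaturation. The starting flow rate in both cases was 2 L/min.
def lott_prescription(resting_spo2: float, exercise_desaturation: bool) -> str:
    if 89 <= resting_spo2 <= 93:
        return "24-hour supplemental oxygen, 2 L/min"
    if exercise_desaturation:
        return "oxygen during sleep and exercise only, 2 L/min"
    return "did not meet trial criteria for supplemental oxygen"

print(lott_prescription(resting_spo2=95, exercise_desaturation=True))
```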

368 patients were randomized to the supplemental-oxygen group and 370 to the no-supplemental-oxygen group. Of the supplemental-oxygen group, 220 patients were prescribed 24-hour oxygen support, and 148 were prescribed oxygen for use during exercise and sleep only. Median duration of follow-up was 18.4 months. Regarding the primary outcome, there was no group difference in time to death or first hospitalization (p = 0.52 by log-rank test). See Figure 1A. Furthermore, there were no treatment-group differences in the primary outcome among patients of the following pre-specified subgroups: type of oxygen prescription, “desaturation profile,” race, sex, smoking status, SpO2 nadir during 6-minute walk, FEV1, BODE  index, SF-36 physical-component score, BMI, or history of anemia. Patients with a COPD exacerbation in the 1-2 months prior to enrollment, age 71+ at enrollment, and those with lower Quality of Well-Being Scale score at enrollment all demonstrated benefit from supplemental O2, but none of these subgroup treatment effects were sustained when the analyses were adjusted for multiple comparisons. Regarding secondary outcomes, there were no treatment-group differences in rates of all-cause hospitalizations, COPD-related hospitalizations, or non-COPD-related hospitalizations, and there were no differences in change from baseline measures of quality of life, anxiety, depression, lung function, and distance achieved in 6-minute walk.

The LOTT trial presents compelling evidence that there is no significant benefit, mortality or otherwise, of oxygen supplementation in patients with COPD and either moderate resting hypoxemia (SpO2 89-93%) or exercise-induced hypoxemia. Although the trial underwent a substantial redesign early in its course, it remains our best evidence to date regarding the benefit (or lack thereof) of oxygen in this patient group. As acknowledged by the authors, the trial may have had significant selection bias in referral. (Many physicians did not refer specific patients for enrollment because “they were too ill or [were believed to have benefited] from oxygen.”) Another notable limitation of this study is that nocturnal oxygen saturation was not evaluated. The authors do note that “some patients with COPD and severe nocturnal desaturation might benefit from nocturnal oxygen supplementation.”

For further contemporary contextualization of the study, please see the excellent post at PulmCCM from 11/2016. Included in that post is a link to an overview and Q&A from the NIH regarding the LOTT study.

References / Additional Reading:
1. PulmCCM, “Long-term oxygen brought no benefits for moderate hypoxemia in COPD”
2. LOTT @ 2 Minute Medicine
3. LOTT @ ClinicalTrials.gov
4. McDonald, J.H. 2014. Handbook of Biological Statistics (3rd ed.). Sparky House Publishing, Baltimore, Maryland.
5. Centers for Medicare and Medicaid Services, “Certificate of Medical Necessity CMS-484– Oxygen”
6. Ann Am Thorac Soc. 2018 Dec;15(12):1369-1381. “Optimizing Home Oxygen Therapy. An Official American Thoracic Society Workshop Report.”

Summary by Duncan F. Moore, MD

Image Credit: Patrick McAleer, CC BY-SA 2.0 UK, via Wikimedia Commons

Week 36 – HAS-BLED

“A Novel User-Friendly Score (HAS-BLED) To Assess 1-Year Risk of Major Bleeding in Patients with Atrial Fibrillation”

Chest. 2010 Nov;138(5):1093-100 [free full text]

Atrial fibrillation (AF) is a well-known risk factor for ischemic stroke. Stroke risk is further increased by individual comorbidities, such as CHF, HTN, and DM, and can be stratified with scores, such as CHADS2 and CHA2DS2VASC. Patients with intermediate stroke risk are recommended to be treated with oral anticoagulation (OAC). However, stroke risk is often also closely related to bleeding risk, and the benefits of anticoagulation for stroke need to be weighed against the added risk of bleeding. At the time of this study, there were no validated and user-friendly bleeding risk-stratification schemes. This study aimed to develop a practical risk score to estimate the 1-year risk of major bleeding (as defined in the study) in a contemporary, real world cohort of patients with AF.

The study enrolled adults with an EKG or Holter-proven diagnosis of AF. (Patients with mitral valve stenosis or previous valvular surgery were excluded.) No experiment was performed in this retrospective cohort study.

In a derivation cohort, the authors retrospectively performed univariate analyses to identify clinical features associated with major bleeding (p < 0.10). Based on systematic reviews, they added further established risk factors for major bleeding. What resulted was a comprehensive list of risk factors summarized by the acronym HAS-BLED:

H – Hypertension (> 160 mmHg systolic)
A – Abnormal renal function (HD, transplant, Cr > 2.26 mg/dL) or liver function (cirrhosis, bilirubin > 2x normal with AST/ALT/ALP > 3x normal) – 1 pt each for abnormal renal or abnormal liver function
S – Stroke

B – Bleeding (prior major bleed or predisposition to bleed)
L – Labile INRs (time in therapeutic range < 60%)
E – Elderly (age > 65)
D – Drugs (i.e. ASA, clopidogrel, NSAIDs) or alcohol use (> 8 units per week) concomitantly – 1 pt each for use of either

Each risk factor is worth one point, for a maximum score of 9. The HAS-BLED score was then compared to the HEMORR2HAGES scheme [https://www.mdcalc.com/hemorr2hages-score-major-bleeding-risk], a prior tool for estimating bleeding risk.
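A small sketch of the scoring arithmetic as laid out above (each item scores one point, with the renal/liver and drugs/alcohol items each contributing up to two points):

```python
# Sketch of the HAS-BLED score as described above: one point per item, with
# the renal/liver item and the drugs/alcohol item each worth up to two
# points, for a maximum score of 9.
def has_bled(hypertension: bool, abnormal_renal: bool, abnormal_liver: bool,
             stroke: bool, bleeding_history: bool, labile_inr: bool,
             elderly_over_65: bool, antiplatelet_or_nsaid: bool,
             alcohol_excess: bool) -> int:
    return sum([hypertension, abnormal_renal, abnormal_liver, stroke,
                bleeding_history, labile_inr, elderly_over_65,
                antiplatelet_or_nsaid, alcohol_excess])

# Example: hypertensive 70-year-old on aspirin with labile INRs -> score 4.
print(has_bled(True, False, False, False, False, True, True, True, False))
```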

Outcomes:

      • incidence of major bleeding within 1 year, overall
      • bleeds per 100 patient-years, by HAS-BLED score
      • c-statistic for the HAS-BLED score in predicting the risk of bleeding

Definitions:

      • major bleeding = bleeding causing hospitalization, Hgb drop > 2 g/dL, or requiring blood transfusion, that was not a hemorrhagic stroke
      • hemorrhagic stroke = focal neurologic deficit of sudden onset, diagnosed by a neurologist, lasting >24h and caused by bleeding

Results:
3,456 patients with AF without mitral valve stenosis or prior valve surgery who completed their 1-year follow-up were analyzed retrospectively. 64.8% (2242) of these patients were on OAC (12.8% of whom were on concurrent antiplatelet therapy), 24% (828) were on antiplatelet therapy alone, and 10.2% (352) received no antithrombotic therapy. 1.5% (53) of patients experienced a major bleed during the first year, with 17% (9) of these patients sustaining intracerebral hemorrhage.

HAS-BLED Score     Bleeds per 100 patient-years
0                  1.13
1                  1.02
2                  1.88
3                  3.74
4                  8.70
5                  12.50
6*                 0.0          *(n = 2 patients at risk, neither bled)

Patients were given a HAS-BLED score and a HEMORR2HAGES score. C-statistics were then used to determine the predictive accuracy of each model overall as well as within patient subgroups (OAC alone, OAC + antiplatelet, antiplatelet alone, no antithrombotic therapy).

C statistics for HAS-BLED were as follows: for overall cohort, 0.72 (95%CI 0.65-0.79); for OAC alone, 0.69 (95%CI 0.59-0.80); for OAC + antiplatelet, 0.78 (95%CI 0.65-0.91); for antiplatelet alone, 0.91 (95%CI 0.83-1.00); and for those on no antithrombotic therapy, 0.85 (95%CI 0.00-1.00).

C statistics for HEMORR2HAGES were as follows: for overall cohort, 0.66 (95%CI 0.57-0.74); for OAC alone, 0.64 (95%CI 0.53-0.75); for OAC + antiplatelet, 0.83 (95%CI 0.74-0.91); for antiplatelet alone, 0.83 (95%CI 0.68-0.98); and for those without antithrombotic therapy, 0.81 (95%CI 0.00-1.00).

Implication/Discussion:
This study helped to establish a practical and user-friendly assessment of bleeding risk in AF. HAS-BLED is superior to its predecessor HEMORR2HAGES in that it has an easier-to-remember acronym and is quicker and simpler to perform. All of its risk factors are readily available from the clinical history or routine testing. Both stratification tools had broadly similar c-statistics for the overall cohort – 0.72 for HAS-BLED versus 0.66 for HEMORR2HAGES. However, HAS-BLED performed particularly well in patients on antiplatelet therapy alone or on no antithrombotic therapy at all (c-statistics 0.91 and 0.85, respectively).

This study is useful because it provides evidence-based, easily calculable, and actionable risk stratification for bleeding in AF. In prior studies, such as ACTIVE-A (ASA + clopidogrel versus ASA alone for patients with AF deemed unsuitable for OAC), almost half of all patients (n ≈ 3500) were classified as “unsuitable for OAC” based solely on physician clinical judgment, without predefined objective scoring. Now, physicians have an objective way to assess bleeding risk rather than relying on “gut feeling” or the desire to avoid iatrogenic insult.

Subsequent analyses of the RE-LY trial applied the HAS-BLED score to identify which patients with AF should receive the standard dabigatran dose (150mg BID) versus a lower dose (110mg BID) for anticoagulation. This risk-stratified dosing approach resulted in a significant reduction in major bleeding compared with warfarin while maintaining a similar reduction in stroke risk.

Furthermore, the HAS-BLED score could allow the physician to be more confident when deciding which patients may be appropriate for referral for a left atrial appendage occlusion device (e.g. Watchman).

Limitations:
The study had a limited number of major bleeds and a short follow-up period, and thus it is possible that other important risk factors for bleeding were not identified. Also, there were large numbers of patients lost to 1-year follow-up. These patients were likely to have had more comorbidities and may have transferred to nursing homes or even have died – which may have led to an underestimate of bleeding rates. Furthermore, the study had a modest number of very elderly patients (i.e. 75-84 and ≥85), who are likely to represent the greatest bleeding risk.

Bottom Line:
HAS-BLED provides an easy, practical tool to assess the individual bleeding risk of patients with AF. Oral anticoagulation should be considered for scores of 3 or less. When the HAS-BLED score is ≥ 4, it is reasonable to consider alternatives to oral anticoagulation.

Further Reading/References:
1. HAS-BLED @ 2 Minute Medicine
2. ACTIVE-A trial
3. RE-LY trial:
4. RE-LY @ Wiki Journal Club
5. HAS-BLED Calculator
6. HEMORR2HAGES Calculator
7. CHADS2 Calculator
8. CHA2DS2VASC Calculator
9. Watchman (for Healthcare Professionals)

Summary by Patrick Miller, MD

Image Credit: CardioNetworks, CC BY-SA 3.0, via Wikimedia Commons

Week 35 – CORTICUS

“Hydrocortisone Therapy for Patients with Septic Shock”

N Engl J Med. 2008 Jan 10;358(2):111-24. [free full text]

Steroid therapy in septic shock has been a hotly debated topic since the 1980s. The Annane trial in 2002 suggested a mortality benefit of early steroid therapy, and so for almost a decade this became standard of care. In 2008, the CORTICUS trial suggested otherwise.

The trial enrolled ICU patients with onset of septic shock within the past 72 hours (defined as SBP < 90 despite fluids or need for vasopressors, plus hypoperfusion or organ dysfunction from sepsis). Excluded patients included those with an “underlying disease with a poor prognosis,” life expectancy < 24 hours, immunosuppression, and recent corticosteroid use. Patients were randomized to hydrocortisone 50mg IV q6h x5 days plus taper or to placebo injections q6h x5 days plus taper. The primary outcome was 28-day mortality among patients who did not have a response to ACTH stimulation testing (cortisol rise < 9 mcg/dL). Secondary outcomes included 28-day mortality in patients who did respond to ACTH stimulation, 28-day mortality in all patients, reversal of shock (defined as SBP ≥ 90 for at least 24 hours without vasopressors) in all patients, and time to reversal of shock in all patients.
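For clarity on how patients were partitioned for the primary analysis, here is a one-function sketch of the ACTH-stimulation classification described above (illustrative only).

```python
# Classification used for the primary analysis above: a cortisol rise of
# less than 9 mcg/dL after ACTH stimulation defines a non-responder.
def acth_nonresponder(baseline_cortisol_mcg_dl: float,
                      stimulated_cortisol_mcg_dl: float) -> bool:
    return (stimulated_cortisol_mcg_dl - baseline_cortisol_mcg_dl) < 9.0

print(acth_nonresponder(10.0, 15.0))   # True -> non-responder
```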

In ACTH non-responders (n = 233), intervention vs. control 28-day mortality was 39.2% vs. 36.1%, respectively (p = 0.69). In ACTH responders (n = 254), intervention vs. control 28-day mortality was 28.8% vs. 28.7%, respectively (p = 1.00), and reversal of shock occurred in 84.7% vs. 76.5% (p = 0.13). Among all patients, intervention vs. control 28-day mortality was 34.3% vs. 31.5% (p = 0.51) and reversal of shock occurred in 79.7% vs. 74.2% (p = 0.18). The time to reversal of shock was significantly shorter among patients receiving hydrocortisone (per Kaplan-Meier analysis, p < 0.001; see Figure 2), with a median time to reversal of 3.3 days vs. 5.8 days (95% CI 5.2-6.9).

In conclusion, the CORTICUS trial demonstrated no mortality benefit of steroid therapy in septic shock regardless of a patient’s response to ACTH. Despite the lack of mortality benefit, it demonstrated an earlier resolution of shock with steroids. This lack of mortality benefit sharply contrasted with the previous Annane 2002 study. Several reasons have been posited for this difference including poor powering of the CORTICUS study (which did not reach the desired n = 800), inclusion starting within 72 hrs of septic shock vs. Annane starting within 8 hrs, and the overall sicker nature of Annane patients (who were all mechanically ventilated). Subsequent meta-analyses disagree about the mortality benefit of steroids, but meta-regression analyses suggest benefit among the sickest patients. All studies agree about the improvement in shock reversal. The 2016 Surviving Sepsis Campaign guidelines recommend IV hydrocortisone in septic shock in patients who continue to be hemodynamically unstable despite adequate fluid resuscitation and vasopressor therapy.

Per Drs. Sonti and Vinayak of the GUH MICU (excerpted from their excellent Georgetown Critical Care Top 40): “Practically, we use steroids when reaching for a second pressor or if there is multiorgan system dysfunction. Our liver patients may have deficient cortisol production due to inadequate precursor lipid production; use of corticosteroids in these patients represents physiologic replacement rather than adjunct supplement.”

The ANZICS collaborative group published the ADRENAL trial in NEJM in 2018 – which demonstrated that “among patients with septic shock undergoing mechanical ventilation, a continuous infusion of hydrocortisone did not result in lower 90-day mortality than placebo.” The authors did note “a more rapid resolution of shock and a lower incidence of blood transfusion” among patients receiving hydrocortisone. The folks at EmCrit argued [https://emcrit.org/emnerd/cc-nerd-case-relative-insufficiency/] that this was essentially a negative study, and thus in the existing context of CORTICUS, the results of the ADRENAL trial do not change our management of refractory septic shock.

Finally, the 2018 APPROCCHSS trial (also by Annane) evaluated the survival benefit of hydrocortisone plus fludrocortisone vs. placebo in patients with septic shock and found that this intervention reduced 90-day all-cause mortality. At this time, it is difficult to truly discern the added information of this trial given its timeframe, sample size, and severity of underlying illness. See the excellent discussion in the following links: WikiJournal Club, PulmCrit, PulmCCM, and UpToDate.

References / Additional Reading:
1. CORTICUS @ Wiki Journal Club
2. CORTICUS @ 2 Minute Medicine
3. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock (2016), section “Corticosteroids”
4. Annane trial (2002) full text
5. PulmCCM, “Corticosteroids do help in sepsis: ADRENAL trial”
6. UpToDate, “Glucocorticoid therapy in septic shock”

Post by Gordon Pelegrin, MD

Image Credit: LHcheM, CC BY-SA 3.0, via Wikimedia Commons