Week 13 – Sepsis-3

“The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)”

JAMA. 2016 Feb 23;315(8):801-10. [free full text]

In practice, we recognize sepsis as a potentially life-threatening condition that arises secondary to infection. Because the SIRS criteria were of limited sensitivity and specificity in identifying sepsis, and because our understanding of the pathophysiology of sepsis had purportedly advanced significantly since the previous definition, an international task force of 19 experts was convened to redefine sepsis and to improve its prognostication. The resulting 2016 Sepsis-3 definition was the subject of immediate and sustained controversy.

In the words of Sepsis-3, sepsis simply “is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection.” The paper further defines organ dysfunction as an increase in the SOFA score of 2+ points. However, the authors state that “the SOFA score is not intended to be used as a tool for patient management but as a means to clinically characterize a septic patient.” The authors note that qSOFA, a simpler bedside tool introduced in this paper, can promptly identify patients “with suspected infection who are likely to have a prolonged ICU stay or die in the hospital.” A positive qSOFA screen is 2+ of the following: AMS, SBP ≤ 100 mmHg, or respiratory rate ≥ 22/min. At the time of this endorsement, qSOFA had not been validated prospectively. Finally, septic shock was defined as sepsis with persistent hypotension requiring vasopressors to maintain MAP ≥ 65 mmHg and with a serum lactate > 2 mmol/L despite adequate volume resuscitation.
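Because qSOFA and the Sepsis-3 septic shock criteria are just threshold checks, they reduce to a few lines of code. Below is a minimal sketch in Python; the function names and inputs are ours, not the task force’s, and this is for illustration, not clinical use:

```python
# Illustrative sketch of the Sepsis-3 bedside criteria described above.
# Function names and inputs are invented for this example.

def qsofa_score(ams: bool, sbp_mmhg: float, rr_per_min: float) -> int:
    """One point each for AMS, SBP <= 100 mmHg, and RR >= 22/min."""
    return int(ams) + int(sbp_mmhg <= 100) + int(rr_per_min >= 22)

def qsofa_positive(ams: bool, sbp_mmhg: float, rr_per_min: float) -> bool:
    """A positive screen is 2+ of the 3 criteria."""
    return qsofa_score(ams, sbp_mmhg, rr_per_min) >= 2

def septic_shock(pressors_needed_for_map_65: bool, lactate_mmol_l: float,
                 adequately_volume_resuscitated: bool) -> bool:
    """Sepsis-3 septic shock: vasopressors required to maintain MAP >= 65 mmHg
    plus lactate > 2 mmol/L despite adequate volume resuscitation."""
    return (pressors_needed_for_map_65
            and lactate_mmol_l > 2
            and adequately_volume_resuscitated)

# Example: AMS with RR 24/min and SBP 118 mmHg yields qSOFA 2, a positive screen.
assert qsofa_positive(ams=True, sbp_mmhg=118, rr_per_min=24)
```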

As noted contemporaneously in the excellent PulmCrit blog post “Top ten problems with the new sepsis definition,” Sepsis-3 was not endorsed by the American College of Chest Physicians, the IDSA, any emergency medicine society, or any hospital medicine society. On behalf of the American College of Chest Physicians, Dr. Simpson published a scathing rejection of Sepsis-3 in Chest in May 2016. He noted “there is still no known precise pathophysiological feature that defines sepsis.” He went on to state “it is not clear to us that readjusting the sepsis criteria to be more specific for mortality is an exercise that benefits patients,” and said “to abandon one system of recognizing sepsis [SIRS] because it is imperfect and not yet in universal use for another system that is used even less seems unwise without prospective validation of that new system’s utility.”

In fact, subsequent validation of qSOFA demonstrated that the SIRS criteria had superior sensitivity for predicting in-hospital mortality, while qSOFA had superior specificity. See the following posts at PulmCrit for further discussion: [https://emcrit.org/isepsis/isepsis-sepsis-3-0-much-nothing/] [https://emcrit.org/isepsis/isepsis-sepsis-3-0-flogging-dead-horse/].

At UpToDate, authors note that “data of the value of qSOFA is conflicting,” and because of this, “we believe that further studies that demonstrate improved clinically meaningful outcomes due to the use of qSOFA compared to clinical judgement are warranted before it can be routinely used to predict those at risk of death from sepsis.”

Additional Reading:
1. PulmCCM, “Simple qSOFA score predicts sepsis as well as anything else”
2. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Image Credit: By Mark Oniffrey – Own work, CC BY-SA 4.0

Week 12 – Rivers Trial

“Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock”

N Engl J Med. 2001 Nov 8;345(19):1368-77. [free full text]

Sepsis is common and, in its more severe manifestations, confers a high mortality risk. Fundamentally, sepsis is a global mismatch between oxygen demand and delivery. Around the time of this seminal study by Rivers et al., there was increasing recognition of the concept of the “golden hour” in sepsis management – “where definitive recognition and treatment provide maximal benefit in terms of outcome” (1368). Rivers and his team created a “bundle” of early sepsis interventions that targeted preload, afterload, and contractility, dubbed early goal-directed therapy (EGDT). They evaluated this bundle’s effect on mortality and end-organ dysfunction.

The “Rivers trial” randomized adults presenting to a single US academic center ED with ≥ 2 SIRS criteria and either SBP ≤ 90 mmHg after a crystalloid challenge of 20–30 ml/kg over 30 min or lactate > 4 mmol/L to either treatment with the EGDT bundle or to the standard of care.

Intervention: early goal-directed therapy (EGDT)

  • Received a central venous catheter with continuous central venous O2 saturation (ScvO2) measurement
  • Treated according to EGDT protocol (see Figure 2, or the outline and sketch below) in ED for at least six hours
    • 500 ml bolus of crystalloid q30min to achieve CVP 8–12 mmHg
    • Vasopressors to achieve MAP ≥ 65
    • Vasodilators to achieve MAP ≤ 90
    • If ScvO2 < 70%, transfuse RBCs to achieve Hct ≥ 30%
    • If, after CVP, MAP, and Hct were optimized as above and ScvO2 remained < 70%, dobutamine was added and uptitrated to achieve ScvO2 ≥ 70% or until max dose of 20 μg/kg/min
      • dobutamine was de-escalated if MAP < 65 or HR > 120
    • Patients in whom hemodynamics could not be optimized were intubated and sedated, in order to decrease oxygen consumption
  • Patients were transferred to inpatient ICU bed as soon as able, and upon transfer ScvO2 measurement was discontinued
  • Inpatient team was blinded to treatment group assignment
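
Read as an algorithm, the protocol is a prioritized cascade of threshold checks: first preload (CVP), then perfusion pressure (MAP), then oxygen delivery (ScvO2). A schematic Python sketch follows; the data structure and names are ours (the trial protocol was a paper flowchart in Figure 2, not software), and the dobutamine de-escalation and intubation steps appear only as comments:

```python
# Schematic rendering of the EGDT decision cascade outlined above.
# Teaching sketch only -- field and function names are invented,
# and this is not an executable clinical protocol.

from dataclasses import dataclass

@dataclass
class Hemodynamics:
    cvp_mmhg: float
    map_mmhg: float
    scvo2_pct: float
    hct_pct: float

def egdt_next_step(h: Hemodynamics) -> str:
    if h.cvp_mmhg < 8:
        return "500 ml crystalloid bolus q30min (target CVP 8-12 mmHg)"
    if h.map_mmhg < 65:
        return "vasopressors (target MAP >= 65 mmHg)"
    if h.map_mmhg > 90:
        return "vasodilators (target MAP <= 90 mmHg)"
    if h.scvo2_pct < 70:
        if h.hct_pct < 30:
            return "transfuse RBCs (target Hct >= 30%)"
        # Dobutamine was de-escalated if MAP < 65 mmHg or HR > 120; if goals
        # still could not be met, the protocol called for intubation and sedation.
        return "dobutamine, uptitrated to ScvO2 >= 70% or max 20 ug/kg/min"
    return "all goals met: continue protocol in the ED for at least six hours"
```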

The primary outcome was in-hospital mortality. Secondary endpoints included: resuscitation end points, organ-dysfunction scores, coagulation-related variables, administered treatments, and consumption of healthcare resources.

130 patients were randomized to EGDT, and 133 to standard therapy. There were no differences in baseline characteristics. There was no group difference in the prevalence of antibiotics given within the first 6 hours. Standard-therapy patients spent 6.3 ± 3.2 hours in the ED, whereas EGDT patients spent 8.0 ± 2.1 hours (p < 0.001).

In-hospital mortality was 46.5% in the standard-therapy group and 30.5% in the EGDT group (p = 0.009, NNT 6.25). 28-day and 60-day mortality rates were also lower in the EGDT group. See Table 3.
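(For reference, the NNT is just the reciprocal of the absolute risk reduction: NNT = 1/ARR = 1/(0.465 − 0.305) = 1/0.160 ≈ 6.25, i.e., roughly one in-hospital death averted for every 6–7 patients treated with EGDT.)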

During the initial six hours of resuscitation, there was no significant group difference in mean heart rate or CVP. MAP was higher in the EGDT group (p < 0.001), but all patients in both groups reached a MAP ≥ 65 mmHg. ScvO2 ≥ 70% was met by 60.2% of standard-therapy patients and 94.9% of EGDT patients (p < 0.001). A combination endpoint of achievement of CVP, MAP, and UOP (≥ 0.5 cc/kg/hr) goals was met by 86.1% of standard-therapy patients and 99.2% of EGDT patients (p < 0.001). Standard-therapy patients had lower ScvO2 and greater base deficit, while lactate and pH values were similar in both groups.

During the period of 7 to 72 hours, the organ-dysfunction scores of APACHE II, SAPS II, and MODS were higher in the standard-therapy group (see Table 2). The prothrombin time, fibrin-split products concentration, and d-dimer concentrations were higher in the standard-therapy group, while PTT, fibrinogen concentration, and platelet counts were similar.

During the initial six hours, EGDT patients received significantly more fluids, pRBCs, and inotropic support than standard-therapy patients. Rates of vasopressor use and mechanical ventilation were similar. During the period of 7 to 72 hours, standard-therapy patients received more fluids, pRBCs, and vasopressors than the EGDT group, and they were more likely to be intubated and to have pulmonary-artery catheterization. Rates of inotrope use were similar. Overall, during the first 72 hrs, standard-therapy patients were more likely to receive vasopressors, be intubated, and undergo pulmonary-artery catheterization. EGDT patients were more likely to receive pRBC transfusion. There was no group difference in total volume of fluid administration or inotrope use. Regarding utilization, there were no group differences in mean duration of vasopressor therapy, mechanical ventilation, or length of stay. Among patients who survived to discharge, standard-therapy patients spent longer in the hospital than EGDT patients (18.4 ± 15.0 vs. 14.6 ± 14.5 days, respectively, p = 0.04).

In conclusion, early goal-directed therapy reduced in-hospital mortality in patients presenting to the ED with severe sepsis or septic shock when compared with usual care. In their discussion, the authors note that “when early therapy is not comprehensive, the progression to severe disease may be well under way at the time of admission to the intensive care unit” (1376).

The Rivers trial has been cited over 10,500 times. It has been widely discussed and dissected for decades. Most importantly, it helped catalyze an ongoing paradigm shift in what constitutes “usual care” in sepsis. As noted by our own Drs. Sonti and Vinayak in their Georgetown Critical Care Top 40: “Though we do not use the ‘Rivers protocol’ as written, concepts (timely resuscitation) have certainly infiltrated our ‘standard of care’ approach.” The Rivers trial evaluated the effect of a bundle (multiple interventions). It was a relatively complex protocol, and it has been recognized that the transfusion of blood to Hgb > 10 g/dL may have caused significant harm. In aggregate, the most critical elements of the modern initial resuscitation in sepsis are early administration of antibiotics within the first hour (notably not protocolized by Rivers) and aggressive administration of IV fluids (now usually 30 cc/kg of crystalloid within the first 3 hours of presentation).

More recently, there have been three large RCTs of EGDT versus usual care and/or protocols that used some of the EGDT targets: ProCESS (2014, USA), ARISE (2014, Australia), and ProMISe (2015, UK). In general terms, EGDT provided no mortality benefit compared to usual care. The authors of these three trials prospectively planned a meta-analysis, the 2017 PRISM study, which concluded that “EGDT did not result in better outcomes than usual care and was associated with higher hospitalization costs across a broad range of patient and hospital characteristics.” Although patients in the Rivers trial were sicker than those in ProCESS/ARISE/ProMISe, the PRISM subgroup analysis did not find EGDT more beneficial in sicker patients. Overall, the PRISM authors noted that “it remains possible that general advances in the provision of care for sepsis and septic shock, to the benefit of all patients, explain part or all of the difference in findings between the trial by Rivers et al. and the more recent trials.”

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Life in The Fast Lane
4. Georgetown Critical Care Top 40
5. “A randomized trial of protocol-based care for early septic shock” (ProCESS). NEJM 2014.
6. “Goal-directed resuscitation for patients with early septic shock” (ARISE). NEJM 2014.
7. “Trial of early, goal-directed resuscitation for septic shock” (ProMISe). NEJM 2015.
8. “Early, Goal-Directed Therapy for Septic Shock – A Patient-level Meta-Analysis” PRISM. NEJM 2017.
9. Surviving Sepsis Campaign
10. UpToDate, “Evaluation and management of suspected sepsis and septic shock in adults”

Summary by Duncan F. Moore, MD

Image Credit: By Clinical_Cases, [CC BY-SA 2.5] via Wikimedia Commons

Week 11 – AFFIRM

“A Comparison of Rate Control and Rhythm Control in Patients with Atrial Fibrillation”

by the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) Investigators

N Engl J Med. 2002 Dec 5;347(23):1825-33. [free full text]

It seems like the majority of patients with atrial fibrillation that we encounter today in the inpatient setting are being treated with a rate-control strategy, as opposed to a rhythm-control strategy. There was a time when both approaches were considered acceptable, and perhaps rhythm control was even the preferred initial strategy. The AFFIRM trial was the landmark study to address this debate.

The trial randomized patients with atrial fibrillation (judged “likely to be recurrent”) aged 65 or older “or who had other risk factors for stroke or death” to either 1) a rhythm-control strategy with one or more drugs from a pre-specified list and/or cardioversion to achieve sinus rhythm or 2) a rate-control strategy with beta-blockers, CCBs, and/or digoxin to a target resting HR ≤ 80 and a six-minute walk test HR ≤ 110. The primary endpoint was death during follow-up. The major secondary endpoint was a composite of death, disabling stroke, disabling anoxic encephalopathy, major bleeding, and cardiac arrest.
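The rate-control adequacy criterion is simple enough to state as a predicate. A trivial sketch (names are ours, for illustration only):

```python
def rate_control_adequate(resting_hr_bpm: float, six_min_walk_hr_bpm: float) -> bool:
    """AFFIRM rate-control targets: resting HR <= 80 bpm and
    HR <= 110 bpm during a six-minute walk test."""
    return resting_hr_bpm <= 80 and six_min_walk_hr_bpm <= 110
```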

4060 patients were randomized. Death occurred in 26.7% of rhythm-control patients versus 25.9% of rate-control patients (HR 1.15, 95% CI 0.99–1.34, p = 0.08). The composite secondary endpoint occurred in 32.0% of rhythm-control patients versus 32.7% of rate-control patients (p = 0.33). The rhythm-control strategy was associated with a higher risk of death among patients older than 65 and patients with CAD (see Figure 2). Additionally, rhythm-control patients were more likely to be hospitalized during follow-up (80.1% vs. 73.0%, p < 0.001) and to develop torsades de pointes (0.8% vs. 0.2%, p = 0.007).

This trial demonstrated that a rhythm-control strategy in atrial fibrillation offers no mortality benefit over a rate-control strategy. At the time of publication, the authors wrote that rate control was an “accepted, though often secondary alternative” to rhythm control. Their study clearly demonstrated that there was no significant mortality benefit to either strategy and that hospitalizations were greater in the rhythm-control group. Subgroup analysis suggested that rhythm control led to higher mortality among the elderly and those with CAD. Notably, 37.5% of rhythm-control patients had crossed over to the rate-control strategy by 5 years of follow-up, whereas only 14.9% of rate-control patients had switched to rhythm control.

But what does this study mean for our practice today? Generally speaking, rate control is preferred in most patients, particularly the elderly and patients with CHF, whereas rhythm control may be pursued in patients with persistent symptoms despite rate control, patients unable to achieve rate control on AV nodal agents alone, and patients younger than 65. Both the AHA/ACC (2014) and the European Society of Cardiology (2016) guidelines have extensive recommendations that detail specific patient scenarios.

Further Reading / References:
1. Cardiologytrials.org
2. Wiki Journal Club
3. 2 Minute Medicine
4. Visual abstract @ Visualmed

Summary by Duncan F. Moore, MD

Image Credit: Drj via Wikimedia Commons

Week 10 – CLOT

“Low-Molecular-Weight Heparin versus a Coumarin for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer”

by the Randomized Comparison of Low-Molecular-Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators

N Engl J Med. 2003 Jul 10;349(2):146-53. [free full text]

Malignancy is a pro-thrombotic state, and patients with cancer are at significant and sustained risk of venous thromboembolism (VTE) even when treated with warfarin. Warfarin is a suboptimal drug that requires careful monitoring, and its effective administration is challenging in the setting of cancer-associated difficulties with oral intake, end-organ dysfunction, and drug interactions. The 2003 CLOT trial was designed to evaluate whether treatment with low-molecular-weight heparin (LMWH) was superior to treatment with a vitamin K antagonist (VKA) in the prevention of recurrent VTE.

The study randomized adults with active cancer and newly diagnosed symptomatic DVT or PE to treatment with either subcutaneous dalteparin (200 IU/kg daily for 1 month, then 150 IU/kg daily for 5 months) or a vitamin K antagonist for 6 months (target INR 2.5, with a 5–7 day LMWH bridge). The primary outcome was the recurrence of symptomatic DVT or PE within 6 months of follow-up. Secondary outcomes included major bleeding, any bleeding, and all-cause mortality.
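The weight-based dosing of the dalteparin arm is simple arithmetic. A minimal sketch (the function name is ours, and this is illustration, not dosing guidance; real-world dosing is rounded to available syringe sizes per the product label):

```python
# Dosing arithmetic from the CLOT dalteparin arm described above:
# 200 IU/kg daily in month 1, then 150 IU/kg daily in months 2-6.

def clot_dalteparin_daily_dose_iu(weight_kg: float, month: int) -> float:
    if not 1 <= month <= 6:
        raise ValueError("the CLOT regimen covered 6 months of treatment")
    per_kg = 200 if month == 1 else 150
    return per_kg * weight_kg

# Example: a 70 kg patient receives 14,000 IU daily in month 1
# and 10,500 IU daily in months 2 through 6.
assert clot_dalteparin_daily_dose_iu(70, 1) == 14000
assert clot_dalteparin_daily_dose_iu(70, 4) == 10500
```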

338 patients were randomized to the LMWH group, and 338 were randomized to the VKA group. Baseline characteristics were similar between the two groups. 90% of patients had solid malignancies, and 67% of patients had metastatic disease. Within the VKA group, the INR was estimated to be therapeutic 46% of the time, subtherapeutic 30% of the time, and supratherapeutic 24% of the time. Within the six-month follow-up period, symptomatic VTE occurred in 8.0% of the dalteparin group and 15.8% of the VKA group (HR 0.48, 95% CI 0.30–0.77, p = 0.002; NNT = 12.9). The Kaplan-Meier estimate of recurrent VTE at 6 months was 9% in the dalteparin group and 17% in the VKA group. 6% of the dalteparin group developed major bleeding versus 4% of the VKA group (p = 0.27). 14% of the dalteparin group sustained any type of bleeding event versus 19% of the VKA group (p = 0.09). Mortality at 6 months was 39% in the dalteparin group versus 41% in the VKA group (p = 0.53).

In summary, treatment of VTE in cancer patients with low-molecular-weight heparin reduced the incidence of recurrent VTE relative to treatment with vitamin K antagonists. Notably, this reduction in VTE recurrence was not associated with a change in bleeding risk. However, it did not confer a mortality benefit either. This trial initiated a paradigm shift in the treatment of VTE in cancer. LMWH became the standard of care, although cost and convenience may have limited access and adherence to this treatment.

Until recently, no trial had directly compared a DOAC to LMWH in the prevention of recurrent VTE in malignancy. In an open-label, noninferiority trial, the Hokusai VTE Cancer Investigators demonstrated that the oral Xa inhibitor edoxaban (Savaysa) was noninferior to dalteparin with respect to a composite outcome of recurrent VTE or major bleeding. The 2018 SELECT-D trial compared rivaroxaban (Xarelto) to dalteparin and demonstrated a reduced rate of recurrence among patients treated with rivaroxaban (cumulative 6-month event rate of 4% versus 11%, HR 0.43, 95% CI 0.19–0.99) with no difference in rates of major bleeding but increased “clinically relevant nonmajor bleeding” within the rivaroxaban group.

Further Reading/References:
1. CLOT @ Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Treatment of venous thromboembolism in patients with malignancy”
4. Hokusai VTE Cancer Trial @ Wiki Journal Club
5. “Edoxaban for the Treatment of Cancer-Associated Venous Thromboembolism,” NEJM 2017
6. “Comparison of an Oral Factor Xa Inhibitor With Low Molecular Weight Heparin in Patients With Cancer With Venous Thromboembolism: Results of a Randomized Trial (SELECT-D).” J Clin Oncol 2018.

Summary by Duncan F. Moore, MD

Image Credit: By Westgate EJ, FitzGerald GA, CC BY 2.5, via Wikimedia Commons

Week 9 – NICE-SUGAR

“Intensive versus Conventional Glucose Control in Critically Ill Patients”

by the Normoglycemia in Intensive Care Evaluation–Survival Using Glucose Algorithm Regulation (NICE-SUGAR) investigators

N Engl J Med 2009;360:1283-97. [free full text]

On the wards we often hear 180 mg/dL used as the upper acceptable limit for blood glucose, with the understanding that tighter glucose control in inpatients can lead to more harm than benefit. The relevant evidence base comes from ICU populations, with scant direct data in non-ICU patients. The 2009 NICE-SUGAR study is the largest and best trial in this evidence base.

The study randomized ICU patients (expected to require 3 or more days of ICU-level care) to either “intensive” glucose control (target glucose 81 to 108 mg/dL) or conventional glucose control (target of less than 180 mg/dL). The primary outcome was 90-day all-cause mortality.
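The two arms differed only in their glucose target bands, which are easy to encode. A minimal sketch (names are ours, for illustration only), with the standard mg/dL-to-mmol/L conversion for readers used to SI units:

```python
# The NICE-SUGAR target bands and severe hypoglycemia threshold described
# in this summary. Illustration only, not a treatment algorithm.

def in_target(glucose_mg_dl: float, arm: str) -> bool:
    if arm == "intensive":
        return 81 <= glucose_mg_dl <= 108
    if arm == "conventional":
        return glucose_mg_dl < 180
    raise ValueError(f"unknown arm: {arm}")

def severe_hypoglycemia(glucose_mg_dl: float) -> bool:
    return glucose_mg_dl < 40  # the trial's severe hypoglycemia threshold

def mg_dl_to_mmol_l(glucose_mg_dl: float) -> float:
    """Standard conversion: glucose (mg/dL) / 18 ~= glucose (mmol/L)."""
    return glucose_mg_dl / 18.0

# The intensive target of 81-108 mg/dL corresponds to roughly 4.5-6.0 mmol/L.
assert round(mg_dl_to_mmol_l(108), 1) == 6.0
```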

6104 patients were randomized to the two arms, and both groups had similar baseline characteristics. 27.5% of patients in the intensive-control group died versus 24.9% in the conventional-control group (OR 1.14, 95% CI 1.02–1.28, p = 0.02). Severe hypoglycemia (< 40 mg/dL) occurred in 6.8% of intensive-control patients but only 0.5% of conventional-control patients.

In conclusion, intensive glucose control increases mortality in ICU patients. The fact that only 20% of these patients had diabetes mellitus suggests that much of the hyperglycemia treated in this study (97% of the intensive group and 69% of the conventional group received insulin) was from stress, critical illness, and corticosteroid use. For ICU patients, intensive insulin therapy is clearly harmful, but the ideal target glucose range remains controversial and by expert opinion appears to be 140–180 mg/dL. For non-ICU inpatients with or without diabetes mellitus, the ideal glucose target is also unclear – the ADA recommends 140–180 mg/dL, and the Endocrine Society recommends a pre-meal target of < 140 mg/dL and random levels < 180 mg/dL.

References / Further Reading:
1. ADA Standards of Medical Care in Diabetes 2016 (skip to page S99)
2. Wiki Journal Club
3. Visual Abstract @ VisualMed

Summary by Duncan F. Moore, MD

Image Credit: Dietmar Rabich / Wikimedia Commons / “Würfelzucker — 2018 — 3564” / CC BY-SA 4.0