Week 19 – COPERNICUS

“Effect of carvedilol on survival in severe chronic heart failure”

by the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study Group

N Engl J Med. 2001 May 31;344(22):1651-8. [free full text]

We are all familiar with the role of beta-blockers in the management of heart failure with reduced ejection fraction. In the late 1990s, a growing body of excellent RCTs demonstrated that metoprolol succinate, bisoprolol, and carvedilol improved morbidity and mortality in patients with mild to moderate HFrEF. However, the only trial of beta-blockade (with bucindolol) in patients with severe HFrEF failed to demonstrate a mortality benefit. In 2001, the COPERNICUS trial further elucidated the mortality benefit of carvedilol in patients with severe HFrEF.

The study enrolled patients with severe CHF (NYHA class III-IV symptoms and LVEF < 25%) despite “appropriate conventional therapy” and randomized them to protocolized uptitration of either carvedilol or placebo, each added to the patient’s usual medications. The major outcomes measured were all-cause mortality and the combined risk of death or hospitalization for any cause.

2289 patients were randomized before the trial was stopped early due to a greater-than-expected survival benefit in the carvedilol arm. Mean follow-up was 10.4 months. Regarding mortality, 190 (16.8%) of placebo patients died, while only 130 (11.2%) of carvedilol patients died (p = 0.0014, NNT = 17.9). Regarding mortality or hospitalization, 507 (44.7%) of placebo patients died or were hospitalized, versus only 425 (36.8%) of carvedilol patients (NNT = 12.6). Both outcomes were of similar direction and magnitude across subgroup analyses (age, sex, LVEF above or below 20%, ischemic vs. non-ischemic CHF, study site location, and presence or absence of a CHF hospitalization within the year preceding randomization).
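
These NNTs are simply the reciprocal of the absolute risk reduction. A minimal sketch of that arithmetic using the event rates above (the function and variable names are mine, for illustration only):

```python
def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    absolute_risk_reduction = control_event_rate - treatment_event_rate
    return 1 / absolute_risk_reduction

# Event rates reported above (placebo vs. carvedilol)
print(round(nnt(0.168, 0.112), 1))  # mortality: 17.9
print(round(nnt(0.447, 0.368), 1))  # death or hospitalization: 12.7 (12.6 with unrounded rates)
```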

Implication/Discussion:
In severe HFrEF, carvedilol significantly reduces mortality and hospitalization risk.

This was a straightforward, well-designed, double-blind RCT with a compelling conclusion. In addition, the dropout rate was higher in the placebo arm than the carvedilol arm! Despite longstanding clinician fears that beta-blockade would be ineffective or even harmful in patients with already advanced (but compensated) HFrEF, this trial definitively established the role for beta-blockade in such patients.

Per the 2013 ACCF/AHA guidelines, “use of one of the three beta blockers proven to reduce mortality (e.g. bisoprolol, carvedilol, and sustained-release metoprolol succinate) is recommended for all patients with current or prior symptoms of HFrEF, unless contraindicated.”

Please note that there are two COPERNICUS publications. This is the initial report (NEJM 2001), which presents only the mortality and mortality-plus-hospitalization results of a highly anticipated trial that was terminated early because of the observed mortality benefit. A year later, the full results were published in Circulation, which described findings such as a decreased number of hospitalizations, fewer total hospitalization days, fewer days hospitalized for CHF, improved subjective scores, and fewer serious adverse events (e.g. sudden death, cardiogenic shock, VT) in the carvedilol arm.

Further Reading/References:
1. 2013 ACCF/AHA Guideline for the Management of Heart Failure
2. 2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure
3. COPERNICUS, 2002 Circulation version
4. Wiki Journal Club (describes 2001 NEJM, cites 2002 Circulation)
5. 2 Minute Medicine (describes and cites 2002 Circulation)

Summary by Duncan F. Moore, MD

Week 18 – Early Palliative Care in NSCLC

“Early Palliative Care for Patients with Metastatic Non-Small-Cell Lung Cancer”

N Engl J Med. 2010 Aug 19;363(8):733-42 [free full text]

Ideally, palliative care improves a patient’s quality of life while facilitating appropriate usage of healthcare resources. However, initiating palliative care late in a disease course or in the inpatient setting may limit these beneficial effects. This 2010 study by Temel et al. sought to demonstrate benefits of early integrated palliative care on patient-reported quality-of-life (QoL) outcomes and resource utilization.

The study enrolled outpatients with metastatic NSCLC diagnosed within the previous 8 weeks and an ECOG performance status of 0-2 and randomized them to either “early palliative care” (a visit with a palliative care physician or advanced practice nurse within 3 weeks of enrollment and at least monthly thereafter) or to standard oncologic care. The primary outcome was the change in the Trial Outcome Index (TOI) from baseline to 12 weeks.

TOI = sum of the lung cancer, physical well-being, and functional well-being subscales of the Functional Assessment of Cancer Therapy–Lung (FACT-L) scale (scale range 0-84, higher score = better function)
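
As a minimal sketch of how the TOI is assembled, assuming (as is standard for FACT-L) that each of the three subscales spans 0-28, consistent with the 0-84 total above; the function is illustrative only:

```python
def trial_outcome_index(lung_cancer: int, physical_wellbeing: int, functional_wellbeing: int) -> int:
    """TOI = lung cancer subscale + physical well-being + functional well-being (range 0-84)."""
    for subscale in (lung_cancer, physical_wellbeing, functional_wellbeing):
        assert 0 <= subscale <= 28, "each subscale is assumed to span 0-28"
    return lung_cancer + physical_wellbeing + functional_wellbeing

# Example: subscale scores of 20, 22, and 18 give a TOI of 60
print(trial_outcome_index(20, 22, 18))
```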

Secondary outcomes included:

  1. change in FACT-L score at 12 weeks (scale range 0-136)
  2. change in lung cancer subscale of FACT-L at 12 weeks (scale range 0-28)
  3. “aggressive care,” meaning one of the following: chemo within 14 days before death, lack of hospice care, or admission to hospice ≤ 3 days before death
  4. documentation of resuscitation preference in outpatient records
  5. prevalence of depression at 12 weeks per HADS and PHQ-9
  6. median survival

151 patients were randomized. Palliative-care patients (n=77) had a mean TOI increase of 2.3 points vs. a 2.3-point decrease in the standard-care group (n=73) (p=0.04). Median survival was 11.6 months in the palliative group vs. 8.9 months in the standard group (p=0.02). (See Figure 3 on page 741 for the Kaplan-Meier curve.) Prevalence of depression at 12 weeks per PHQ-9 was 4% in palliative patients vs. 17% in standard patients (p = 0.04). Aggressive end-of-life care was received by 33% of palliative patients vs. 53% of standard patients (p=0.05). Resuscitation preferences were documented in 53% of palliative patients vs. 28% of standard patients (p=0.05). There was no significant group difference in the change in FACT-L score or lung cancer subscale score at 12 weeks.

Implication/Discussion:
Early palliative care in patients with metastatic non-small cell lung cancer improved quality of life and mood, decreased aggressive end-of-life care, and improved survival. This is a landmark study, both for its quantification of the QoL benefits of palliative intervention and for its seemingly counterintuitive finding that early palliative care actually improved survival.

The authors hypothesized that the demonstrated QoL and mood improvements may have led to the increased survival, as prior studies had associated lower QoL and depressed mood with decreased survival. However, I find more compelling their hypotheses that “the integration of palliative care with standard oncologic care may facilitate the optimal and appropriate administration of anticancer therapy, especially during the final months of life” and earlier referral to a hospice program may result in “better management of symptoms, leading to stabilization of [the patient’s] condition and prolonged survival.”

In practice, this study and those that followed have further spurred the integration of palliative care into many standard outpatient oncology workflows, including features such as co-located palliative care teams and palliative-focused checklists/algorithms for primary oncology providers. Of note, in the inpatient setting, a recent meta-analysis concluded that early hospital palliative care consultation was associated with a $3200 reduction in direct hospital costs ($4250 in the subgroup of patients with cancer).

Further Reading/References:
1. ClinicalTrials.gov
2. Wiki Journal Club
3. Profile of first author Dr. Temel
4. “Economics of Palliative Care for Hospitalized Adults with Serious Illness: A Meta-analysis” JAMA Internal Medicine (2018)
5. UpToDate, “Benefits, services, and models of subspecialty palliative care”

Summary by Duncan F. Moore, MD

Week 17 – 4S

“Randomised trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S)”

Lancet. 1994 Nov 19;344(8934):1383-9 [free full text]

Statins are an integral part of modern primary and secondary prevention of atherosclerotic cardiovascular disease (ASCVD). Hypercholesterolemia is regarded as a major contributory factor to the development of atherosclerosis, and in the 1980s, a handful of clinical trials demonstrated reduction in MI/CAD incidence with cholesterol-lowering agents, such as cholestyramine and gemfibrozil. However, neither drug demonstrated a mortality benefit. By the late 1980s, there was much hope that the emerging drug class of HMG-CoA reductase inhibitors (statins) would confer a mortality benefit, given their previously demonstrated LDL-lowering effects. The 1994 Scandinavian Simvastatin Survival Study was the first large clinical trial to assess this hypothesis.

4444 adults ages 35-70 with a history of angina pectoris or MI and elevated serum total cholesterol (212-309 mg/dL) were recruited from 94 clinical centers in Scandinavia (and in Finland, which is technically a Nordic country but not a Scandinavian country…) and randomized to treatment with either simvastatin 20 mg PO qPM or placebo. Dosage was increased at 12 weeks and 6 months to target a serum total cholesterol of 124 to 201 mg/dL. (Placebo patients were randomly uptitrated as well.) The primary endpoint was all-cause mortality. The secondary endpoint was time to first “major coronary event,” which included coronary deaths, nonfatal MI, resuscitated cardiac arrest, and definite silent MI per EKG.

The study was stopped early in 1994 after an interim analysis demonstrated a significant survival benefit in the treatment arm. At a mean 5.4 years of follow-up, 256 (12%) in the placebo group versus 182 (8%) in the simvastatin group had died (RR 0.70, 95% CI 0.58-0.85, p=0.0003, NNT = 30.1). The mortality benefit was driven exclusively by a reduction in coronary deaths. Dropout rates were similar (13% of placebo group and 10% of simvastatin group). The secondary endpoint, occurrence of a major coronary event, occurred in 622 (28%) of the placebo group and 431 (19%) of the simvastatin group (RR 0.66, 95% CI 0.59-0.75, p < 0.00001). Subgroup analyses of women and patients aged 60+ demonstrated similar findings for the primary and secondary outcomes. Over the entire course of the study, the average changes in lipid values from baseline in the simvastatin group were -25% total cholesterol, -35% LDL, +8% HDL, and -10% triglycerides. The corresponding percent changes from baseline in the placebo group were +1%, +1%, +1%, and +7%, respectively.

In conclusion, simvastatin therapy reduced mortality in patients with known CAD and hypercholesterolemia via reduction of major coronary events. This was a large, well-designed, double-blind RCT that ushered in the era of widespread statin use for secondary, and eventually primary, prevention of ASCVD. For further information about modern guidelines for the use of statins, please see the 2013 “ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults” and the 2016 USPSTF guideline “Statin use for the Primary Prevention of Cardiovascular Disease in Adults: Preventive Medication”.

Finally, for history buffs interested in a brief history of the discovery and development of this drug class, please see this paper by Akira Endo.

References / Additional Reading:
1. 4S @ Wiki JournalClub
2. “2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults”
3. “Statin use for the Primary Prevention of Cardiovascular Disease in Adults: Preventive Medication” (2016)
4. UpToDate, “Society guideline links: Lipid disorders in adults”
5. “A historical perspective on the discovery of statins” (2010)

Summary by Duncan F. Moore, MD

Image Credit: Siol, CC BY-SA 3.0, via Wikimedia Commons

Week 16 – MELD

“A Model to Predict Survival in Patients With End-Stage Liver Disease”

Hepatology. 2001 Feb;33(2):464-70. [free full text]

Prior to the adoption of the Model for End-Stage Liver Disease (MELD) score for the allocation of liver transplants, the determination of medical urgency was dependent on the Child-Pugh score. The Child-Pugh score was limited by the inclusion of two subjective variables (severity of ascites and severity of encephalopathy), limited discriminatory ability, and a ceiling effect of laboratory abnormalities. Stakeholders sought an objective, continuous, generalizable index that more accurately and reliably represented disease severity. The MELD score had originally been developed in 2000 to estimate the survival of patients undergoing TIPS. The authors of this 2001 study hypothesized that the MELD score would accurately estimate short-term survival in a wide range of severities and etiologies of liver dysfunction and thus serve as a suitable replacement measure for the Child-Pugh score in the determination of medical urgency in transplant allocation.

This study reported a series of four retrospective validation cohorts for the use of MELD in prediction of mortality in advanced liver disease. The index MELD score was calculated for each patient. Death during follow-up was assessed by chart review.

MELD score = 3.8×ln(bilirubin [mg/dL]) + 11.2×ln(INR) + 9.6×ln(creatinine [mg/dL]) + 6.4×(etiology: 0 if cholestatic or alcoholic, 1 otherwise)
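
A minimal sketch of this original (pre-UNOS) formula, assuming bilirubin and creatinine in mg/dL as above (the function name and the worked example are mine):

```python
import math

def meld_original(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
                  cholestatic_or_alcoholic: bool) -> float:
    """Original MELD: 3.8*ln(bili) + 11.2*ln(INR) + 9.6*ln(Cr) + 6.4*(etiology term)."""
    etiology = 0 if cholestatic_or_alcoholic else 1
    return (3.8 * math.log(bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine_mg_dl)
            + 6.4 * etiology)

# Example: bilirubin 3.0 mg/dL, INR 1.5, creatinine 2.0 mg/dL, viral etiology -> ~21.8
print(round(meld_original(3.0, 1.5, 2.0, cholestatic_or_alcoholic=False), 1))
```

(The modified version later adopted by UNOS, mentioned below, drops the etiology term and bounds the laboratory inputs.)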

The primary study outcome was the concordance c-statistic between MELD score and 3-month survival. The c-statistic is equivalent to the area under the receiver operating characteristic curve (AUROC). Per the authors, “a c-statistic between 0.8 and 0.9 indicates excellent diagnostic accuracy and a c-statistic greater than 0.7 is generally considered as a useful test.” (See page 455 for further explanation.) There was no reliable head-to-head comparison statistic (e.g. the c-statistic of MELD vs. that of Child-Pugh in all groups).
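
Because the c-statistic here is simply the AUROC for the binary 3-month outcome, it can be computed from a vector of MELD scores and observed outcomes. A minimal sketch with toy data (not the study’s data), using scikit-learn:

```python
from sklearn.metrics import roc_auc_score

# Toy data only: 1 = died within 3 months, 0 = survived
observed_death = [0, 0, 1, 0, 1, 1, 0, 1]
meld_scores    = [8, 12, 25, 10, 30, 22, 14, 19]

# The AUROC of the score against the binary outcome is the c-statistic
print(roc_auc_score(observed_death, meld_scores))
```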

C-statistic for 3-month survival in the four cohorts ranged from 0.78 to 0.87 (no 95% CIs exceeded 1.0). There was minimal improvement in the c-statistics for 3-month survival with the individual addition of spontaneous bacterial peritonitis, variceal bleed, ascites, and encephalopathy to the MELD score (see Table 4, highest increase in c-statistic was 0.03). When the etiology of liver disease was excluded from the MELD score, there was minimal change in the c-statistics (see Table 5, all paired CIs overlap). C-statistics for 1-week mortality ranged from 0.80 to 0.95.

In conclusion, the MELD score is an excellent predictor of short-term mortality in patients with end-stage liver disease of diverse etiology and severity. Despite its retrospective design, this study represented a significant improvement upon the Child-Pugh score in determining medical urgency in patients who require liver transplant. In 2002, the United Network for Organ Sharing (UNOS) adopted a modified version of the MELD score for the prioritization of deceased-donor liver transplants in cirrhosis. Concurrent with the 2001 publication of this study, Wiesner et al. performed a prospective validation of the use of MELD in the allocation of liver transplantation. When published in 2003, it demonstrated that the MELD score accurately predicted 3-month mortality among patients with chronic liver disease on the waitlist. The MELD score has also been validated in other conditions such as alcoholic hepatitis, hepatorenal syndrome, and acute liver failure (see UpToDate). Modifications to the MELD score have been introduced over the years. In 2006, the MELD Exception Guidelines offered extra points for severe comorbidities (e.g. HCC, hepatopulmonary syndrome). In January 2016, the MELD-Na score was adopted and is now used for liver transplant prioritization.

References and Further Reading:
1. “A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts” (2000)
2. MDCalc “MELD Score”
3. Wiesner et al. “Model for end-stage liver disease (MELD) and allocation of donor livers” (2003)
4. Freeman Jr. et al. “MELD exception guidelines” (2006)
5. 2 Minute Medicine
6. UpToDate “Model for End-stage Liver Disease (MELD)”

Image Credit: Ed Uthman, CC-BY-2.0, via WikiMedia Commons

Week 15 – CHADS2

“Validation of Clinical Classification Schemes for Predicting Stroke”

JAMA. 2001 June 13;285(22):2864-70. [free full text]

Atrial fibrillation is the most common cardiac arrhythmia and affects 1-2% of the overall population, with increasing prevalence as people age. It also carries substantial morbidity and mortality due to the risk of stroke and thromboembolism, although the risk of embolic phenomena varies widely across subpopulations. In 2001, the only oral antithrombotic options available were warfarin and aspirin, which had relative risk reductions of 62% and 22%, respectively, consistent across these subpopulations. Clinicians felt that high-risk patients should be anticoagulated, but the two common classification schemes, AFI and SPAF, were flawed: patients were often classified as low risk in one scheme and high risk in the other, and the schemes had been derived retrospectively and were clinically ambiguous. Therefore, in 2001, a group of investigators combined the two existing schemes to create the CHADS2 scheme and applied it to a new data set.

Population (NRAF cohort): Hospitalized Medicare patients ages 65-95 with non-valvular AF not prescribed warfarin at hospital discharge.

Intervention: Determination of the CHADS2 score (1 point each for recent CHF, hypertension, age ≥ 75, and DM; 2 points for a history of stroke or TIA); a minimal scoring sketch follows this outline

Comparison: AFI and SPAF risk schemes

Measured Outcome: Hospitalization rates for ischemic stroke (per ICD-9 codes from Medicare claims), stratified by CHADS2 / AFI / SPAF scores.

Calculated Outcome: performance of the various schemes, based on c statistic (a measure of predictive accuracy in a binary logistic regression model)
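
A minimal sketch of the CHADS2 tally described in the outline above (the function and argument names are mine):

```python
def chads2(recent_chf: bool, hypertension: bool, age_75_or_older: bool,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for CHF, HTN, age >= 75, DM; 2 points for prior stroke/TIA."""
    score = sum([recent_chf, hypertension, age_75_or_older, diabetes])
    score += 2 if prior_stroke_or_tia else 0
    return score

# Example: an 80-year-old with hypertension and a prior TIA -> CHADS2 = 4
print(chads2(recent_chf=False, hypertension=True, age_75_or_older=True,
             diabetes=False, prior_stroke_or_tia=True))
```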

Results:
1733 patients were identified in the NRAF cohort. Compared to the AFI and SPAF trial populations, these patients tended to be older (81 in NRAF vs. 69 in AFI vs. 69 in SPAF), had a higher burden of CHF (56% vs. 22% vs. 21%), were more likely to be female (58% vs. 34% vs. 28%), and more often had a history of DM (23% vs. 15% vs. 15%) or prior stroke/TIA (25% vs. 17% vs. 8%). The stroke rate was lowest in the group with CHADS2 = 0 (1.9 per 100 patient-years, adjusted for the assumption that aspirin was not taken). The stroke rate increased by a factor of approximately 1.5 for each 1-point increase in the CHADS2 score.

CHADS2 score           NRAF Adjusted Stroke Rate per 100 Patient-Years
0                                      1.9
1                                      2.8
2                                      4.0
3                                      5.9
4                                      8.5
5                                      12.5
6                                      18.2

The CHADS2 scheme had a c statistic of 0.82 compared to 0.68 for the AFI scheme and 0.74 for the SPAF scheme.

Implication/Discussion
The CHADS2 scheme provides clinicians with a scoring system to help guide decision making for anticoagulation in patients with non-valvular AF.

The authors note that the application of the CHADS2 score could be useful in several clinical scenarios. First, it easily identifies patients at low risk of stroke (CHADS2 = 0) for whom anticoagulation with warfarin would probably not provide significant benefit. The authors argue that these patients should merely be offered aspirin. Second, the CHADS2 score could facilitate medication selection based on a patient-specific risk of stroke. Third, the CHADS2 score could help clinicians make decisions regarding anticoagulation in the perioperative setting by evaluating the risk of stroke against the hemorrhagic risk of the procedure. Although the CHADS2 is no longer the preferred risk-stratification scheme, the same concepts are still applicable to the more commonly used CHA2DS2-VASc.

This study had several strengths. First, the cohort was from seven states that represented all geographic regions of the United States. Second, CHADS2 was pre-specified based on previous studies and validated using the NRAF data set. Third, the NRAF data set was obtained from actual patient chart review as opposed to purely from an administrative database. Finally, the NRAF patients were older and sicker than those of the AFI and SPAF cohorts, and thus the CHADS2 appears to be generalizable to the very large demographic of frail, elderly Medicare patients.

As CHADS2 became widely used clinically in the early 2000s, its application to other cohorts generated a large intermediate-risk group (CHADS2 = 1), which was sometimes > 60% of the cohort (though in the NRAF cohort, CHADS2 = 1 accounted for 27% of the cohort). In clinical practice, this intermediate-risk group was to be offered either warfarin or aspirin. Clearly, a clinical-risk predictor that does not provide clear guidance in over 50% of patients needs to be improved. As a result, the CHA2DS2-VASc scoring system was developed from the Birmingham 2009 scheme. When compared head-to-head in registry data, CHA2DS2-VASc more effectively discriminated stroke risk among patients with a baseline CHADS2 score of 0 to 1. Because of this, CHA2DS2-VASc is the recommended risk stratification scheme in the AHA/ACC/HRS 2014 Practice Guideline for Atrial Fibrillation. In modern practice, anticoagulation is unnecessary when CHA2DS2-VASc score = 0, should be considered (vs. antiplatelet or no treatment) when score = 1, and is recommended when score ≥ 2.

Further Reading:
1. AHA/ACC/HRS 2014 Practice Guideline for Atrial Fibrillation
2. CHA2DS2-VASc (2010)
3. 2 Minute Medicine

Summary by Ryan Commins, MD

Image Credit: Alisa Machalek, NIGMS/NIH – National Institute of General Medical Sciences, Public Domain

Week 14 – CURB-65

“Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study”

Thorax. 2003 May;58(5):377-82. [free full text]

Community-acquired pneumonia (CAP) is frequently encountered by the admitting medicine team. Ideally, the patient’s severity at presentation and risk for further decompensation should determine the appropriate setting for further care, whether as an outpatient, on an inpatient ward, or in the ICU. At the time of this 2003 study, the predominant decision aid was the 20-variable Pneumonia Severity Index. The authors of this study sought to develop a simpler decision aid for determining the appropriate level of care at presentation.

The study examined the 30-day mortality rates of adults admitted for CAP via the ED at three non-US academic medical centers (data from three previous CAP cohort studies). 80% of the dataset was analyzed as a derivation cohort – meaning it was used to identify statistically significant, clinically relevant prognostic factors that allowed for mortality risk stratification. The resulting model was applied to the remaining 20% of the dataset (the validation cohort) in order to assess the accuracy of its predictive ability.

The following variables were integrated into the final model (CURB-65); a minimal scoring sketch follows the list:

  1. Confusion
  2. Urea > 19mg/dL (7 mmol/L)
  3. Respiratory rate ≥ 30 breaths/min
  4. low Blood pressure (systolic BP < 90 mmHg or diastolic BP ≤ 60 mmHg)
  5. age ≥ 65
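
A minimal sketch of the CURB-65 tally, one point per criterion listed above (the function and argument names are mine):

```python
def curb65(confusion: bool, urea_mg_dl: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """One point per CURB-65 criterion, as listed above."""
    return sum([
        confusion,
        urea_mg_dl > 19,                          # urea > 19 mg/dL (7 mmol/L)
        resp_rate >= 30,
        systolic_bp < 90 or diastolic_bp <= 60,   # low blood pressure
        age >= 65,
    ])

# Example: 72-year-old, RR 32, BP 85/50, urea 25 mg/dL, not confused -> score 4
print(curb65(confusion=False, urea_mg_dl=25, resp_rate=32,
             systolic_bp=85, diastolic_bp=50, age=72))
```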

1068 patients were analyzed. 821 (77%) were in the derivation cohort. 86% of patients received IV antibiotics, 5% were admitted to the ICU, and 4% were intubated. 30-day mortality was 9%. 9 of 11 clinical features examined in univariate analysis were statistically significant (see Table 2).

Ultimately, using the above-described CURB-65 model, in which 1 point is assigned for each clinical characteristic, patients with a CURB-65 score of 0 or 1 had 1.5% mortality, patients with a score of 2 had 9.2% mortality, and patients with a score of 3 or more had 22% mortality. Similar values were demonstrated in the validation cohort. Table 5 summarizes the sensitivity, specificity, PPVs, and NPVs of each CURB-65 score for 30-day mortality in both cohorts. As we would expect from a good predictive model, the sensitivity starts out very high and decreases with increasing score, while the specificity starts out very low and increases with increasing score. For the clinical application of their model, the authors selected the cut points of 1, 2, and 3 (see Figure 2).

In conclusion, CURB-65 is a simple 5-variable decision aid that is helpful in the initial stratification of mortality risk in patients with CAP.

The wide range of specificities and sensitivities at different values of the CURB-65 score makes it a robust tool for risk stratification. The authors felt that patients with a score of 0-1 were “likely suitable for home treatment,” patients with a score of 2 should have “hospital-supervised treatment,” and patients with a score of ≥ 3 had “severe pneumonia” and should be admitted (with consideration of ICU admission for a score of 4 or 5).

Following the publication of the CURB-65 Score, the author of the Pneumonia Severity Index (PSI) published a prospective cohort study of CAP that examined the discriminatory power (area under the receiver operating characteristic curve) of the PSI vs. CURB-65. His study found that the PSI “has a higher discriminatory power for short-term mortality, defines a greater proportion of patients at low risk, and is slightly more accurate in identifying patients at low risk” than the CURB-65 score.

Expert opinion at UpToDate prefers the PSI over the CURB-65 score based on its more robust base of confirmatory evidence. Of note, the author of the PSI is one of the authors of the relevant UpToDate article. In an important contrast from the CURB-65 authors, these experts suggest that patients with a CURB-65 score of 0 be managed as outpatients, while patients with a score of 1 and above “should generally be admitted.”

Further Reading/References:
1. Original publication of the PSI, NEJM (1997)
2. PSI vs. CURB-65 (2005)
3. Wiki Journal Club
4. 2 Minute Medicine
5. UpToDate, “CAP in adults: assessing severity and determining the appropriate level of care”

Summary by Duncan F. Moore, MD

Image Credit: by Christaras A, CC BY-SA 3.0

Week 13 – Sepsis-3

“The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)”

JAMA. 2016 Feb 23;315(8):801-10. [free full text]

In practice, we recognize sepsis as a potentially life-threatening condition that arises secondary to infection. Because the SIRS criteria were of limited sensitivity and specificity in identifying sepsis and because our understanding of the pathophysiology of sepsis had purportedly advanced significantly during the interval since the last sepsis definition, an international task force of 19 experts was convened to define and prognosticate sepsis more effectively. The resulting 2016 Sepsis-3 definition was the subject of immediate and sustained controversy.

In the words of Sepsis-3, sepsis simply “is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection.” The paper further defines organ dysfunction as a change in the SOFA score of 2 or more points. However, the authors state that “the SOFA score is not intended to be used as a tool for patient management but as a means to clinically characterize a septic patient.” The authors note that qSOFA, an easier tool introduced in this paper, can promptly identify at the bedside patients “with suspected infection who are likely to have a prolonged ICU stay or die in the hospital.” A positive qSOFA screen is defined as 2 or more of the following: altered mental status, SBP ≤ 100 mmHg, or respiratory rate ≥ 22/min. At the time of this endorsement of qSOFA, the tool had not been validated prospectively. Finally, septic shock was defined as sepsis with persistent hypotension requiring vasopressors to maintain MAP ≥ 65 mmHg and with a serum lactate > 2 mmol/L despite adequate volume resuscitation.
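
A minimal sketch of the qSOFA screen as just described (the thresholds are those stated above; the function name is mine):

```python
def qsofa_positive(altered_mentation: bool, systolic_bp: int, resp_rate: int) -> bool:
    """Positive qSOFA screen = at least 2 of: altered mentation, SBP <= 100 mmHg, RR >= 22/min."""
    points = sum([altered_mentation, systolic_bp <= 100, resp_rate >= 22])
    return points >= 2

# Example: alert patient with SBP 95 mmHg and RR 24/min -> positive screen
print(qsofa_positive(altered_mentation=False, systolic_bp=95, resp_rate=24))
```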

As noted contemporaneously in the excellent PulmCrit blog post “Top ten problems with the new sepsis definition,” Sepsis-3 was not endorsed by the American College of Chest Physicians, the IDSA, any emergency medicine society, or any hospital medicine society. On behalf of the American College of Chest Physicians, Dr. Simpson published a scathing rejection of Sepsis-3 in Chest in May 2016. He noted “there is still no known precise pathophysiological feature that defines sepsis.” He went on to state “it is not clear to us that readjusting the sepsis criteria to be more specific for mortality is an exercise that benefits patients,” and said “to abandon one system of recognizing sepsis [SIRS] because it is imperfect and not yet in universal use for another system that is used even less seems unwise without prospective validation of that new system’s utility.”

In fact, the later validation of qSOFA demonstrated that the SIRS criteria had superior sensitivity for predicting in-hospital mortality while qSOFA had higher specificity. See the following posts at PulmCrit for further discussion: [https://emcrit.org/isepsis/isepsis-sepsis-3-0-much-nothing/] [https://emcrit.org/isepsis/isepsis-sepsis-3-0-flogging-dead-horse/].

At UpToDate, authors note that “data of the value of qSOFA is conflicting,” and because of this, “we believe that further studies that demonstrate improved clinically meaningful outcomes due to the use of qSOFA compared to clinical judgement are warranted before it can be routinely used to predict those at risk of death from sepsis.”

Additional Reading:
1. PulmCCM, “Simple qSOFA score predicts sepsis as well as anything else”
2. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Image Credit: By Mark Oniffrey – Own work, CC BY-SA 4.0

Week 12 – Rivers Trial

“Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock”

N Engl J Med. 2001 Nov 8;345(19):1368-77. [free full text]

Sepsis is common and, in its more severe manifestations, confers a high mortality risk. Fundamentally, sepsis is a global mismatch between oxygen demand and delivery. Around the time of this seminal study by Rivers et al., there was increasing recognition of the concept of the “golden hour” in sepsis management – “where definitive recognition and treatment provide maximal benefit in terms of outcome” (1368). Rivers and his team created a “bundle” of early sepsis interventions that targeted preload, afterload, and contractility, dubbed early goal-directed therapy (EGDT). They evaluated this bundle’s effect on mortality and end-organ dysfunction.

The “Rivers trial” randomized adults presenting to a single US academic center ED with ≥ 2 SIRS criteria and either SBP ≤ 90 mmHg after a crystalloid challenge of 20-30 ml/kg over 30 min or lactate > 4 mmol/L to either treatment with the EGDT bundle or to the standard of care.

Intervention: early goal-directed therapy (EGDT)

  • Received a central venous catheter with continuous central venous O2 saturation (ScvO2) measurement
  • Treated according to EGDT protocol (see Figure 2, the outline below, and the sketch after this list) in ED for at least six hours
    • 500 ml bolus of crystalloid q30min to achieve CVP 8-12 mmHg
    • Vasopressors to achieve MAP ≥ 65 mmHg
    • Vasodilators to achieve MAP ≤ 90 mmHg
    • If ScvO2 < 70%, transfuse RBCs to achieve Hct ≥ 30%
    • If, after CVP, MAP, and Hct were optimized as above and ScvO2 remained < 70%, dobutamine was added and uptitrated to achieve ScvO2 ≥ 70 or until max dose 20 μg/kg/min
      • dobutamine was de-escalated if MAP < 65 or HR > 120
    • Patients in whom hemodynamics could not be optimized were intubated and sedated, in order to decrease oxygen consumption
  • Patients were transferred to inpatient ICU bed as soon as able, and upon transfer ScvO2 measurement was discontinued
  • Inpatient team was blinded to treatment group assignment
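
As a rough illustration of the protocol’s sequential logic (CVP, then MAP, then ScvO2), here is a minimal sketch; it is not a reproduction of the trial’s full algorithm, and the function and argument names are mine:

```python
def egdt_next_step(cvp_mmhg: float, map_mmhg: float, scvo2_pct: float, hct_pct: float) -> str:
    """Return the next EGDT intervention per the sequential targets outlined above."""
    if cvp_mmhg < 8:
        return "500 ml crystalloid bolus q30min until CVP 8-12 mmHg"
    if map_mmhg < 65:
        return "start/uptitrate vasopressors to MAP >= 65 mmHg"
    if map_mmhg > 90:
        return "start vasodilators to MAP <= 90 mmHg"
    if scvo2_pct < 70:
        if hct_pct < 30:
            return "transfuse RBCs to hematocrit >= 30%"
        return "add/uptitrate dobutamine (max 20 ug/kg/min) to ScvO2 >= 70%"
    return "targets met; continue monitoring"

# Example: CVP 10, MAP 70, ScvO2 65%, Hct 27% -> transfuse RBCs
print(egdt_next_step(10, 70, 65, 27))
```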

The primary outcome was in-hospital mortality. Secondary endpoints included: resuscitation end points, organ-dysfunction scores, coagulation-related variables, administered treatments, and consumption of healthcare resources.

130 patients were randomized to EGDT, and 133 to standard therapy. There were no differences in baseline characteristics. There was no group difference in the prevalence of antibiotics given within the first 6 hours. Standard-therapy patients spent 6.3 ± 3.2 hours in the ED, whereas EGDT patients spent 8.0 ± 2.1 (p < 0.001).

In-hospital mortality was 46.5% in the standard-therapy group, and 30.5% in the EGDT group (p = 0.009, NNT 6.25). 28-day and 60-day mortalities were also improved in the EGDT group. See Table 3.

During the initial six hours of resuscitation, there was no significant group difference in mean heart rate or CVP. MAP was higher in the EGDT group (p < 0.001), but all patients in both groups reached a MAP ≥ 65. ScvO2 ≥ 70% was met by 60.2% of standard-therapy patients and 94.9% of EGDT patients (p < 0.001). A combination endpoint of achievement of CVP, MAP, and UOP (≥ 0.5cc/kg/hr) goals was met by 86.1% of standard-therapy patients and 99.2% of EGDT patients (p < 0.001). Standard-therapy patients had lower ScvO2 and greater base deficit, while lactate and pH values were similar in both groups.

During the period of 7 to 72 hours, the organ-dysfunction scores of APACHE II, SAPS II, and MODS were higher in the standard-therapy group (see Table 2). The prothrombin time, fibrin-split products concentration, and d-dimer concentrations were higher in the standard-therapy group, while PTT, fibrinogen concentration, and platelet counts were similar.

During the initial six hours, EGDT patients received significantly more fluids, pRBCs, and inotropic support than standard-therapy patients. Rates of vasopressor use and mechanical ventilation were similar. During the period of 7 to 72 hours, standard-therapy patients received more fluids, pRBCs, and vasopressors than the EGDT group, and they were more likely to be intubated and to have pulmonary-artery catheterization. Rates of inotrope use were similar. Overall, during the first 72 hrs, standard-therapy patients were more likely to receive vasopressors, be intubated, and undergo pulmonary-artery catheterization. EGDT patients were more likely to receive pRBC transfusion. There was no group difference in total volume of fluid administration or inotrope use. Regarding utilization, there were no group differences in mean duration of vasopressor therapy, mechanical ventilation, or length of stay. Among patients who survived to discharge, standard-therapy patients spent longer in the hospital than EGDT patients (18.4 ± 15.0 vs. 14.6 ± 14.5 days, respectively, p = 0.04).

In conclusion, early goal-directed therapy reduced in-hospital mortality in patients presenting to the ED with severe sepsis or septic shock when compared with usual care. In their discussion, the authors note that “when early therapy is not comprehensive, the progression to severe disease may be well under way at the time of admission to the intensive care unit” (1376).

The Rivers trial has been cited over 10,500 times. It has been widely discussed and dissected for decades. Most importantly, it helped catalyze a then-ongoing paradigm shift of what “usual care” in sepsis is. As noted by our own Drs. Sonti and Vinayak and in their Georgetown Critical Care Top 40: “Though we do not use the ‘Rivers protocol’ as written, concepts (timely resuscitation) have certainly infiltrated our ‘standard of care’ approach.” The Rivers trial evaluated the effect of a bundle (multiple interventions). It was a relatively complex protocol, and it has been recognized that the transfusion of blood to Hgb > 10 may have caused significant harm. In aggregate, the most critical elements of the modern initial resuscitation in sepsis are early administration of antibiotics (notably not protocolized by Rivers) within the first hour and the aggressive administration of IV fluids (now usually 30cc/kg of crystalloid within the first 3 hours of presentation).

More recently, there have been three large RCTs of EGDT versus usual care and/or protocols that used some of the EGDT targets: ProCESS (2014, USA), ARISE (2014, Australia), and ProMISe (2015, UK). In general terms, EGDT provided no mortality benefit compared to usual care. Prospectively, the authors of these three trials planned a meta-analysis – the 2017 PRISM study – which concluded that “EGDT did not result in better outcomes than usual care and was associated with higher hospitalization costs across a broad range of patient and hospital characteristics.” Despite patients in the Rivers trial being sicker than those of ProCESS/ARISE/ProMISe, it was not found in the subgroup analysis of PRISM that EGDT was more beneficial in sicker patients. Overall, the PRISM authors noted that “it remains possible that general advances in the provision of care for sepsis and septic shock, to the benefit of all patients, explain part or all of the difference in findings between the trial by Rivers et al. and the more recent trials.”

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Life in The Fast Lane
4. Georgetown Critical Care Top 40
5. “A randomized trial of protocol-based care for early septic shock” (ProCESS). NEJM 2014.
6. “Goal-directed resuscitation for patients with early septic shock” (ARISE). NEJM 2014.
7. “Trial of early, goal-directed resuscitation for septic shock” (ProMISe). NEJM 2015.
8. “Early, Goal-Directed Therapy for Septic Shock – A Patient-level Meta-Analysis” PRISM. NEJM 2017.
9. Surviving Sepsis Campaign
10. UpToDate, “Evaluation and management of suspected sepsis and septic shock in adults”

Summary by Duncan F. Moore, MD

Image Credit: By Clinical_Cases, [CC BY-SA 2.5] via Wikimedia Commons

Week 11 – AFFIRM

“A Comparison of Rate Control and Rhythm Control in Patients with Atrial Fibrillation”

by the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) Investigators

N Engl J Med. 2002 Dec 5;347(23):1825-33. [free full text]

It seems like the majority of patients with atrial fibrillation that we encounter today in the inpatient setting are being treated with a rate-control strategy, as opposed to a rhythm-control strategy. There was a time when both approaches were considered acceptable, and perhaps rhythm control was even the preferred initial strategy. The AFFIRM trial was the landmark study to address this debate.

The trial randomized patients with atrial fibrillation (judged “likely to be recurrent”) aged 65 or older “or who had other risk factors for stroke or death” to either 1) a rhythm-control strategy with one or more drugs from a pre-specified list and/or cardioversion to achieve sinus rhythm or 2) a rate-control strategy with beta-blockers, CCBs, and/or digoxin to a target resting HR ≤ 80 and a six-minute walk test HR ≤ 110. The primary endpoint was death during follow-up. The major secondary endpoint was a composite of death, disabling stroke, disabling anoxic encephalopathy, major bleeding, and cardiac arrest.

4060 patients were randomized. Death occurred in 26.7% of rhythm-control patients versus 25.9% of rate-control patients (HR 1.15, 95% CI 0.99 – 1.34, p = 0.08). The composite secondary endpoint occurred in 32.0% of rhythm control-patients versus 32.7% of rate-control patients (p = 0.33). Rhythm-control strategy was associated with a higher risk of death among patients older than 65 and patients with CAD (see Figure 2). Additionally, rhythm-control patients were more likely to be hospitalized during follow-up (80.1% vs. 73.0%, p < 0.001) and to develop torsades de pointes (0.8% vs. 0.2%, p = 0.007).

This trial demonstrated that a rhythm-control strategy in atrial fibrillation offers no mortality benefit over a rate-control strategy. At the time of publication, the authors wrote that rate control was an “accepted, though often secondary alternative” to rhythm control. Their study clearly demonstrated that there was no significant mortality benefit to either strategy and that hospitalizations were greater in the rhythm-control group. Subgroup analysis suggested that rhythm control led to higher mortality among the elderly and those with CAD. Notably, 37.5% of rhythm-control patients had crossed over to a rate-control strategy by 5 years of follow-up, whereas only 14.9% of rate-control patients had switched over to rhythm control.

But what does this study mean for our practice today? Generally speaking, rate control is preferred in most patients, particularly the elderly and patients with CHF, whereas rhythm control may be pursued in patients with persistent symptoms despite rate control, patients unable to achieve rate control on AV nodal agents alone, and patients younger than 65. Both the AHA/ACC (2014) and the European Society of Cardiology (2016) guidelines have extensive recommendations that detail specific patient scenarios.

Further Reading / References:
1. Cardiologytrials.org
2. Wiki Journal Club
3. 2 Minute Medicine
4. Visual abstract @ Visualmed

Summary by Duncan F. Moore, MD

Image Credit: Drj via Wikimedia Commons

Week 10 – CLOT

“Low-Molecular-Weight Heparin versus a Coumarin for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer”

by the Randomized Comparison of Low-Molecular-Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators

N Engl J Med. 2003 Jul 10;349(2):146-53. [free full text]

Malignancy is a pro-thrombotic state, and patients with cancer are at significant and sustained risk of venous thromboembolism (VTE) even when treated with warfarin. Warfarin is a suboptimal drug that requires careful monitoring, and its effective administration is challenging in the setting of cancer-associated difficulties with oral intake, end-organ dysfunction, and drug interactions. The 2003 CLOT trial was designed to evaluate whether treatment with low-molecular-weight heparin (LMWH) was superior to treatment with a vitamin K antagonist (VKA) in the prevention of recurrent VTE.

The study randomized adults with active cancer and newly diagnosed symptomatic DVT or PE to treatment with either dalteparin subQ daily (200 IU/kg daily x1 month, then 150 IU/kg daily x5 months) or a vitamin K antagonist x6 months (target INR 2.5, with 5-7 day LMWH bridge). The primary outcome was the recurrence of symptomatic DVT or PE within 6 months of follow-up. Secondary outcomes included major bleed, any bleeding, and all-cause mortality.
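
As a minimal sketch of the weight-based dalteparin regimen described above (200 IU/kg daily in month 1, then 150 IU/kg daily in months 2-6; no dose cap is applied here, and the function is illustrative only):

```python
def dalteparin_daily_dose_iu(weight_kg: float, treatment_month: int) -> float:
    """Daily dalteparin dose per the CLOT regimen: 200 IU/kg in month 1, then 150 IU/kg."""
    iu_per_kg = 200 if treatment_month == 1 else 150
    return weight_kg * iu_per_kg

# Example: 70 kg patient
print(dalteparin_daily_dose_iu(70, treatment_month=1))  # 14000 IU
print(dalteparin_daily_dose_iu(70, treatment_month=3))  # 10500 IU
```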

338 patients were randomized to the LMWH group, and 338 were randomized to the VKA group. Baseline characteristics were similar among the two groups. 90% of patients had solid malignancies, and 67% of patients had metastatic disease. Within the VKA group, INR was estimated to be therapeutic 46% of the time, subtherapeutic 30% of the time, and supratherapeutic 24% of the time. Within the six-month follow-up period, symptomatic VTE occurred in 8.0% of the dalteparin group and 15.8% of the VKA group (HR 0.48, 95% CI 0.30-0.77, p=0.002; NNT = 12.9). The Kaplan-Meier estimate of recurrent VTE at 6 months was 9% in the dalteparin group and 17% in the VKA group. 6% of the dalteparin group developed major bleeding versus 6% of the VKA group (p = 0.27). 14% of the dalteparin group sustained any type of bleeding event versus 19% of the VKA group (p = 0.09). Mortality at 6 months was 39% in the dalteparin group versus 41% in the VKA group (p = 0.53).

In summary, treatment of VTE in cancer patients with low-molecular-weight heparin reduced the incidence of recurrent VTE relative to the incidence following treatment with vitamin K antagonists. Notably, this reduction in VTE recurrence was not associated with a change in bleeding risk. However, it did not translate into a mortality benefit either. This trial initiated a paradigm shift in the treatment of VTE in cancer. LMWH became the standard of care, although cost and convenience may have limited access and adherence to this treatment.

Until recently, no trial had directly compared a DOAC to LMWH in the prevention of recurrent VTE in malignancy. In an open-label, noninferiority trial, the Hokusai VTE Cancer Investigators demonstrated that the oral Xa inhibitor edoxaban (Savaysa) was noninferior to dalteparin with respect to a composite outcome of recurrent VTE or major bleeding. The 2018 SELECT-D trial compared rivaroxaban (Xarelto) to dalteparin and demonstrated a reduced rate of recurrence among patients treated with rivaroxaban (cumulative 6-month event rate of 4% versus 11%, HR 0.43, 95% CI 0.19–0.99) with no difference in rates of major bleeding but increased “clinically relevant nonmajor bleeding” within the rivaroxaban group.

Further Reading/References:
1. CLOT @ Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Treatment of venous thromboembolism in patients with malignancy”
4. Hokusai VTE Cancer Trial @ Wiki Journal Club
5. “Edoxaban for the Treatment of Cancer-Associated Venous Thromboembolism,” NEJM 2017
6. “Comparison of an Oral Factor Xa Inhibitor With Low Molecular Weight Heparin in Patients With Cancer With Venous Thromboembolism: Results of a Randomized Trial (SELECT-D).” J Clin Oncol 2018.

Summary by Duncan F. Moore, MD

Image Credit: By Westgate EJ, FitzGerald GA, CC BY 2.5, via Wikimedia Commons