Week 41 – Transfusion Strategies for Upper GI Bleeding

“Transfusion Strategies for Acute Upper Gastrointestinal Bleeding”

N Engl J Med. 2013 Jan 3;368(1):11-21. [free full text]

A restrictive transfusion threshold of 7 g/dL was established following the previously discussed 1999 TRICC trial. Notably, both TRICC and the later TRISS trial excluded patients with active bleeding. In 2013, Villanueva et al. performed a study to establish whether a restrictive transfusion strategy benefits patients with acute upper GI bleeding.

The study enrolled consecutive adults presenting to a single center in Spain with hematemesis (or bloody nasogastric aspirate), melena, or both. Notable exclusion criteria included: a clinical Rockall score* of 0 with a hemoglobin level higher than 12 g/dL, massive exsanguinating bleeding, lower GI bleeding, patient refusal of blood transfusion, ACS, stroke/TIA, transfusion within the preceding 90 days, and recent trauma or surgery.

*The Rockall score is a system to assess risk for further bleeding or death on a scale from 0-11. Higher scores (3-11) indicate higher risk. Of the 648 patients excluded, the most common reason for exclusion (n = 329) was low risk of bleeding.

Intervention: restrictive transfusion strategy (transfusion threshold Hgb = 7.0 g/dL) [n = 444]

Comparison: liberal transfusion strategy (transfusion threshold Hgb = 9.0 g/dL) [n = 445]

During randomization, patients were stratified by presence or absence of cirrhosis.

As part of the study design, all patients underwent emergent EGD within 6 hours and received relevant hemostatic intervention depending on the cause of bleeding.


Primary outcome: 45-day mortality

Secondary outcomes, selected:

      • Incidence of further bleeding associated with hemodynamic instability or hemoglobin drop > 2 g/dL in 6 hours
      • Incidence and number of RBC transfusions
      • Other products and fluids transfused
      • Hgb level at nadir, discharge, and 45 days

Subgroup analyses: Patients were stratified by presence of cirrhosis and corresponding Child-Pugh class, variceal bleeding, and peptic ulcer bleeding. An additional subgroup analysis was performed to evaluate changes in hepatic venous pressure gradient between the two strategies.

The primary outcome of 45-day mortality was lower with the restrictive strategy (5% vs. 9%; HR 0.55, 95% CI 0.33-0.92; p = 0.02; NNT = 24.8). In subgroup analysis, this finding remained consistent among patients with Child-Pugh class A or B cirrhosis but was not statistically significant among patients with class C cirrhosis. Further stratification by variceal bleeding and peptic ulcer disease revealed no difference in mortality.
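The NNT quoted above follows from simple arithmetic on the absolute risk reduction; a minimal sketch (using the rounded mortality percentages reported above, so the result approximates the published 24.8, which comes from the unrounded trial rates):

```python
def number_needed_to_treat(risk_treated: float, risk_control: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT undefined")
    return 1.0 / arr

# 45-day mortality: 5% restrictive vs. 9% liberal (rounded figures)
print(round(number_needed_to_treat(0.05, 0.09)))  # 25
```

The same function reproduces the PROSEVA NNT of 6 discussed below in Week 40 (28-day mortality 16.0% vs. 32.8%).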

Secondary outcomes:
Rates of further bleeding and of RBC transfusion, as well as the number of products transfused, were lower with the restrictive strategy. Subgroup analysis demonstrated that rates of re-bleeding were lower in Child-Pugh class A and B but not in class C. As expected, the restrictive strategy also resulted in lower hemoglobin levels at 24 hours. Hemoglobin levels in the restrictive group were lower at discharge but were not significantly different from those of the liberal group at 45 days. There was no group difference in the amount of non-RBC blood products or colloid/crystalloid administered. Patients in the restrictive group experienced fewer adverse events, particularly transfusion reactions such as transfusion-associated circulatory overload and cardiac complications. Patients in the liberal group had significant increases in mean hepatic venous pressure gradient following transfusion; such increases were not seen in the restrictive group.

In patients with acute upper GI bleeds, a restrictive strategy with a transfusion threshold of 7 g/dL reduces 45-day mortality, the rate and frequency of transfusions, and the rate of adverse reactions, relative to a liberal strategy with a transfusion threshold of 9 g/dL.

In their discussion, the authors hypothesize that the “harmful effects of transfusion may be related to an impairment of hemostasis. Transfusion may counteract the splanchnic vasoconstrictive response caused by hypovolemia, inducing an increase in splanchnic blood flow and pressure that may impair the formation of clots. Transfusion may also induce abnormalities in coagulation properties.”

Subgroup analysis suggests that the benefit of the restrictive strategy is less pronounced in patients with more severe hepatic dysfunction. These findings align with prior studies in transfusion thresholds for critically ill patients. However, the authors note that the results conflict with studies in other clinical circumstances, specifically in the pediatric ICU and in hip surgery for high-risk patients.

There are several limitations to this study. First, its exclusion criteria limit its generalizability. Excluding patients with massive exsanguination is understandable given the lack of clinical equipoise; however, this choice leaves considerable discretion in defining a massive bleed. (Note that those excluded for exsanguination comprised only 39 of the 648 exclusions.) Lack of blinding was a second limitation, though the potential for bias was mitigated by well-defined transfusion protocols. Additionally, there was a higher incidence of transfusion-protocol violations in the restrictive group, which probably biased results toward the null. Overall, deviations from the protocol occurred in fewer than 10% of cases.

Further Reading/References:
1. Transfusion Strategies for Acute Upper GI Bleeding @ Wiki Journal Club
2. Transfusion Strategies for Acute Upper GI Bleeding @ 2 Minute Medicine
3. TRISS @ Wiki Journal Club

Summary by Gordon Pelegrin, MD

Image Credit: Jeremias, CC BY-SA 3.0, via Wikimedia Commons

Week 40 – PROSEVA

Prone Positioning in Severe Acute Respiratory Distress Syndrome
by the PROSEVA Study Group

N Engl J Med. 2013 June 6; 368(23):2159-2168 [free full text]

Prone positioning had been used for many years in ICU patients with ARDS in order to improve oxygenation. Per Dr. Sonti’s Georgetown Critical Care Top 40, the physiologic rationale for proning is that atelectasis typically develops in the most dependent regions of the lung in ARDS, with hyperinflation affecting the remaining lung. Periodically reversing these regions by moving the patient between supine and prone positions ensures that no one region of the lung has extended exposure to either atelectasis or overdistention. Although the oxygenation benefits had long been noted, the PROSEVA trial established a mortality benefit.

Study patients were selected from 26 ICUs in France and 1 in Spain that had used prone positioning in daily practice for at least 5 years. Inclusion criteria: patients intubated and mechanically ventilated for less than 36 hours with severe ARDS (defined as a PaO2:FiO2 ratio < 150, PEEP ≥ 5 cm H2O, and a tidal volume of about 6 ml/kg of predicted body weight). (NB: by the Berlin definition, severe ARDS is defined as a PaO2:FiO2 ratio < 100.) Patients were randomized either to the intervention of proning within 36 hours of mechanical ventilation for at least 16 consecutive hours (n = 237) or to the control of remaining in a semirecumbent (supine) position (n = 229). The primary outcome was mortality at day 28. Secondary outcomes included mortality at day 90, rate of successful extubation (no reintubation or use of noninvasive ventilation for 48 hours), time to successful extubation, length of ICU stay, complications, use of noninvasive ventilation, tracheotomy rate, number of days free from organ dysfunction, ventilator settings, arterial blood gas measurements, and respiratory-system mechanics during the first week after randomization.

At randomization, most characteristics were similar between the two groups, although the authors noted differences in SOFA score and in the use of neuromuscular blockers and vasopressors. The supine group had a higher baseline SOFA score, indicating more severe organ failure, as well as a higher rate of vasopressor use, while the prone group had a higher rate of neuromuscular blockade. The primary outcome of 28-day mortality was significantly lower in the prone group than in the supine group (16.0% vs. 32.8%; p < 0.001; NNT = 6.0). This mortality difference remained statistically significant after adjustment for SOFA score. Secondary outcomes were notable for a significantly higher rate of successful extubation in the prone group (hazard ratio 0.45; 95% CI 0.29-0.70; p < 0.001). Additionally, the PaO2:FiO2 ratio was significantly higher in the supine group, whereas the PEEP and FiO2 were significantly lower. The remaining secondary outcomes were statistically similar.

PROSEVA showed a significant mortality benefit with early use of prone positioning in severe ARDS. This benefit was considerably larger than that seen in past meta-analyses, likely because this study selected specifically for patients with severe disease and specified longer prone-positioning sessions than prior studies. Critics have noted the unexpected differences in baseline characteristics between the two arms. While these critiques are reasonable, the authors mitigated at least some of these concerns by adjusting mortality for the statistically significant baseline differences. Given such a dramatic mortality benefit, it may be surprising that more patients are not proned at our institution. One reason is that relatively few of our patients have severe ARDS. Additionally, proning places a high demand on resources and requires a coordinated effort by multiple staff members. All treatment centers in this study had specially trained staff who had been performing proning daily for at least 5 years and were thus very familiar with the process. With this in mind, we should consider the use of proning in patients meeting criteria for severe ARDS.

References and further reading:
1. PROSEVA @ 2 Minute Medicine
2. PROSEVA @ Wiki Journal Club
3. PROSEVA @ Georgetown Critical Care Top 40, pages 8-9
4. Life in the Fastlane, Critical Care Compendium, “Prone Position and Mechanical Ventilation”
5. PulmCCM.org, “ICU Physiology in 1000 Words: The Hemodynamics of Prone”

Summary by Gordon Pelegrin, MD

Image Credit: by James Heilman, MD, CC BY-SA 3.0, via Wikimedia Commons

Week 39 – POISE

“Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery: a randomised controlled trial”

Lancet. 2008 May 31;371(9627):1839-47. [free full text]

Non-cardiac surgery is commonly associated with major cardiovascular complications. It has been hypothesized that perioperative beta blockade would reduce such events by attenuating the effects of the intraoperative increases in catecholamine levels. Prior to the 2008 POISE trial, small- and moderate-sized trials had revealed inconsistent results, alternately demonstrating benefit and non-benefit with perioperative beta blockade. The POISE trial was a large RCT designed to assess the benefit of extended-release metoprolol succinate (vs. placebo) in reducing major cardiovascular events in patients of elevated cardiovascular risk.

The trial enrolled patients aged 45+ undergoing non-cardiac surgery with an estimated length of stay of at least 24 hours and an elevated risk of cardiac disease, defined as any of the following: 1) history of CAD, 2) peripheral vascular disease, 3) hospitalization for CHF within the past 3 years, 4) undergoing major vascular surgery, or 5) any three of the following seven risk criteria: undergoing intrathoracic or intraperitoneal surgery, hx of CHF, hx of TIA, hx of DM, Cr > 2.0, age 70+, or undergoing urgent/emergent surgery.

Notable exclusion criteria: HR < 50, 2nd or 3rd degree heart block, asthma, already on beta blocker, prior intolerance of beta blocker, hx CABG within 5 years and no cardiac ischemia since

Intervention: metoprolol succinate (extended-release) 100 mg PO starting 2-4 hrs before surgery, an additional 100 mg at 6-12 hrs postoperatively, followed by 200 mg daily for 30 days. (Patients unable to take PO meds postoperatively were given a metoprolol infusion.)

Comparison: placebo PO / IV at same frequency as metoprolol arm

Primary – composite of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest at 30 days

Secondary (at 30 days)

        • cardiovascular death
        • non-fatal MI
        • non-fatal cardiac arrest
        • all-cause mortality
        • non-cardiovascular death
        • MI
        • cardiac revascularization
        • stroke
        • non-fatal stroke
        • CHF
        • new, clinically significant atrial fibrillation
        • clinically significant hypotension
        • clinically significant bradycardia

Pre-specified subgroup analyses of the primary outcome included RCRI, sex, type of surgery, and type of anesthesia.

9298 patients were randomized. However, fraudulent activity was detected at participating sites in Iran and Colombia, so 947 patients from these sites were excluded from the final analyses, leaving 4174 patients in the metoprolol group and 4177 in the placebo group. There were no significant differences in baseline characteristics, pre-operative cardiac medications, surgery type, or anesthesia type between the two groups (see Table 1).

Regarding the primary outcome, metoprolol patients were less likely than placebo patients to experience the primary composite endpoint of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest (HR 0.84, 95% CI 0.70-0.99, p = 0.0399). See Figure 2A for the relevant Kaplan-Meier curve. Note that the curves separate distinctly within the first several days.

Regarding selected secondary outcomes (see Table 3 for full list), metoprolol patients were more likely to die from any cause (HR 1.33, 95% CI 1.03-1.74, p = 0.0317). See Figure 2D for the Kaplan-Meier curve for all-cause mortality. Note that the curves start to separate around day 10. Cause of death was analyzed, and the only group difference in attributable cause was an increased number of deaths due to sepsis or infection in the metoprolol group (data not shown). Metoprolol patients were more likely to sustain a stroke (HR 2.17, 95% CI 1.26-3.74, p = 0.0053) or a non-fatal stroke (HR 1.94, 95% CI 1.01-3.69, p = 0.0450). Of all patients who sustained a non-fatal stroke, only 15-20% made a full recovery. Metoprolol patients were less likely to sustain new-onset atrial fibrillation (HR 0.76, 95% CI 0.58-0.99, p = 0.0435) and less likely to sustain a non-fatal MI (HR 0.70, 95% CI 0.57-0.86, p = 0.0008). There were no group differences in risk of cardiovascular death or non-fatal cardiac arrest. Metoprolol patients were more likely to sustain clinically significant hypotension (HR 1.55, 95% CI 1.38-1.74, P < 0.0001) and clinically significant bradycardia (HR 2.74, 95% CI 2.19-3.43, p < 0.0001).

Subgroup analysis did not reveal any significant interaction with the primary outcome by RCRI, sex, type of surgery, or anesthesia type.

In patients with cardiovascular risk factors undergoing non-cardiac surgery, the perioperative initiation of beta blockade decreased the composite risk of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest and increased the overall mortality risk and risk of stroke.

This study affirms its central hypothesis – that blunting the catecholamine surge of surgery is beneficial from a cardiac standpoint. (Most patients in this study had an RCRI of 1 or 2.) However, the attendant increase in all-cause mortality is dramatic. The increased mortality is thought to result from delayed recognition of sepsis due to masking of tachycardia. Beta blockade may also limit the physiologic hemodynamic response necessary to successfully fight a serious infection. In retrospective analyses mentioned in the discussion, the investigators state that they cannot fully explain the increased risk of stroke in the metoprolol group. However, hypotension attributable to beta blockade explains about half of the increased number of strokes.

Overall, the authors conclude that “patients are unlikely to accept the risks associated with perioperative extended-release metoprolol.”

A major limitation of this study is that 10% of enrolled patients were excluded from analysis due to fraudulent activity at selected investigation sites. In terms of generalizability, it is important to remember that POISE excluded patients who were already on beta blockers.

Currently, per expert opinion at UpToDate, it is not recommended to initiate beta blockers preoperatively in order to improve perioperative outcomes. POISE is an important piece of evidence underpinning the 2014 ACC/AHA Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery, which includes the following recommendations regarding beta blockers:

      • Beta blocker therapy should not be started on the day of surgery (Class III – Harm, Level B)
      • Continue beta blockers in patients who are on beta blockers chronically (Class I, Level B)
      • In patients with intermediate- or high-risk preoperative tests, it may be reasonable to begin beta blockers
      • In patients with ≥ 3 RCRI risk factors, it may be reasonable to begin beta blockers before surgery
      • Initiating beta blockers in the perioperative setting as an approach to reduce perioperative risk is of uncertain benefit in those with a long-term indication but no other RCRI risk factors
      • It may be reasonable to begin perioperative beta blockers long enough in advance to assess safety and tolerability, preferably > 1 day before surgery

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Management of cardiac risk for noncardiac surgery”
4. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.

Image Credit: Mark Oniffrey, CC BY-SA 4.0, via Wikimedia Commons

Summary by Duncan F. Moore, MD

Week 38 – Effect of Early vs. Deferred Therapy for HIV (NA-ACCORD)

“Effect of Early versus Deferred Antiretroviral Therapy for HIV on Survival”

N Engl J Med. 2009 Apr 30;360(18):1815-26 [free full text]

The optimal timing of initiation of antiretroviral therapy (ART) in asymptomatic patients with HIV has been a subject of investigation since the advent of antiretrovirals. Guidelines in 1996 recommended starting ART for all HIV-infected patients with CD4 count < 500, but over time provider concerns regarding resistance, medication nonadherence, and adverse effects of medications led to more restrictive prescribing. In the mid-2000s, guidelines recommended ART initiation in asymptomatic HIV patients with CD4 < 350. However, contemporary subgroup analysis of RCT data and other limited observational data suggested that deferring initiation of ART increased rates of progression to AIDS and mortality. Thus the NA-ACCORD authors sought to retrospectively analyze their large dataset to investigate the mortality effect of early vs. deferred ART initiation.

The study examined the cases of treatment-naïve patients with HIV and no hx of AIDS-defining illness evaluated during 1996-2005. Two subpopulations were analyzed retrospectively: CD4 count 351-500 and CD4 count 500+. No intervention was undertaken. The primary outcome was, within each CD4 sub-population, mortality in patients treated with ART within 6 months after the first CD4 count within the range of interest vs. mortality in patients for whom ART was deferred until the CD4 count fell below the range of interest.

8362 eligible patients had a CD4 count of 351-500, and of these, 2084 (25%) initiated ART within 6 months, whereas 6278 (75%) patients deferred therapy until CD4 < 351. 9155 eligible patients had a CD4 count of 500+, and of these, 2220 (24%) initiated ART within 6 months, whereas 6935 (76%) patients deferred therapy until CD4 < 500. In both CD4 subpopulations, patients in the early-ART group were older, more likely to be white, more likely to be male, less likely to have HCV, and less likely to have a history of injection drug use. Cause-of-death information was obtained in only 16% of all deceased patients. The majority of these deaths in both the early- and deferred-therapy groups were from non-AIDS-defining conditions.

In the subpopulation with CD4 351-500, there were 137 deaths in the early-therapy group vs. 238 deaths in the deferred-therapy group. The relative risk of death with deferred therapy was 1.69 (95% CI 1.26-2.26, p < 0.001) per Cox regression stratified by year. After adjustment for history of injection drug use, RR = 1.28 (95% CI 0.85-1.93, p = 0.23). In an unadjusted analysis, HCV infection was a risk factor for mortality (RR 1.85, p = 0.03). After exclusion of patients with HCV infection, the RR for deferred therapy was 1.52 (95% CI 1.01-2.28, p = 0.04).

In the subpopulation with CD4 500+, there were 113 deaths in the early-therapy group vs. 198 in the deferred-therapy group. Relative risk of death for deferred therapy was 1.94 (95% CI 1.37-2.79, p < 0.001). After adjustment for history of injection drug use, RR = 1.73 (95% CI 1.08-2.78, p = 0.02). Again, HCV infection was a risk factor for mortality (RR = 2.03, p < 0.001). After exclusion of patients with HCV infection, RR for deferred therapy = 1.90 (95% CI 1.14-3.18, p = 0.01).

Thus, in a large retrospective study, the deferred initiation of antiretrovirals in asymptomatic HIV infection was associated with higher mortality.

This was the first retrospective study of early ART initiation in HIV large enough to power mortality as an endpoint while controlling for covariates. However, it is limited significantly by its observational, non-randomized design, which introduced substantial unmeasured confounding. A notable example is the absence of socioeconomic variables (e.g. insurance status): perhaps early-initiation patients were better off economically, and that advantage, rather than early initiation of ART, drove the mortality benefit. The study also made no mention of the tolerability of ART or of adverse reactions to it.

In the years that followed this trial, NIH and WHO consensus guidelines shifted the trend toward earlier treatment of HIV. In 2015, the INSIGHT START trial (the first large RCT of immediate vs. deferred ART) showed a definitive mortality benefit of immediate initiation of ART in patients with CD4 500+. Since that time, per UpToDate, the standard of care has been to treat “essentially all” HIV-infected patients with ART.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. INSIGHT START (2015), Pubmed, NEJM PDF
4. UpToDate, “When to initiate antiretroviral therapy in HIV-infected patients”

Summary by Duncan F. Moore, MD

Image Credit: Sigve, CC0 1.0, via Wikimedia Commons

Week 37 – LOTT

“A Randomized Trial of Long-Term Oxygen for COPD with Moderate Desaturation”

by the Long-Term Oxygen Treatment Trial (LOTT) Research Group

N Engl J Med. 2016 Oct 27;375(17):1617-1627. [free full text]

The long-term treatment of severe resting hypoxemia (SpO2 < 89%) in COPD with supplemental oxygen has been a cornerstone of outpatient COPD management since its mortality benefit was demonstrated circa 1980. Subsequently, the utility of supplemental oxygen in COPD patients with moderate resting daytime hypoxemia (SpO2 89-93%) was investigated in trials in the 1990s; however, those trials were underpowered to assess mortality. Ultimately, the LOTT trial was funded by the NIH and the Centers for Medicare and Medicaid Services (CMS) primarily to determine whether supplemental oxygen provides a mortality benefit in COPD patients with moderate hypoxemia, as well as to analyze numerous secondary outcomes, such as hospitalization rates and exercise performance.

The LOTT trial originally planned to enroll 3500 patients. However, after 7 months the trial had randomized only 34 patients, and mortality was lower than anticipated. Thus, in late 2009, the trial was redesigned with broader inclusion criteria (patients with exercise-induced hypoxemia could now qualify), and the primary endpoint was broadened from mortality alone to a composite of time to first hospitalization or death.

The revised LOTT trial enrolled COPD patients with moderate resting hypoxemia (SpO2 89-93%) or moderate exercise-induced desaturation during the 6-minute walk test (SpO2 ≥ 80% for ≥ 5 minutes and < 90% for ≥ 10 seconds). Patients were randomized either to supplemental oxygen (24-hour oxygen for those with resting SpO2 of 89-93%, or oxygen during sleep and exercise only for those who desaturated only during exercise) or to usual care without supplemental oxygen. The supplemental oxygen flow rate was 2 liters per minute and could be uptitrated per protocol in patients with exercise-induced hypoxemia. The primary outcome was time to the composite of first hospitalization or death. Secondary outcomes included hospitalization rates, lung function, 6-minute walk performance, and quality of life.

368 patients were randomized to the supplemental-oxygen group and 370 to the no-supplemental-oxygen group. Within the supplemental-oxygen group, 220 patients were prescribed 24-hour oxygen and 148 were prescribed oxygen for use during exercise and sleep only. Median duration of follow-up was 18.4 months. Regarding the primary outcome, there was no group difference in time to death or first hospitalization (p = 0.52 by log-rank test). See Figure 1A. Furthermore, there were no treatment-group differences in the primary outcome within any of the following pre-specified subgroups: type of oxygen prescription, “desaturation profile,” race, sex, smoking status, SpO2 nadir during the 6-minute walk, FEV1, BODE index, SF-36 physical-component score, BMI, or history of anemia. Patients with a COPD exacerbation in the 1-2 months prior to enrollment, patients aged 71+ at enrollment, and patients with lower Quality of Well-Being Scale scores at enrollment all appeared to benefit from supplemental O2, but none of these subgroup treatment effects persisted after adjustment for multiple comparisons. Regarding secondary outcomes, there were no treatment-group differences in rates of all-cause, COPD-related, or non-COPD-related hospitalizations, nor in change from baseline in measures of quality of life, anxiety, depression, lung function, or distance achieved in the 6-minute walk.

The LOTT trial presents compelling evidence that there is no significant benefit, mortality or otherwise, of oxygen supplementation in patients with COPD and either moderate hypoxemia at rest (SpO2 > 88%) or exercise-induced hypoxemia. Although this trial’s substantial redesign in its early course is noted, the trial still is our best evidence to date about the benefit (or lack thereof) of oxygen in this patient group. As acknowledged by the authors, the trial may have had significant selection bias in referral. (Many physicians did not refer specific patients for enrollment because “they were too ill or [were believed to have benefited] from oxygen.”) Another notable limitation of this study is that nocturnal oxygen saturation was not evaluated. The authors do note that “some patients with COPD and severe nocturnal desaturation might benefit from nocturnal oxygen supplementation.”

For further contemporary contextualization of the study, please see the excellent post at PulmCCM from 11/2016. Included in that post is a link to an overview and Q&A from the NIH regarding the LOTT study.

References / Additional Reading:
1. PulmCCM, “Long-term oxygen brought no benefits for moderate hypoxemia in COPD”
2. LOTT @ 2 Minute Medicine
3. LOTT @ ClinicalTrials.gov
4. McDonald, J.H. 2014. Handbook of Biological Statistics (3rd ed.). Sparky House Publishing, Baltimore, Maryland.
5. Centers for Medicare and Medicaid Services, “Certificate of Medical Necessity CMS-484– Oxygen”
6. Ann Am Thorac Soc. 2018 Dec;15(12):1369-1381. “Optimizing Home Oxygen Therapy. An Official American Thoracic Society Workshop Report.”

Summary by Duncan F. Moore, MD

Image Credit: Patrick McAleer, CC BY-SA 2.0 UK, via Wikimedia Commons

Week 36 – HAS-BLED

“A Novel User-Friendly Score (HAS-BLED) To Assess 1-Year Risk of Major Bleeding in Patients with Atrial Fibrillation”

Chest. 2010 Nov;138(5):1093-100 [free full text]

Atrial fibrillation (AF) is a well-known risk factor for ischemic stroke. Stroke risk is further increased by individual comorbidities, such as CHF, HTN, and DM, and can be stratified with scores, such as CHADS2 and CHA2DS2VASC. Patients with intermediate stroke risk are recommended to be treated with oral anticoagulation (OAC). However, stroke risk is often also closely related to bleeding risk, and the benefits of anticoagulation for stroke need to be weighed against the added risk of bleeding. At the time of this study, there were no validated and user-friendly bleeding risk-stratification schemes. This study aimed to develop a practical risk score to estimate the 1-year risk of major bleeding (as defined in the study) in a contemporary, real world cohort of patients with AF.

The study enrolled adults with an EKG or Holter-proven diagnosis of AF. (Patients with mitral valve stenosis or previous valvular surgery were excluded.) No experiment was performed in this retrospective cohort study.

In a derivation cohort, the authors retrospectively performed univariate analyses to identify clinical features associated with major bleeding (p < 0.10). Based on systematic reviews, they added further established risk factors for major bleeding. The result was a comprehensive list of risk factors summarized by the acronym HAS-BLED:

H – Hypertension (systolic > 160 mmHg)
A – Abnormal renal function (HD, transplant, Cr > 2.26 mg/dL) or abnormal liver function (cirrhosis, bilirubin > 2x normal with AST/ALT/ALP > 3x normal) – 1 point each
S – Stroke (prior history)
B – Bleeding (prior major bleed or predisposition to bleeding)
L – Labile INRs (time in therapeutic range < 60%)
E – Elderly (age > 65)
D – Drugs (e.g. ASA, clopidogrel, NSAIDs) or alcohol use (> 8 units per week) – 1 point each
Each risk factor was equivalent to one point. The HAS-BLED score was then compared to the HEMORR2HAGES scheme [https://www.mdcalc.com/hemorr2hages-score-major-bleeding-risk], a prior tool for estimating bleeding risk.
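As a concrete illustration of the scoring, a minimal calculator might look like the following sketch (the field names are my own; the point values follow the list above, with renal/liver and drugs/alcohol each contributing up to 2 points, for a maximum score of 9):

```python
def has_bled(hypertension: bool, abnormal_renal: bool, abnormal_liver: bool,
             stroke: bool, bleeding: bool, labile_inr: bool,
             elderly: bool, drugs: bool, alcohol: bool) -> int:
    """Sum the HAS-BLED points: 1 per risk factor present (max 9)."""
    return sum([hypertension, abnormal_renal, abnormal_liver, stroke,
                bleeding, labile_inr, elderly, drugs, alcohol])

# Hypothetical patient: hypertensive, elderly, with labile INRs
score = has_bled(hypertension=True, abnormal_renal=False, abnormal_liver=False,
                 stroke=False, bleeding=False, labile_inr=True,
                 elderly=True, drugs=False, alcohol=False)
print(score)  # 3
```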


Outcomes:

      • incidence of major bleeding within 1 year, overall
      • bleeds per 100 patient-years, by HAS-BLED score
      • c-statistic for the HAS-BLED score in predicting the risk of bleeding

Definitions:

      • major bleeding = bleeding requiring hospitalization, causing a hemoglobin drop > 2 g/dL, or requiring blood transfusion, excluding hemorrhagic stroke
      • hemorrhagic stroke = focal neurologic deficit of sudden onset, diagnosed by a neurologist, lasting > 24 hours and caused by bleeding

3,456 patients with AF without mitral valve stenosis or valve surgery who completed their 1-year follow-up were analyzed retrospectively. 64.8% (2242) of these patients were on OAC (12.8% of whom on concurrent antiplatelet therapy), 24% (828) were on antiplatelet therapy alone, and 10.2% (352) received no antithrombotic therapy. 1.5% (53) of patients experienced a major bleed during the first year, with 17% (9) of these patients sustaining intracerebral hemorrhage.

HAS-BLED Score       Bleeds per 100 patient-years
0                    1.13
1                    1.02
2                    1.88
3                    3.74
4                    8.70
5                    12.50
6*                   0.0

*(n = 2 patients at risk, neither of whom bled)

Patients were given a HAS-BLED score and a HEMORR2HAGES score. C-statistics were then used to determine the predictive accuracy of each model overall as well as within patient subgroups (OAC alone, OAC + antiplatelet, antiplatelet alone, no antithrombotic therapy).
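The c-statistic used here is the probability that a randomly chosen patient who bled was assigned a higher score than a randomly chosen patient who did not (0.5 = no better than chance; 1.0 = perfect discrimination). A minimal sketch of that concordance calculation, using made-up illustrative scores rather than study data:

```python
from itertools import product

def c_statistic(scores_bled, scores_no_bleed):
    """Concordance probability: fraction of (bleeder, non-bleeder) pairs in
    which the bleeder has the higher score; tied pairs count as half."""
    pairs = list(product(scores_bled, scores_no_bleed))
    concordant = sum(1.0 for b, n in pairs if b > n)
    ties = sum(0.5 for b, n in pairs if b == n)
    return (concordant + ties) / len(pairs)

# Hypothetical HAS-BLED scores for patients who bled vs. those who did not
print(c_statistic([3, 4, 2, 5], [1, 0, 2, 1, 3]))  # 0.9
```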

C-statistics for HAS-BLED were as follows: for the overall cohort, 0.72 (95% CI 0.65-0.79); for OAC alone, 0.69 (95% CI 0.59-0.80); for OAC + antiplatelet, 0.78 (95% CI 0.65-0.91); for antiplatelet alone, 0.91 (95% CI 0.83-1.00); and for those on no antithrombotic therapy, 0.85 (95% CI 0.00-1.00).

C-statistics for HEMORR2HAGES were as follows: for the overall cohort, 0.66 (95% CI 0.57-0.74); for OAC alone, 0.64 (95% CI 0.53-0.75); for OAC + antiplatelet, 0.83 (95% CI 0.74-0.91); for antiplatelet alone, 0.83 (95% CI 0.68-0.98); and for those without antithrombotic therapy, 0.81 (95% CI 0.00-1.00).
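A c-statistic is equivalent to the area under the ROC curve: the probability that a randomly chosen patient who bled was assigned a higher score than a randomly chosen patient who did not (ties count half). A minimal sketch with toy data (illustrative values only, not from the study):

```python
def c_statistic(scores_bleed, scores_no_bleed):
    """Pairwise (Mann-Whitney) estimate of the c-statistic / ROC AUC."""
    wins = ties = 0
    for b in scores_bleed:
        for n in scores_no_bleed:
            if b > n:
                wins += 1
            elif b == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_bleed) * len(scores_no_bleed))

# toy HAS-BLED scores for patients who bled vs. those who did not
bled = [3, 4, 2, 5]
no_bleed = [1, 0, 2, 1, 3]
print(round(c_statistic(bled, no_bleed), 2))  # 0.9
```

A value of 0.5 means the score discriminates no better than chance; 1.0 means perfect discrimination, which frames the 0.72 vs. 0.66 comparison above.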

This study helped to establish a practical, user-friendly assessment of bleeding risk in AF. HAS-BLED improves on its predecessor HEMORR2HAGES with an easier-to-remember acronym and a quicker, simpler calculation. All of its risk factors are readily available from the clinical history or routine testing. Both stratification tools had broadly similar c-statistics for the overall cohort – 0.72 for HAS-BLED versus 0.66 for HEMORR2HAGES. However, HAS-BLED performed particularly well in patients on antiplatelet therapy alone or on no antithrombotic therapy at all (0.91 and 0.85, respectively).

This study is useful because it provides evidence-based, easily calculable, and actionable risk stratification for bleeding in AF. In prior studies, such as ACTIVE-A (ASA + clopidogrel versus ASA alone for patients with AF deemed unsuitable for OAC), almost half of all patients (n ≈ 3500) were classified as “unsuitable for OAC” based solely on physician clinical judgment, without predefined objective criteria. Now, physicians have an objective way to assess bleeding risk rather than relying on “gut feeling” or a blanket desire to avoid iatrogenic harm.

A post hoc analysis of the RE-LY trial applied the HAS-BLED score to determine which patients with AF should receive the standard dabigatran dose (150mg BID) versus a lower dose (110mg BID) for anticoagulation. This risk-stratified dosing resulted in a significant reduction in major bleeding compared with warfarin while maintaining a similar reduction in stroke risk.

Furthermore, the HAS-BLED score could allow the physician to be more confident when deciding which patients may be appropriate for referral for a left atrial appendage occlusion device (e.g. Watchman).

The study had a limited number of major bleeds and a short follow-up period, and thus it is possible that other important risk factors for bleeding were not identified. Also, there were large numbers of patients lost to 1-year follow-up. These patients were likely to have had more comorbidities and may have transferred to nursing homes or even have died – which may have led to an underestimate of bleeding rates. Furthermore, the study had a modest number of very elderly patients (i.e. 75-84 and ≥85), who are likely to represent the greatest bleeding risk.

Bottom Line:
HAS-BLED provides an easy, practical tool to assess the individual bleeding risk of patients with AF. Oral anticoagulation should be considered for scores of 3 or less. When HAS-BLED scores are ≥ 4, it is reasonable to consider alternatives to oral anticoagulation.

Further Reading/References:
1. HAS-BLED @ 2 Minute Medicine
2. ACTIVE-A trial
3. RE-LY trial
4. RE-LY @ Wiki Journal Club
5. HAS-BLED Calculator
6. HEMORR2HAGES Calculator
7. CHADS2 Calculator
8. CHA2DS2VASC Calculator
9. Watchman (for Healthcare Professionals)

Summary by Patrick Miller, MD

Image Credit: CardioNetworks, CC BY-SA 3.0, via Wikimedia Commons

Week 35 – CORTICUS

“Hydrocortisone Therapy for Patients with Septic Shock”

N Engl J Med. 2008 Jan 10;358(2):111-24. [free full text]

Steroid therapy in septic shock has been a hotly debated topic since the 1980s. The Annane trial in 2002 suggested a mortality benefit to early steroid therapy, and for almost a decade this was standard of care. In 2008, the CORTICUS trial suggested otherwise.

The trial enrolled ICU patients with onset of septic shock within the past 72 hrs (defined as SBP < 90 despite fluids or need for vasopressors, plus hypoperfusion or organ dysfunction attributable to sepsis). Excluded patients included those with an “underlying disease with a poor prognosis,” life expectancy < 24 hrs, immunosuppression, and recent corticosteroid use. Patients were randomized to hydrocortisone 50mg IV q6h x5 days plus taper or to placebo injections q6h x5 days plus taper. The primary outcome was 28-day mortality among patients who did not have a response to an ACTH stimulation test (cortisol rise < 9 mcg/dL). Secondary outcomes included 28-day mortality in patients who did respond to the ACTH stimulation test, 28-day mortality in all patients, and reversal of shock (defined as SBP ≥ 90 for at least 24 hrs without vasopressors) and time to reversal of shock in all patients.

In ACTH non-responders (n = 233), intervention vs. control 28-day mortality was 39.2% vs. 36.1%, respectively (p = 0.69). In ACTH responders (n = 254), intervention vs. control 28-day mortality was 28.8% vs. 28.7%, respectively (p = 1.00), and reversal of shock occurred in 84.7% vs. 76.5% (p = 0.13). Among all patients, intervention vs. control 28-day mortality was 34.3% vs. 31.5% (p = 0.51) and reversal of shock occurred in 79.7% vs. 74.2% (p = 0.18). The duration of time to reversal of shock was significantly shorter among patients receiving hydrocortisone (per Kaplan-Meier analysis, p < 0.001; see Figure 2), with a median time to reversal of 3.3 days vs. 5.8 days (95% CI 5.2-6.9).

In conclusion, the CORTICUS trial demonstrated no mortality benefit of steroid therapy in septic shock, regardless of a patient’s response to ACTH. Despite the lack of mortality benefit, it demonstrated earlier resolution of shock with steroids. This lack of mortality benefit sharply contrasted with the previous Annane 2002 study. Several reasons have been posited for this difference, including underpowering of the CORTICUS study (which did not reach the desired n = 800), enrollment within 72 hrs of septic shock onset vs. within 8 hrs in Annane, and the overall sicker nature of the Annane patients (all of whom were mechanically ventilated). Subsequent meta-analyses disagree about the mortality benefit of steroids, but meta-regression analyses suggest benefit among the sickest patients. All studies agree on the improvement in shock reversal. The 2016 Surviving Sepsis Campaign guidelines recommend IV hydrocortisone in septic shock only in patients who remain hemodynamically unstable despite adequate fluid resuscitation and vasopressor therapy.

Per Drs. Sonti and Vinayak of the GUH MICU (excerpted from their excellent Georgetown Critical Care Top 40): “Practically, we use steroids when reaching for a second pressor or if there is multiorgan system dysfunction. Our liver patients may have deficient cortisol production due to inadequate precursor lipid production; use of corticosteroids in these patients represents physiologic replacement rather than adjunct supplement.”

The ANZICS collaborative group published the ADRENAL trial in NEJM in 2018 – which demonstrated that “among patients with septic shock undergoing mechanical ventilation, a continuous infusion of hydrocortisone did not result in lower 90-day mortality than placebo.” The authors did note “a more rapid resolution of shock and a lower incidence of blood transfusion” among patients receiving hydrocortisone. The folks at EmCrit argued [https://emcrit.org/emnerd/cc-nerd-case-relative-insufficiency/] that this was essentially a negative study, and thus in the existing context of CORTICUS, the results of the ADRENAL trial do not change our management of refractory septic shock.

Finally, the 2018 APPROCCHSS trial (also by Annane) evaluated the survival benefit of hydrocortisone plus fludrocortisone vs. placebo in patients with septic shock and found that this intervention reduced 90-day all-cause mortality. At this time, it is difficult to truly discern the added information of this trial given its timeframe, sample size, and severity of underlying illness. See the excellent discussion in the following links: WikiJournal Club, PulmCrit, PulmCCM, and UpToDate.

References / Additional Reading:
1. CORTICUS @ Wiki Journal Club
2. CORTICUS @ 2 Minute Medicine
3. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock (2016), section “Corticosteroids”
4. Annane trial (2002) full text
5. PulmCCM, “Corticosteroids do help in sepsis: ADRENAL trial”
6. UpToDate, “Glucocorticoid therapy in septic shock”

Post by Gordon Pelegrin, MD

Image Credit: LHcheM, CC BY-SA 3.0, via Wikimedia Commons

Week 34 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific-antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

The trial enrolled men ages 55-74 (excluded if history of prostate, lung, or colorectal cancer, current cancer treatment, or > 1 PSA test in the past 3 years). Patients were randomized to annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years, or to usual care. The primary outcome was the prostate-cancer-attributable death rate, and the secondary outcome was the incidence of prostate cancer.

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the usual-care group were between 40 and 50%. By seven years, there was no significant difference in rates of death attributable to prostate cancer: 50 deaths in the screening group vs. 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75-1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83-1.50). By seven years, there was a higher rate of prostate cancer detection in the screening group: 2820 diagnoses vs. 2322 in the usual-care group (rate ratio 1.22, 95% CI 1.16-1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11-1.22). Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.
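As a rough sanity check, a rate ratio is simply the event rate in one group divided by the event rate in the other. The sketch below approximates the 7-year mortality rate ratio using the randomized group sizes as denominators; the published 1.13 uses person-years at risk, so the figures differ slightly:

```python
# PLCO 7-year prostate-cancer deaths and randomized group sizes
deaths_screen, n_screen = 50, 38343
deaths_usual, n_usual = 44, 38350

# rate ratio = (death rate, screening) / (death rate, usual care)
rate_ratio = (deaths_screen / n_screen) / (deaths_usual / n_usual)
print(round(rate_ratio, 2))  # 1.14, close to the reported 1.13
```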

In summary, yearly PSA screening increased the prostate cancer diagnosis rate but did not impact prostate-cancer mortality when compared to the standard of care. However, there were relatively high rates of PSA testing in the usual-care group (40-50%). The authors cite this finding as a probable major contributor to the lack of mortality difference. Other factors that may have biased to a null result were prior PSA testing and advances in treatments for prostate cancer during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment. Prior PSA testing likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

      • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
      • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
      • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, PSA was screened every 4 years. The authors found an increased rate of detection of prostate cancer, but, more importantly, they found that screening decreased prostate cancer mortality (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving 1.7 screening visits over 9 years). Like PLCO, this study did not report treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the pitiful mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

In the following years, the pendulum has swung back partially toward screening. In May 2018, the USPSTF released new recommendations that encourage men ages 55-69 to have an informed discussion with their physician about potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in patients over 70 years old (Grade D).

Screening for prostate cancer remains a complex and controversial topic. Guidelines from the American Cancer Society, American Urological Association, and USPSTF vary, but ultimately all recommend shared decision-making. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. PLCO @ 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, Screening for Prostate Cancer

Summary by Duncan F. Moore, MD

Image Credit: Otis Brawley, Public Domain, NIH National Cancer Institute Visuals Online

Week 33 – Varenicline vs. Bupropion and Placebo for Smoking Cessation

“Varenicline, an α4β2 Nicotinic Acetylcholine Receptor Partial Agonist, vs Sustained-Release Bupropion and Placebo for Smoking Cessation”

JAMA. 2006 Jul 5;296(1):56-63. [free full text]

Assisting our patients in smoking cessation is a fundamental aspect of outpatient internal medicine. At the time of this trial, the only approved pharmacotherapies for smoking cessation were nicotine replacement therapy and bupropion. As the α4β2 nicotinic acetylcholine receptor (nAChR) was thought to be crucial to the reinforcing effects of nicotine, it was hypothesized that a partial agonist for this receptor could yield sufficient effect to satiate cravings and minimize withdrawal symptoms but also limit the reinforcing effects of exogenous nicotine. Thus Pfizer designed this large phase 3 trial to test the efficacy of its new α4β2 nAChR partial agonist varenicline (Chantix) against the only other non-nicotine pharmacotherapy at the time (bupropion) as well as placebo.

The trial enrolled adult smokers (10+ cigarettes per day) with fewer than three months of smoking abstinence in the past year (notable exclusion criteria included numerous psychiatric and substance use comorbidities). Patients were randomized to 12 weeks of treatment with either varenicline uptitrated by day 8 to 1mg BID, bupropion SR uptitrated by day 4 to 150mg BID, or placebo BID. Patients were also given a smoking cessation self-help booklet at the index visit and encouraged to set a quit date of day 8. Patients were followed at weekly clinic visits for the first 12 weeks (treatment duration) and then a mixture of clinic and phone visits for weeks 13-52. Non-smoking status during follow-up was determined by patient self-report combined with exhaled carbon monoxide < 10ppm. The primary endpoint was the 4-week continuous abstinence rate for study weeks 9-12 (as confirmed by exhaled CO level). Secondary endpoints included the continuous abstinence rate for weeks 9-24 and for weeks 9-52.

1025 patients were randomized. Compliance was similar among the three groups, and the median duration of treatment was 84 days. Loss to follow-up was similar among the three groups. CO-confirmed continuous abstinence during weeks 9-12 was 44.0% among the varenicline group vs. 17.7% among the placebo group (OR 3.85, 95% CI 2.70–5.50, p < 0.001) vs. 29.5% among the bupropion group (OR vs. varenicline group 1.93, 95% CI 1.40–2.68, p < 0.001). (OR for bupropion vs. placebo was 2.00, 95% CI 1.38–2.89, p < 0.001.) Continuous abstinence for weeks 9-24 was 29.5% among the varenicline group vs. 10.5% among the placebo group (p < 0.001) vs. 20.7% among the bupropion group (p = 0.007). Continuous abstinence for weeks 9-52 was 21.9% among the varenicline group vs. 8.4% among the placebo group (p < 0.001) vs. 16.1% among the bupropion group (p = 0.057). Subgroup analysis of the primary outcome by sex did not yield significant differences in drug efficacy.

This study demonstrated that varenicline was superior to both placebo and bupropion in facilitating smoking cessation at up to 24 weeks. At greater than 24 weeks, varenicline remained superior to placebo but was similar in efficacy to bupropion. This was a well-designed and executed large, double-blind, placebo- and active-treatment-controlled multicenter US trial. The trial was completed in April 2005, and a new drug application for varenicline (Chantix) was submitted to the FDA in November 2005. Of note, an “identically designed” (per this study’s authors), manufacturer-sponsored phase 3 trial was performed in parallel and reported very similar results in the same July 2006 issue of JAMA (PMID: 16820547) as the above study by Gonzales et al. These robust, positive-outcome pre-approval trials of varenicline helped the drug rapidly obtain approval in May 2006.

Per expert opinion at UpToDate, varenicline remains a preferred first-line pharmacotherapy for smoking cessation. Bupropion is a suitable, though generally less efficacious, alternative, particularly when the patient has comorbid depression. Per UpToDate, the recent (2016) EAGLES trial demonstrated that “in contrast to earlier concerns, varenicline and bupropion have no higher risk of associated adverse psychiatric effects than [nicotine replacement therapy] in smokers with comorbid psychiatric disorders.”

Further Reading/References:
1. This trial @ ClinicalTrials.gov
2. Sister trial: “Efficacy of varenicline, an alpha4beta2 nicotinic acetylcholine receptor partial agonist, vs placebo or sustained-release bupropion for smoking cessation: a randomized controlled trial.” JAMA. 2006 Jul 5;296(1):56-63.
3. Chantix FDA Approval Letter 5/10/2006
4. Rigotti NA. Pharmacotherapy for smoking cessation in adults. Post TW, ed. UpToDate. Waltham, MA: UpToDate Inc. [https://www.uptodate.com/contents/pharmacotherapy-for-smoking-cessation-in-adults] (Accessed on February 16, 2019).
5. “Neuropsychiatric safety and efficacy of varenicline, bupropion, and nicotine patch in smokers with and without psychiatric disorders (EAGLES): a double-blind, randomised, placebo-controlled clinical trial.” Lancet. 2016 Jun 18;387(10037):2507-20.
6. 2 Minute Medicine: “Varenicline and bupropion more effective than varenicline alone for tobacco abstinence”
7. 2 Minute Medicine: “Varenicline safe for smoking cessation in patients with stable major depressive disorder”

Summary by Duncan F. Moore, MD

Image Credit: Сергей Фатеев, CC BY-SA 3.0, via Wikimedia Commons


Week 32 – ARISTOTLE

“Apixaban versus Warfarin in Patients with Atrial Fibrillation”

N Engl J Med. 2011 Sep 15;365(11):981-92. [free full text]

Prior to the development of the DOACs, warfarin was the standard of care for the reduction of risk of stroke in atrial fibrillation. Drawbacks of warfarin include a narrow therapeutic range, numerous drug and dietary interactions, the need for frequent monitoring, and elevated bleeding risk. Around 2010, the definitive RCTs for the oral direct thrombin inhibitor dabigatran (RE-LY) and the oral factor Xa inhibitor rivaroxaban (ROCKET AF) showed equivalence or superiority to warfarin. Shortly afterward, the ARISTOTLE trial demonstrated the superiority of the oral factor Xa inhibitor apixaban (Eliquis).

The trial enrolled patients with atrial fibrillation or flutter with at least one additional risk factor for stroke (age 75+, prior CVA/TIA, symptomatic CHF, or reduced LVEF). Notably, patients with Cr > 2.5 mg/dL were excluded. Patients were randomized to treatment with either apixaban 5mg BID + placebo warfarin daily (a reduced apixaban dose of 2.5mg BID was given to patients with 2 or more of the following: age ≥ 80, weight ≤ 60 kg, Cr ≥ 1.5 mg/dL) or to placebo apixaban BID + warfarin daily. The primary efficacy outcome was the incidence of stroke, and the primary safety outcome was “major bleeding” (clinically overt and accompanied by a Hgb drop of ≥ 2 g/dL, “occurring at a critical site,” or resulting in death). Secondary outcomes included all-cause mortality and a composite of major bleeding and “clinically-relevant non-major bleeding.”

9120 patients were assigned to the apixaban group, and 9081 were assigned to the warfarin group. Mean CHADS2 score was 2.1. Fewer patients in the apixaban group discontinued their assigned study drug. Median duration of follow-up was 1.8 years. The incidence of stroke was 1.27% per year in the apixaban group vs. 1.60% per year in the warfarin group (HR 0.79, 95% CI 0.66-0.95, p < 0.001 for noninferiority). This reduction was consistent across all major subgroups (see Figure 2). Notably, the rate of hemorrhagic stroke was 49% lower in the apixaban group, and the rate of ischemic stroke was 8% lower in the apixaban group. All-cause mortality was 3.52% per year in the apixaban group vs. 3.94% per year in the warfarin group (HR 0.89, 95% CI 0.80-0.999, p = 0.047). The incidence of major bleeding was 2.13% per year in the apixaban group vs. 3.09% per year in the warfarin group (HR 0.69, 95% CI 0.60-0.80, p < 0.001). The rate of intracranial hemorrhage was 0.33% per year in the apixaban group vs. 0.80% per year in the warfarin group (HR 0.42, 95% CI 0.30-0.58, p < 0.001). The rate of any bleeding was 18.1% per year in the apixaban group vs. 25.8% per year in the warfarin group (p < 0.001).
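For context, annualized event rates like these can be converted into an approximate number needed to treat. This is illustrative back-of-envelope arithmetic, not a figure reported by the trial:

```python
# ARISTOTLE annualized stroke rates, expressed as fractions
stroke_apixaban = 0.0127   # 1.27% per year
stroke_warfarin = 0.0160   # 1.60% per year

arr = stroke_warfarin - stroke_apixaban   # absolute risk reduction per year
nnt = 1 / arr
print(round(nnt))  # ~303 patients treated for one year to prevent one stroke
```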

In patients with non-valvular atrial fibrillation and at least one additional risk factor for stroke, anticoagulation with apixaban significantly reduced the risk of stroke, major bleeding, and all-cause mortality relative to anticoagulation with warfarin. This was a large RCT that was designed and powered to demonstrate non-inferiority but in fact demonstrated the superiority of apixaban. Along with ROCKET AF and RE-LY, the ARISTOTLE trial ushered in the modern era of DOACs in atrial fibrillation. Apixaban was approved by the FDA for the treatment of non-valvular atrial fibrillation in 2012. Cost is no longer a major barrier to prescription: all three of these major DOACs are preferred in the DC Medicaid formulary (see page 14). To date, no trial has compared the various DOACs directly.

Further Reading/References:
1. ARISTOTLE @ Wiki Journal Club
2. 2 Minute Medicine
3. “Oral anticoagulants for prevention of stroke in atrial fibrillation: systematic review, network meta-analysis, and cost-effectiveness analysis,” BMJ 2017

Summary by Duncan F. Moore, MD