Week 26 – The Oregon Experiment

“The Oregon Experiment – Effects of Medicaid on Clinical Outcomes”

N Engl J Med. 2013 May 2;368(18):1713-22. [free full text]

Access to health insurance is not synonymous with access to healthcare. However, it has been generally assumed that increased access to insurance should improve healthcare outcomes among the newly insured. In 2008, Oregon expanded its Medicaid program by approximately 30,000 slots, which were allocated by lottery among approximately 90,000 applicants. The authors of the Oregon Health Study Group sought to study the impact of this “randomized” intervention, the results of which were hotly anticipated given the impending Medicaid expansion under the 2010 PPACA.

Population: Portland, Oregon residents who applied for the 2008 Medicaid expansion

Not all applicants were actually eligible.

Eligibility criteria: age 19-64, US citizen, Oregon resident, ineligible for other public insurance, uninsured for the previous 6 months, income below 100% of the federal poverty level, and assets < $2000.

Intervention: winning the Medicaid-expansion lottery

Comparison: The statistical analyses of clinical outcomes in this study do not actually compare winners to non-winners. Instead, they compare non-winners to winners who ultimately received Medicaid coverage. Winning the lottery increased the chance of being enrolled in Medicaid by about 25 percentage points. Given the assumption that “the lottery affected outcomes only by changing Medicaid enrollment, the effect of being enrolled in Medicaid was simply about 4 times…as high as the effect of being able to apply for Medicaid.” This allowed the authors to draw causal inferences regarding the benefits of new Medicaid coverage.
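The quoted multiplication is the classic instrumental-variable (Wald) scaling; a minimal sketch for illustration (`iv_effect` is an illustrative helper, not code from the study):

```python
def iv_effect(itt_effect, first_stage):
    """Scale an intention-to-treat (lottery) effect into an effect of
    Medicaid enrollment by dividing by the first-stage change in
    enrollment probability caused by winning the lottery."""
    return itt_effect / first_stage

# Winning raised the probability of Medicaid enrollment by ~25 percentage
# points, so any lottery effect is scaled by 1 / 0.25 = 4.
scale_factor = iv_effect(1.0, 0.25)  # 4.0
```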

Outcomes: Values or point prevalence of the following at approximately 2 years post-lottery:

  1. blood pressure, diagnosis of hypertension
  2. cholesterol levels, diagnosis of hyperlipidemia
  3. HgbA1c, diagnosis of diabetes
  4. Framingham risk score for cardiovascular events
  5. positive depression screen, depression dx after lottery, antidepressant use
  6. health-related quality of life measures
  7. measures of financial hardship (e.g. catastrophic expenditures)
  8. measures of healthcare utilization (e.g. estimated total annual expenditure)

These outcomes were assessed via in-person interviews, assessment of blood pressure, and a blood draw for biomarkers.


Results:
The study population included 10,405 lottery winners and 10,340 non-winners. Interviews were performed ~25 months after the lottery. While there were no significant differences in baseline characteristics between winners and non-winners, “the subgroup of lottery winners who ultimately enrolled in Medicaid was not comparable to the overall group of persons who did not win the lottery” (no demographic or other data provided).

At approximately 2 years following the lottery, there were no differences in blood pressure or prevalence of diagnosed hypertension between the lottery non-winners and those who enrolled in Medicaid. There were also no differences between the groups in cholesterol values, prevalence of diagnosis of hypercholesterolemia after the lottery, or use of medications for high cholesterol. While more Medicaid enrollees were diagnosed with diabetes after the lottery (absolute increase of 3.8 percentage points, 95% CI 1.93-5.73, p<0.001; prevalence 1.1% in non-winners) and were more likely to be using medications for diabetes than the non-winners (absolute increase of 5.43 percentage points, 95% CI 1.39-9.48, p=0.008), there was no statistically significant difference in HgbA1c values between the two groups. Medicaid coverage did not significantly alter 10-year Framingham cardiovascular event risk. At follow-up, fewer Medicaid-enrolled patients screened positive for depression (decrease of 9.15 percentage points, 95% CI -16.70 to -1.60, p=0.02), while more had formally been diagnosed with depression during the interval since the lottery (absolute increase of 3.81 percentage points, 95% CI 0.15-7.46, p=0.04). There was no significant difference in prevalence of antidepressant use.

Medicaid-enrolled patients were more likely to report that their health was the same or better since 1 year prior (increase of 7.84 percentage points, 95% CI 1.45-14.23, p=0.02). There were no significant differences in scores for quality of life related to physical health or in self-reported levels of pain or global happiness. As seen in Table 4, Medicaid enrollment was associated with decreased out-of-pocket spending (15% had a decrease, average decrease $215), decreased prevalence of medical debt, and a decreased prevalence of catastrophic expenditures (absolute decrease of 4.48 percentage points, 95% CI -8.26 to -0.69, p=0.02).

Medicaid-enrolled patients were prescribed more drugs and had more office visits, but there was no change in the number of ED visits or hospital admissions. Medicaid coverage was estimated to increase total annual medical spending by $1,172 per person (an approximately 35% increase). Of note, patients enrolled in Medicaid were more likely to have received a pap smear or mammogram during the study period.

Implication/Discussion:
This study was the first major study to “randomize” health insurance coverage and study the health outcome effects of gaining insurance.

Overall, this study demonstrated that obtaining Medicaid coverage “increased overall health care utilization, improved self-reported health, and reduced financial strain.” However, its effects on patient-level health outcomes were much more muted. Medicaid coverage did not impact the prevalence or severity of hypertension or hyperlipidemia. Medicaid coverage appeared to aid in the detection of diabetes mellitus and use of antihyperglycemics but did not affect average A1c. Accordingly, there was no significant difference in Framingham risk score between the two groups.

The glaring limitation of this study was that its statistical analyses compared two groups with unequal baseline characteristics, despite the purported “randomization” of the lottery. Effectively, by comparing Medicaid enrollees (and not all lottery winners) to the lottery non-winners, the authors failed to perform an intention-to-treat analysis. This design engendered significant confounding, and it is remarkable that the authors did not even attempt to report baseline characteristics among the final two groups, let alone control for any such differences in their final analyses. Furthermore, the fact that not all reported analyses were pre-specified raises suspicion of post hoc data dredging for statistically significant results (“p-hacking”). Overall, power was limited in this study due to the low prevalence of the conditions studied.

Contemporary analysis of this study, both within medicine and within the political sphere, was widely divergent. Medicaid-expansion proponents noted that new access to Medicaid provided a critical financial buffer from potentially catastrophic medical expenditures and allowed increased access to care (as measured by clinic visits, medication use, etc.), while detractors noted that, despite this costly program expansion and fine-toothed analysis, little hard-outcome benefit was realized during the (admittedly limited) follow-up at two years.

Access to insurance is only the starting point in improving the health of the poor. The authors note that “the effects of Medicaid coverage may be limited by the multiple sources of slippage…[including] access to care, diagnosis of underlying conditions, prescription of appropriate medications, compliance with recommendations, and effectiveness of treatment in improving health.”

Further Reading/References:
1. Baicker et al. (2013), “The Impact of Medicaid on Labor Force Activity and Program Participation: Evidence from the Oregon Health Insurance Experiment”
2. Taubman et al. (2014), “Medicaid Increases Emergency-Department Use: Evidence from Oregon’s Health Insurance Experiment”
3. The Washington Post, “Here’s what the Oregon Medicaid study really said” (2013)
4. Michael Cannon, “Oregon Study Throws a Stop Sign in Front of ObamaCare’s Medicaid Expansion”
5. HealthAffairs Policy Brief, “The Oregon Health Insurance Experiment”
6. The Oregon Health Insurance Experiment
7. Sommers et al. (2017), “Health Insurance Coverage and Health – What the Recent Evidence Tells Us”

Summary by Duncan F. Moore, MD

Week 25 – CLOT

“Low-Molecular-Weight Heparin versus a Coumarin for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer”

by the Randomized Comparison of Low-Molecular-Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators

N Engl J Med. 2003 Jul 10;349(2):146-53. [free full text]

Malignancy is a pro-thrombotic state, and patients with cancer are at significant and sustained risk of venous thromboembolism (VTE) even when treated with warfarin. Warfarin is a suboptimal drug that requires careful monitoring, and its effective administration is challenging in the setting of cancer-associated difficulties with oral intake, end-organ dysfunction, and drug interactions. The 2003 CLOT trial was designed to evaluate whether treatment with low-molecular-weight heparin (LMWH) was superior to a vitamin K antagonist (VKA) in the prevention of recurrent VTE.

Population: adults with active cancer and newly diagnosed symptomatic DVT or PE

The cancer must have been diagnosed or treated within past 6 months, or the patient must have recurrent or metastatic disease.

Intervention: dalteparin subQ daily (200 IU/kg daily x1 month, then 150 IU/kg daily x5 months)

Comparison: vitamin K antagonist x6 months (with 5-7 day LMWH bridge), target INR 2.5

Outcomes:

primary = recurrence of symptomatic DVT or PE during 6 months of follow-up

secondary = major bleeding, any bleeding, all-cause mortality

 

Results:
338 patients were randomized to the LMWH group, and 338 were randomized to the VKA group. Baseline characteristics were similar between the two groups. 90% of patients had solid malignancies, and 67% of patients had metastatic disease. Within the VKA group, INR was estimated to be therapeutic 46% of the time, subtherapeutic 30% of the time, and supratherapeutic 24% of the time.

Within the six-month follow-up period, symptomatic VTE occurred in 8.0% of the dalteparin group and 15.8% of the VKA group (HR 0.48, 95% CI 0.30-0.77, p=0.002; NNT = 12.9). The Kaplan-Meier estimate of recurrent VTE at 6 months was 9% in the dalteparin group and 17% in the VKA group.
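The NNT above follows directly from the absolute risk reduction; a minimal sketch using the rounded published event rates (the paper's 12.9 reflects unrounded data):

```python
def nnt(control_rate, treatment_rate):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    return 1.0 / (control_rate - treatment_rate)

# CLOT: symptomatic VTE in 15.8% of the VKA group vs. 8.0% of the
# dalteparin group over 6 months of follow-up.
clot_nnt = nnt(0.158, 0.080)  # ~12.8 with the rounded rates
```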

6% of the dalteparin group developed major bleeding versus 4% of the VKA group (p = 0.27). 14% of the dalteparin group sustained any type of bleeding event versus 19% of the VKA group (p = 0.09). Mortality at 6 months was 39% in the dalteparin group versus 41% in the VKA group (p = 0.53).

Implication/Discussion:
Treatment of VTE in cancer patients with low-molecular-weight heparin reduced the incidence of recurrent VTE relative to the incidence following treatment with vitamin K antagonists.

Notably, this reduction in VTE recurrence was not accompanied by an increase in bleeding risk. However, it did not confer a mortality benefit either.

This trial initiated a paradigm shift in the treatment of VTE in cancer. LMWH became the standard of care, although access and adherence to this treatment was thought to be limited by cost and convenience.

Until last week, no trial had directly compared a DOAC to LMWH in the prevention of recurrent VTE in malignancy. In an open-label, noninferiority trial, the Hokusai VTE Cancer Investigators demonstrated that the oral Xa inhibitor edoxaban (Savaysa) was noninferior to dalteparin with respect to a composite outcome of recurrent VTE or major bleeding.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Treatment of venous thromboembolism in patients with malignancy”
4. “Edoxaban for the Treatment of Cancer-Associated Venous Thromboembolism,” NEJM 2017

Summary by Duncan F. Moore, MD

Week 24 – SYMPLICITY HTN-3

“A Controlled Trial of Renal Denervation for Resistant Hypertension”

N Engl J Med. 2014 Apr 10;370(15):1393-401. [free full text]

Approximately 10% of patients with hypertension have resistant hypertension (SBP > 140 despite adherence to maximally tolerated doses of three antihypertensives, including a diuretic). Evidence suggests that the sympathetic nervous system plays a large role in such cases, so catheter-based radiofrequency ablation of the renal arteries (renal denervation therapy) was developed as a potential treatment for resistant HTN. The 2010 SYMPLICITY HTN-2 trial was a small (n=106), non-blinded, randomized trial of renal denervation vs. continued care with oral antihypertensives that demonstrated a remarkable 30 mmHg greater decrease in SBP with renal denervation. Thus the 2014 SYMPLICITY HTN-3 trial was designed to evaluate the efficacy of renal denervation in a single-blinded trial with a sham-procedure control group.

Population: adults with resistant HTN with SBP ≥ 160 despite adherence to 3+ maximized antihypertensive drug classes, including a diuretic

pertinent exclusion criteria: 2º HTN, renal artery stenosis > 50%, prior renal artery intervention
(Note – all patients received angiography prior to randomization.)

Intervention: renal denervation with the Symplicity (Medtronic) radioablation catheter
Comparison: renal angiography only (sham procedure)
Outcome:

1º – mean change in office systolic BP from baseline at 6 months (examiner blinded to intervention)

2º – change in mean 24hr ambulatory SBP at 6 months

primary safety endpoint – composite of death, ESRD, embolic event with end-organ damage, renal artery or other vascular complication, hypertensive crisis within 30 days, or new renal artery stenosis of > 70%

 

Results:
535 patients were randomized. There were no differences in baseline characteristics between the two groups. On average, patients were receiving five antihypertensive medications.

There was no significant difference in reduction of SBP between the two groups at 6 months. ∆SBP was -14.13 ± 23.93 mmHg in the denervation group vs. -11.74 ± 25.94 mmHg in the sham-procedure group, for a between-group difference of -2.39 mmHg (95% CI -6.89 to 2.12, p = 0.26 with a superiority margin of 5 mmHg). The change in 24hr ambulatory SBP at 6 months was -6.75 ± 15.11 mmHg in the denervation group vs. -4.79 ± 17.25 mmHg in the sham-procedure group, for a between-group difference of -1.96 mmHg (95% CI -4.97 to 1.06, p = 0.98 with a superiority margin of 2 mmHg). There was no significant difference in the prevalence of the composite safety endpoint at 6 months, with 4.0% of the denervation group and 5.8% of the sham-procedure group reaching the endpoint (percentage-point difference of -1.9, 95% CI -6.0 to 2.2).

Implication/Discussion:
In patients with resistant hypertension, renal denervation therapy provided no reduction in SBP at 6-month follow-up relative to a sham procedure.

This trial was an astounding failure for Medtronic and its Symplicity renal denervation radioablation catheter. The magnitude of the difference in results between the non-blinded, no-sham-procedure SYMPLICITY HTN-2 trial and this patient-blinded, sham-procedure-controlled trial is likely a product of 1) a marked placebo effect of procedural intervention, 2) Hawthorne effect in the non-blinded trial, and 3) regression toward the mean (patients were enrolled based on unusually high BP readings that over the course of the trial declined to reflect a lower true baseline).

Currently, there is no role for renal denervation therapy in the treatment of HTN (resistant or otherwise). However, despite the results of SYMPLICITY HTN-3, other companies and research groups are assessing the role of different radioablation catheters in patients with low-risk essential HTN and with resistant HTN (for example, see https://www.ncbi.nlm.nih.gov/pubmed/29224639).

Further Reading/References:
1. NephJC, SYMPLICITY HTN-3
2. UpToDate, “Treatment of resistant hypertension,” heading “Catheter-based radiofrequency ablation of sympathetic nerves”

Summary by Duncan F. Moore, MD

Week 23 – ARISTOTLE

“Apixaban versus Warfarin in Patients with Atrial Fibrillation”

N Engl J Med. 2011 Sep 15;365(11):981-92 [free full text]

Prior to the development of the DOACs, warfarin was the standard of care for the reduction of risk of stroke in atrial fibrillation. Drawbacks of warfarin include a narrow therapeutic range, numerous drug and dietary interactions, the need for frequent monitoring, and elevated bleeding risk. Around 2010, the definitive RCTs for the oral direct thrombin inhibitor dabigatran (RE-LY) and the oral factor Xa inhibitor rivaroxaban (ROCKET AF) demonstrated noninferiority or superiority to warfarin. Shortly afterward, the ARISTOTLE trial demonstrated the superiority of the oral factor Xa inhibitor apixaban (Eliquis).

Population:

patients with atrial fibrillation or flutter (2+ episodes at least 2 weeks apart), with at least one additional risk factor for stroke (age 75+, prior CVA/TIA, symptomatic CHF, or reduced LVEF)

pertinent exclusions: atrial fibrillation due to a reversible cause, moderate to severe mitral stenosis, Cr > 2.5

Intervention:    apixaban twice daily + placebo warfarin daily

(a reduced 2.5 mg apixaban dose was given in pts with 2 or more of the following: age 80+, weight < 60 kg, Cr > 1.5 mg/dL)

Comparison:   placebo apixaban twice daily + warfarin daily

Outcome:

  • 1º efficacy – stroke
    • (The pre-specified non-inferiority threshold was the preservation of 50% stroke risk reduction relative to warfarin.)
  • 1º safety – “major bleeding” (clinically overt and accompanied by Hgb drop of ≥ 2, “occurring at a critical site,” or resulting in death)
  • 2º efficacy – all-cause mortality
  • 2º safety – a composite of major bleeding and “clinically-relevant non-major bleeding”
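The dose-reduction rule above can be sketched as a simple function; the standard 5 mg twice-daily dose is assumed from the drug's labeling (it is not stated in this summary), and `apixaban_bid_dose_mg` is an illustrative name:

```python
def apixaban_bid_dose_mg(age, weight_kg, creatinine_mg_dl):
    """Twice-daily apixaban dose per the trial's reduction rule: 2.5 mg
    when 2 or more of (age 80+, weight < 60 kg, Cr > 1.5 mg/dL) are
    present, otherwise the standard 5 mg dose (assumed from labeling)."""
    criteria_met = sum([age >= 80, weight_kg < 60, creatinine_mg_dl > 1.5])
    return 2.5 if criteria_met >= 2 else 5.0
```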

Results:
9120 patients were assigned to the apixaban group, and 9081 were assigned to the warfarin group. Baseline characteristics did not differ between the treatment groups. Mean CHADS2 score was 2.1. Fewer patients in the apixaban group discontinued their assigned study drug. Median duration of follow-up was 1.8 years.

The incidence of stroke was 1.27% per year in the apixaban group vs. 1.60% per year in the warfarin group (HR 0.79, 95% CI 0.66-0.95, p<0.001). This reduction was consistent across all major subgroups (see Figure 2). Notably, the rate of hemorrhagic stroke was 49% lower in the apixaban group, and the rate of ischemic stroke was 8% lower in the apixaban group.

All-cause mortality was 3.52% per year in the apixaban group vs. 3.94% per year in the warfarin group (HR 0.89, 95% CI 0.80-0.999, p=0.047).

The incidence of major bleeding was 2.13% per year in the apixaban group vs. 3.09% per year in the warfarin group (HR 0.69, 95% CI 0.60-0.80, p<0.001). The rate of intracranial hemorrhage was 0.33% per year in the apixaban group vs. 0.80% per year in the warfarin group (HR 0.42, 95% CI 0.30-0.58, p<0.001). The rate of any bleeding was 18.1% per year in the apixaban group vs. 25.8% in the warfarin group (p<0.001).

Implication/Discussion:
In patients with non-valvular atrial fibrillation and at least one other risk factor for stroke, anticoagulation with apixaban significantly reduced the risk of stroke, major bleeding, and all-cause mortality relative to anticoagulation with warfarin.

This was a large RCT that was designed and powered to demonstrate non-inferiority but in fact was able to demonstrate the superiority of apixaban. Along with ROCKET AF and RE-LY, the ARISTOTLE trial ushered in the modern era of DOACs in atrial fibrillation. Apixaban was approved by the FDA for the treatment of non-valvular atrial fibrillation in 2012. Prescription cost is no longer a major barrier for patients: all three of these major DOACs are first-line therapies in the DC Medicaid formulary for the treatment of non-valvular AF. To date, no trial has compared the various DOACs directly.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. “Oral anticoagulants for prevention of stroke in atrial fibrillation: systematic review, network meta-analysis, and cost-effectiveness analysis,” BMJ 2017

Summary by Duncan F. Moore, MD

Week 22 – Effect of Early vs. Deferred Therapy for HIV (NA-ACCORD)

“Effect of Early versus Deferred Antiretroviral Therapy for HIV on Survival”

N Engl J Med. 2009 Apr 30;360(18):1815-26 [free full text]

The optimal timing of initiation of antiretroviral therapy (ART) in asymptomatic patients with HIV has been a subject of investigation since the advent of antiretrovirals. Guidelines in 1996 recommended starting ART for all HIV-infected patients with CD4 count < 500, but over time provider concerns regarding resistance, medication nonadherence, and adverse effects of medications led to more restrictive prescribing. In the mid-2000s, guidelines recommended ART initiation in asymptomatic HIV patients with CD4 < 350. However, contemporary subgroup analysis of RCT data and other limited observational data suggested that deferring initiation of ART increased rates of progression to AIDS and mortality. Thus the NA-ACCORD authors sought to retrospectively analyze their large dataset to investigate the mortality effect of early vs. deferred ART initiation.

Population:

Treatment-naïve patients with HIV and no hx of AIDS-defining illness, treated 1996-2005

Two subpopulations analyzed retrospectively:
1. CD4 count 351-500
2. CD4 count 500+

 

Intervention: none

Outcome: within each CD4 sub-population, mortality in patients treated with ART within 6 months after the first CD4 count within the range of interest vs. mortality in patients for whom ART was deferred until the CD4 count fell below the range of interest

Results:
8362 eligible patients had a CD4 count of 351-500, and of these, 2084 (25%) initiated ART within 6 months, whereas 6278 (75%) patients deferred therapy until CD4 < 351.

9155 eligible patients had a CD4 count of 500+, and of these, 2220 (24%) initiated ART within 6 months, whereas 6935 (76%) patients deferred therapy until CD4 < 500.

In both CD4 subpopulations, patients in the early-ART group were older, more likely to be white, more likely to be male, less likely to have HCV, and less likely to have a history of injection drug use. Cause-of-death information was obtained for only 16% of all deceased patients. The majority of these deaths, in both the early- and deferred-therapy groups, were from non-AIDS-defining conditions.

In the CD4 351-500 subpopulation, there were 137 deaths in the early-therapy group vs. 238 deaths in the deferred-therapy group. Relative risk of death for deferred therapy was 1.69 (95% CI 1.26-2.26, p < 0.001) per Cox regression stratified by year. After adjustment for history of injection drug use, RR = 1.28 (95% CI 0.85-1.93, p = 0.23). In an unadjusted analysis, HCV infection was a risk factor for mortality (RR 1.85, p = 0.03). After exclusion of patients with HCV infection, RR for deferred therapy = 1.52 (95% CI 1.01-2.28, p = 0.04).

In the CD4 500+ subpopulation, there were 113 deaths in the early-therapy group vs. 198 in the deferred-therapy group. Relative risk of death for deferred therapy was 1.94 (95% CI 1.37-2.79, p < 0.001). After adjustment for history of injection drug use, RR = 1.73 (95% CI 1.08-2.78, p = 0.02). Again, HCV infection was a risk factor for mortality (RR = 2.03, p < 0.001). After exclusion of patients with HCV infection, RR for deferred therapy = 1.90 (95% CI 1.14-3.18, p = 0.01).

Implication/Discussion:
In a large retrospective study, deferred initiation of antiretrovirals in asymptomatic HIV infection was associated with higher mortality.

This was the first retrospective study of early initiation of ART in HIV that was large enough to power mortality as an endpoint while controlling for covariates. However, it is limited significantly by its observational, non-randomized design, which leaves substantial potential for unmeasured confounding. A notable example is the absence of socioeconomic covariates (e.g. insurance status). Perhaps early-initiation patients were more well-off, and their economic advantage, rather than the early initiation of ART, drove the mortality benefit. This study also made no mention of the tolerability of ART or adverse reactions to it.

In the years that followed this study, NIH and WHO consensus guidelines shifted the trend toward earlier treatment of HIV. In 2015, the INSIGHT START trial (the first large RCT of immediate vs. deferred ART) showed a definitive mortality benefit of immediate initiation of ART in CD4 500+ patients. Since that time, the standard of care has been to treat “essentially all” HIV-infected patients with ART [UpToDate].

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. INSIGHT START (2015), Pubmed, NEJM PDF
4. UpToDate, “When to initiate antiretroviral therapy in HIV-infected patients”

Summary by Duncan F. Moore, MD

Week 21 – HACA

“Mild Therapeutic Hypothermia to Improve the Neurologic Outcome After Cardiac Arrest”

by the Hypothermia After Cardiac Arrest Study Group

N Engl J Med. 2002 Feb 21;346(8):549-56. [free full text]

Neurologic injury after cardiac arrest is a significant source of morbidity and mortality. It is hypothesized that brain reperfusion injury (via the generation of free radicals and other inflammatory mediators) following ischemic time is the primary pathophysiologic basis. Animal models and limited human studies have demonstrated that patients treated with mild hypothermia following cardiac arrest have improved neurologic outcome. The 2002 HACA study sought to prospectively evaluate the utility of therapeutic hypothermia in reducing neurologic sequelae and mortality post-arrest.

Population: European patients who achieved return of spontaneous circulation (ROSC) after presenting to the ED in cardiac arrest

inclusion criteria: witnessed arrest, ventricular fibrillation or non-perfusing ventricular tachycardia as initial rhythm, estimated interval 5 to 15 min from collapse to first resuscitation attempt, no more than 60 min from collapse to ROSC, age 18-75

pertinent exclusion: pt already < 30ºC on admission, comatose state prior to arrest d/t CNS drugs, response to commands following ROSC

Intervention: Cooling to target temperature 32-34ºC with maintenance for 24 hrs followed by passive rewarming. Pts received pancuronium for neuromuscular blockade to prevent shivering.

Comparison: Standard intensive care

Outcomes:

Primary: a “favorable neurologic outcome” at 6 months defined as Pittsburgh cerebral-performance scale category 1 (good recovery) or 2 (moderate disability). (Of note, the examiner was blinded to treatment group allocation.)

Secondary:
– all-cause mortality at 6 months
– specific complications within the first 7 days: bleeding “of any severity,” pneumonia, sepsis, pancreatitis, renal failure, pulmonary edema, seizures, arrhythmias, and pressure sores

Results:
3551 consecutive patients were assessed for enrollment, and ultimately 275 met inclusion criteria and were randomized. The normothermia group had more baseline DM and CAD and was more likely to have received bystander BLS prior to ED arrival.

Regarding neurologic outcome at 6 months, 75 of 136 patients (55%) in the hypothermia group had a favorable neurologic outcome, versus 54 of 137 (39%) in the normothermia group (RR 1.40, 95% CI 1.08-1.81, p = 0.009; NNT = 6). After adjusting for all baseline characteristics, the RR increased slightly to 1.47 (95% CI 1.09-1.82).

Regarding death at 6 months, 41% of the hypothermia group had died, versus 55% of the normothermia group (RR 0.74, 95% CI 0.58-0.95, p = 0.02; NNT = 7). After adjusting for all baseline characteristics, RR = 0.62 (95% CI 0.36-0.95). There was no difference between the two groups in the rate of any complication or in the total number of complications during the first 7 days.
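The reported relative risk and confidence interval for the primary outcome can be reproduced from the raw counts with the standard log-RR normal approximation; this is a sketch for illustration, not the study's exact method:

```python
import math

def rr_with_ci(a, n1, b, n2, z=1.96):
    """Relative risk of a/n1 vs. b/n2, with a confidence interval from
    the log-RR normal approximation."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# HACA primary outcome: favorable neurologic recovery in 75/136
# (hypothermia) vs. 54/137 (normothermia)
rr, lo, hi = rr_with_ci(75, 136, 54, 137)  # ≈ 1.40 (1.08-1.81), as published
```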

Implication/Discussion:
In ED patients with Vfib or pulseless VT arrest who did not have meaningful response to commands after ROSC, immediate therapeutic hypothermia reduced the rate of neurologic sequelae and mortality at 6 months.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “If after ROSC your patient remains unresponsive and does not have refractory hypoxemia/hypotension/coagulopathy, you should initiate therapeutic hypothermia even if the arrest was PEA. The benefit seen was substantial and any proposed biologic mechanism would seemingly apply to all causes of cardiac arrest. The investigators used pancuronium to prevent shivering; [at MGUH] there is a ‘shivering’ protocol in place and if refractory, paralytics can be used.”

This trial, as well as a concurrent publication by Bernard et al., ushered in a new paradigm of therapeutic hypothermia or “targeted temperature management” (TTM) following cardiac arrest. Numerous trials in related populations and with modified interventions (e.g. target temperature 36ºC) were performed over the following decade, and these ultimately led to the current standard of practice.

Per UpToDate, the collective trial data suggest that “active control of the post-cardiac arrest patient’s core temperature, with a target between 32 and 36ºC, followed by active avoidance of fever, is the optimal strategy to promote patient survival.” TTM should be undertaken in all patients who do not follow commands or have purposeful movements following ROSC. Expert opinion at UpToDate recommends maintaining temperature control for at least 48 hours. There is no strict contraindication to TTM.

Further Reading/References:
1. 2 Minute Medicine
2. Wiki Journal Club
3. Georgetown Critical Care Top 40, page 23 (Jan. 2016)
4. PulmCCM.org, “Hypothermia did not help after out-of-hospital cardiac arrest, in largest study yet”
5. Cochrane Review, “Hypothermia for neuroprotection in adults after cardiopulmonary resuscitation”
6. The NNT, “Mild Therapeutic Hypothermia for Neuroprotection Following CPR”
7. UpToDate, “Post-cardiac arrest management in adults”

Summary by Duncan F. Moore, MD

Week 20 – CHADS2

“Validation of Clinical Classification Schemes for Predicting Stroke”

JAMA. 2001 June 13;285(22):2864-70. [free full text]

Atrial fibrillation is the most common cardiac arrhythmia and affects 1-2% of the overall population, with increasing prevalence as people age. Atrial fibrillation also carries substantial morbidity and mortality due to the risk of stroke and thromboembolism, although the risk of embolic phenomena varies widely across subpopulations. In 2001, the only available oral agents for stroke prevention were warfarin and aspirin, which had relative risk reductions of 62% and 22%, respectively, consistent across these subpopulations. Clinicians felt that high-risk patients should be anticoagulated, but the two common classification schemes, AFI and SPAF, were flawed: patients were often classified as low risk in one scheme and high risk in the other, and the schemes had been derived retrospectively and were clinically ambiguous. Therefore, in 2001 a group of investigators combined the two existing schemes to create the CHADS2 scheme and applied it to a new data set.

Population (NRAF cohort): Hospitalized Medicare patients ages 65-95 with non-valvular AF not prescribed warfarin at hospital discharge. Patient records were manually abstracted by five quality improvement organizations in seven US states (California, Connecticut, Louisiana, Maine, Missouri, New Hampshire, and Vermont).

Intervention: Determination of CHADS2 score (1 point for recent CHF, hypertension, age ≥ 75, and DM; 2 points for a history of stroke or TIA)

Comparison: AFI and SPAF risk schemes

Measured Outcome: Hospitalization rates for ischemic stroke (per ICD-9 codes from Medicare claims), stratified by CHADS2 / AFI / SPAF scores.

Calculated Outcome: performance of the various schemes, based on c statistic (a measure of predictive accuracy in a binary logistic regression model)
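A c statistic is the probability that a randomly chosen patient who had the outcome carried a higher risk score than a randomly chosen patient who did not (ties count as half). A minimal illustrative sketch of that definition (not the authors' analysis code):

```python
from itertools import product

def c_statistic(scores_with_event, scores_without_event):
    """Concordance probability: fraction of (event, no-event) pairs in which
    the patient with the event has the higher score (ties count as 0.5)."""
    pairs = list(product(scores_with_event, scores_without_event))
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return concordant / len(pairs)

# Toy data: patients who had strokes tended to carry higher risk scores.
print(round(c_statistic([4, 5, 2], [1, 0, 2]), 2))  # 0.94
```

A c statistic of 0.5 means the score is no better than chance; 1.0 means it perfectly separates patients with and without events.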

Results:
1733 patients were identified in the NRAF cohort. Compared to the AFI and SPAF trial cohorts, these patients tended to be older (81 in NRAF vs. 69 in AFI vs. 69 in SPAF), had a higher burden of CHF (56% vs. 22% vs. 21%), were more likely to be female (58% vs. 34% vs. 28%), and had higher rates of DM (23% vs. 15% vs. 15%) and prior stroke or TIA (25% vs. 17% vs. 8%). The stroke rate was lowest in the group with CHADS2 = 0 (1.9 per 100 patient-years, adjusting for the assumption that aspirin was not taken) and increased by a factor of approximately 1.5 for each 1-point increase in the CHADS2 score.

CHADS2 score    NRAF Adjusted Stroke Rate per 100 Patient-Years
0               1.9
1               2.8
2               4.0
3               5.9
4               8.5
5               12.5
6               18.2
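The scoring rule and rate table above are simple enough to express directly; this is an illustrative sketch, not code from the study:

```python
# CHADS2: 1 point each for recent CHF, hypertension, age >= 75, and diabetes;
# 2 points for a history of stroke or TIA.
NRAF_ADJUSTED_STROKE_RATE = {  # per 100 patient-years, from the table above
    0: 1.9, 1: 2.8, 2: 4.0, 3: 5.9, 4: 8.5, 5: 12.5, 6: 18.2,
}

def chads2(chf: bool, htn: bool, age: int, dm: bool, stroke_or_tia: bool) -> int:
    """Return the CHADS2 score (0-6)."""
    return int(chf) + int(htn) + int(age >= 75) + int(dm) + 2 * int(stroke_or_tia)

score = chads2(chf=True, htn=True, age=80, dm=False, stroke_or_tia=True)
print(score, NRAF_ADJUSTED_STROKE_RATE[score])  # 5 12.5
```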

The CHADS2 scheme had a c statistic of 0.82 compared to 0.68 for the AFI scheme and 0.74 for the SPAF scheme.

Implication/Discussion
The CHADS2 scheme provides clinicians with a scoring system to help guide decision making for anticoagulation in patients with non-valvular AF.

The authors note that the application of the CHADS2 score could be useful in several clinical scenarios. First, it easily identifies patients at low risk of stroke (CHADS2 = 0) for whom anticoagulation with warfarin would probably not provide significant benefit. The authors argue that these patients should merely be offered aspirin. Second, the CHADS2 score could facilitate medication selection based on a patient-specific risk of stroke. Third, the CHADS2 score could help clinicians make decisions regarding anticoagulation in the perioperative setting by evaluating the risk of stroke against the hemorrhagic risk of the procedure. Although the CHADS2 is no longer the preferred risk-stratification scheme, the same concepts are still applicable to the more commonly used CHA2DS2-VASc.

This study had several strengths. First, the cohort was from seven states that represented all geographic regions of the United States. Second, CHADS2 was pre-specified based on previous studies and validated using the NRAF data set. Third, the NRAF data set was obtained from actual patient chart review as opposed to purely from an administrative database. Finally, the NRAF patients were older and sicker than those of the AFI and SPAF cohorts, and thus the CHADS2 appears to be generalizable to the very large demographic of frail, elderly Medicare patients.

As CHADS2 became widely used clinically in the early 2000s, its application to other cohorts generated a large intermediate-risk group (CHADS2 = 1), which was sometimes > 60% of the cohort (though in the NRAF cohort, CHADS2 = 1 accounted for 27% of the cohort). In clinical practice, this intermediate-risk group was to be offered either warfarin or aspirin. Clearly, a clinical-risk predictor that does not provide clear guidance in over 50% of patients needs to be improved. As a result, the CHA2DS2-VASc scoring system was developed from the Birmingham 2009 scheme. When compared head-to-head in registry data, CHA2DS2-VASc more effectively discriminated stroke risk among patients with a baseline CHADS2 score of 0 to 1. Because of this, CHA2DS2-VASc is the recommended risk stratification scheme in the AHA/ACC/HRS 2014 Practice Guideline for Atrial Fibrillation. In modern practice, anticoagulation is unnecessary when CHA2DS2-VASc score = 0, should be considered (vs. antiplatelet or no treatment) when score = 1, and is recommended when score ≥ 2.
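For reference, the CHA2DS2-VASc extension adds points for vascular disease, age 65-74, and female sex, and weights age ≥ 75 at 2 points. A sketch of the standard scoring rule (not drawn from this paper):

```python
def cha2ds2_vasc(chf: bool, htn: bool, age: int, dm: bool,
                 stroke_tia_thromboembolism: bool,
                 vascular_disease: bool, female: bool) -> int:
    """Return the CHA2DS2-VASc score (0-9)."""
    age_points = 2 if age >= 75 else 1 if age >= 65 else 0
    return (int(chf) + int(htn) + age_points + int(dm)
            + 2 * int(stroke_tia_thromboembolism)
            + int(vascular_disease) + int(female))

# An 80-year-old woman with hypertension alone already scores 4
# (2 for age, 1 for hypertension, 1 for sex), i.e. anticoagulation recommended.
print(cha2ds2_vasc(chf=False, htn=True, age=80, dm=False,
                   stroke_tia_thromboembolism=False,
                   vascular_disease=False, female=True))  # 4
```

The finer granularity at the low end is what lets CHA2DS2-VASc split the large CHADS2 = 0-1 intermediate group described above.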

Further Reading:
1. AHA/ACC/HRS 2014 Practice Guideline for Atrial Fibrillation
2. CHA2DS2-VASc (2010)
3. 2 Minute Medicine

Summary by Ryan Commins, MD

Week 19 – RAVE

“Rituximab versus Cyclophosphamide for ANCA-Associated Vasculitis”

by the Rituximab in ANCA-Associated Vasculitis-Immune Tolerance Network (RAVE-ITN) Research Group

N Engl J Med. 2010 Jul 15;363(3):221-32. [free full text]

ANCA-associated vasculitides, such as granulomatosis with polyangiitis (GPA, formerly Wegener’s granulomatosis) and microscopic polyangiitis (MPA), are often rapidly progressive and highly morbid. Mortality in untreated generalized GPA can be as high as 90% at 2 years (PMID 1739240). Since the early 1980s, cyclophosphamide (CYC) with corticosteroids has been the best treatment option for induction of disease remission in GPA and MPA. Unfortunately, the immediate and delayed adverse effect profile of CYC can be burdensome. The role of B lymphocytes in the pathogenesis of these diseases has been increasingly appreciated over the past 20 years, and this association inspired uncontrolled treatment studies with the anti-CD20 agent rituximab that demonstrated promising preliminary results. Thus the RAVE trial was performed to compare rituximab to cyclophosphamide, the standard of care.

Population:
ANCA-positive patients with “severe” GPA or MPA and a Birmingham Vasculitis Activity Score for Wegener’s Granulomatosis (BVAS/WG) of 3+.

notable exclusion: patients intubated due to alveolar hemorrhage, patients with Cr > 4.0

Intervention:
rituximab 375mg/m2 IV weekly x4 + daily placebo-CYC + pulse-dose corticosteroids with oral maintenance and then taper

Comparison:
placebo-rituximab infusion weekly x4 + daily CYC + pulse-dose corticosteroids with oral maintenance and then taper

Outcome:
primary end point = clinical remission, defined as a BVAS/WG of 0 and successful completion of prednisone taper

primary outcome = noninferiority of rituximab relative to CYC in reaching the primary end point

authors specified non-inferiority margin as a -20 percentage point difference in remission rate

subgroup analyses (pre-specified) = type of ANCA-associated vasculitis, type of ANCA, “newly-diagnosed disease,” relapsing disease, alveolar hemorrhage, and severe renal disease

secondary outcomes: rate of disease flares, BVAS/WG of 0 during treatment with prednisone at a dose of less than 10mg/day, cumulative glucocorticoid dose, rates of adverse events, SF-36 scores


Results:
197 patients were randomized, and baseline characteristics were similar between the two groups (e.g. GPA vs. MPA, relapsed disease, etc.). 75% of patients had GPA. 64% of the patients in the rituximab group reached remission, versus 53% of the control patients. This 11-percentage-point difference between the treatment groups was consistent with non-inferiority (p < 0.001). However, although more rituximab patients reached the primary endpoint, the difference between the two groups was not statistically significant, and thus superiority of rituximab could not be established (95% CI −3.2 to 24.3 percentage points, p = 0.09). Subgroup analysis was notable only for superiority of rituximab in relapsed patients (67% remission rate vs. 42% in controls, p = 0.01). Rates of adverse events and treatment discontinuation were similar between the two groups.
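The noninferiority logic can be made explicit with the reported numbers; this is a sketch of the reasoning, not the study's analysis code:

```python
# Reported figures from the trial (percentage points).
margin = -20.0                 # pre-specified noninferiority margin
diff = 64.0 - 53.0             # rituximab minus control remission rate = 11.0 pp
ci_low, ci_high = -3.2, 24.3   # reported 95% CI for the difference

# Noninferiority: the entire CI must lie above the -20 pp margin.
noninferior = ci_low > margin
# Superiority: the entire CI must lie above zero, but here it crosses zero.
superior = ci_low > 0
print(noninferior, superior)   # True False
```

This is why the trial can simultaneously report a numerically better remission rate with rituximab and yet claim only noninferiority.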

Implication/Discussion:
Rituximab + steroids is as effective as cyclophosphamide + steroids in inducing remission in severe GPA and MPA.

This study initiated a major paradigm shift in the standard of care of ANCA-associated vasculitis. The following year, the FDA approved rituximab + steroids as the first-ever treatment regimen approved for GPA and MPA. It spurred numerous follow-up trials, and to this day expert opinion is split over whether CYC or rituximab should be the initial immunosuppressive therapy in GPA/MPA with “organ-threatening or life-threatening disease” (UpToDate).

Further Reading/References:
1. “Wegener granulomatosis: an analysis of 158 patients” (1992)
2. RAVE at ClinicalTrials.gov
3. “Challenges in the Design and Interpretation of Noninferiority Trials,” NEJM (2017)
4. “Clinical Trials – Non-inferiority Trials”
5. UpToDate, “Initial Immunosuppressive Therapy in Granulomatosis with Polyangiitis and Microscopic Polyangiitis”
6. Wiki Journal Club
7. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Week 18 – VERT

“Effects of Risedronate Treatment on Vertebral and Nonvertebral Fractures in Women With Postmenopausal Osteoporosis”

by the Vertebral Efficacy with Risedronate Therapy (VERT) Study Group

JAMA. 1999 Oct 13;282(14):1344-52. [free full text]

Bisphosphonates are a highly effective and relatively safe class of medications for the prevention of fractures in patients with osteoporosis. The VERT trial published in 1999 was a landmark trial that demonstrated this protective effect with the daily oral bisphosphonate risedronate.

Population: post-menopausal women with either 2 or more vertebral fractures per radiography or 1 vertebral fracture with decreased lumbar spine bone mineral density

Intervention: risedronate 2.5mg PO daily or risedronate 5mg PO daily

Comparison: placebo PO daily

Outcomes:
1. prevalence of new vertebral fracture at 3 years follow-up, per annual imaging
2. prevalence of new non-vertebral fracture at 3 years follow-up, per annual imaging
3. change in bone mineral density, per DEXA q6 months

Results:
2458 patients were randomized. During the course of the study, “data from other trials indicated that the 2.5mg risedronate dose was less effective than the 5mg dose,” and thus the authors discontinued further data collection on the 2.5mg treatment arm 1 year into the study. All treatment groups had similar baseline characteristics. 55% of the placebo group and 60% of the 5mg risedronate group completed 3 years of treatment. The prevalence of new vertebral fracture within 3 years was 11.3% in the risedronate group and 16.3% in the placebo group (RR 0.59, 95% CI 0.43-0.82, p = 0.003; NNT = 20). The prevalence of new non-vertebral fractures at 3 years was 5.2% in the treatment arm and 8.4% in the placebo arm (RR 0.6, 95% CI 0.39-0.94, p = 0.02; NNT = 31). Regarding bone mineral density (BMD), see Figure 4 for a visual depiction of the changes in BMD by treatment group at the various 6-month timepoints. Notably, the change from baseline BMD of the lumbar spine and femoral neck was significantly higher (and positive) in the risedronate 5mg group relative to the placebo group at all follow-up timepoints, and at all timepoints except 6 months for the femoral trochanter measurements. Regarding adverse events, there was no difference in the incidence of upper GI adverse events between the two groups. GI complaints “were the most common adverse events associated with study discontinuance,” and GI events led to 42% of placebo withdrawals but only 36% of the 5mg risedronate withdrawals.
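The reported NNTs follow directly from the absolute risk reductions; an illustrative calculation, not trial code:

```python
def nnt(control_rate: float, treatment_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treatment_rate)

vertebral = nnt(0.163, 0.113)      # ARR = 5.0 percentage points
non_vertebral = nnt(0.084, 0.052)  # ARR = 3.2 percentage points
print(round(vertebral), round(non_vertebral))  # 20 31
```

In other words, treating 20 such patients with risedronate 5mg for 3 years prevents roughly one vertebral fracture.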

Implication/Discussion:
Oral risedronate reduces the risk of vertebral and non-vertebral fractures in patients with osteoporosis while increasing bone mineral density.

Overall, this was a large, well-designed RCT that demonstrated a concrete treatment benefit. As a result, oral bisphosphonate therapy has become the standard of care both for treatment and prevention of osteoporosis. This study, as well as others, demonstrated that such therapies are well-tolerated with relatively few side effects.

A notable strength of this study is that it did not exclude patients with GI comorbidities.  One weakness is the modification of the trial protocol to eliminate the risedronate 2.5mg treatment arm after 1 year of study. Although this arm demonstrated a reduction in vertebral fracture at 1 year relative to placebo (p = 0.02), its elimination raises suspicion that the pre-specified analyses were not yielding the anticipated results during the interim analysis and thus the less-impressive treatment arm was discarded.

Further Reading/References:
1. Weekly alendronate vs. weekly risedronate
2. Comparative effectiveness of pharmacologic treatments to prevent fractures: an updated systematic review (2014)

Summary by Duncan F. Moore, MD

Week 17 – PROSEVA

“Prone Positioning in Severe Acute Respiratory Distress Syndrome”

by the PROSEVA Study Group

N Engl J Med. 2013 June 6; 368(23):2159-2168 [free full text]

Prone positioning had been used for many years in ICU patients with ARDS in order to improve oxygenation. Per Dr. Sonti’s Georgetown Critical Care Top 40, the physiologic basis for benefit with proning lies in the idea that atelectatic regions of lung typically occur in the most dependent portion of an ARDS patient, with hyperinflation affecting the remaining lung. Periodic reversal of these regions via moving the patient from supine to prone and vice versa ensures no one region of the lung will have extended exposure to either atelectasis or overdistention. Although the oxygenation benefits had long been noted, the PROSEVA trial established a mortality benefit.

Population:  Patients were selected from 26 ICUs in France and 1 in Spain which had daily practice with prone positioning for at least 5 years.

Inclusion: ARDS patients intubated and ventilated <36hr with severe ARDS (defined as PaO2:FiO2 ratio <150, PEEP>5, and TV of about 6ml/kg of predicted body weight)

(NB: by the Berlin definition for ARDS, severe ARDS is defined as PaO2:FiO2 ratio <100)

Intervention: Proning patients within 36 hours of mechanical ventilation for at least 16 consecutive hours (N=237)

Control: Leaving patients in a semirecumbent (supine) position (N=229)

Outcome:

Primary: mortality at day 28

Secondary: mortality at day 90, rate of successful (no reintubation or use of noninvasive ventilation x48hr) extubation, time to successful extubation, length of stay in the ICU, complications, use of noninvasive ventilation, tracheotomy rate, number of days free from organ dysfunction, ventilator settings, measurements of ABG, and respiratory system mechanics during the first week after randomization

Results:
At the time of randomization in the study, the majority of characteristics were similar between the two groups, although the authors noted differences in the SOFA score and the use of neuromuscular blockers and vasopressors. The supine group at baseline had a higher SOFA score indicating more severe organ failure, and also had higher rate of vasopressor usage. The prone group had a higher rate of usage of neuromuscular blockade.

The primary outcome of 28-day mortality was significantly lower in the prone group than in the supine group, at 16.0% vs. 32.8% (p < 0.001, NNT = 6.0). This mortality decrease remained statistically significant when adjusted for the SOFA score.

Secondary outcomes were notable for a significantly higher rate of successful extubation in the prone group (hazard ratio 0.45; 95% CI 0.29-0.7, P<0.001). Additionally, the PaO2:FiO2 ratio was significantly higher in the supine group, whereas the PEEP and FiO2 were significantly lower. The remainder of secondary outcomes were statistically similar.

Discussion:
PROSEVA showed a significant mortality benefit with early use of prone positioning in severe ARDS. This mortality benefit was considerably larger than seen in past meta-analyses, likely because this study selected specifically for patients with severe disease and specified longer prone-positioning sessions than employed in prior studies. Critics have noted the unexpected difference in baseline characteristics between the two arms of the study. While these critiques are reasonable, the authors mitigate at least some of these complaints by adjusting the mortality for the statistically significant differences. With such a radical mortality benefit, it might be surprising that more patients are not proned at our institution. One reason is that relatively few of our patients have severe ARDS. Additionally, proning places a high demand on resources and requires a coordinated effort of multiple staff. All treatment centers in this study had specially trained staff who had been performing proning on a daily basis for at least 5 years and thus were very familiar with the process. With this in mind, we consider the use of proning in patients meeting criteria for severe ARDS.

References and further reading:
1. 2 Minute Medicine
2. Wiki Journal Club
3. Georgetown Critical Care Top 40, pages 8-9
4. Life in the Fastlane, Critical Care Compendium, “Prone Position and Mechanical Ventilation”
5. PulmCCM.org, “ICU Physiology in 1000 Words: The Hemodynamics of Prone”

Summary by Gordon Pelegrin, MD