Week 34 – HACA

“Mild Therapeutic Hypothermia to Improve the Neurologic Outcome After Cardiac Arrest”

by the Hypothermia After Cardiac Arrest Study Group

N Engl J Med. 2002 Feb 21;346(8):549-56. [free full text]

Neurologic injury after cardiac arrest is a significant source of morbidity and mortality. Brain reperfusion injury (via the generation of free radicals and other inflammatory mediators) following a period of ischemia is hypothesized to be the primary pathophysiologic basis. Animal models and limited human studies have demonstrated improved neurologic outcomes in patients treated with mild hypothermia following cardiac arrest. The 2002 HACA study sought to evaluate prospectively the utility of therapeutic hypothermia in reducing neurologic sequelae and mortality post-arrest.

Population: European patients who achieved return of spontaneous circulation (ROSC) after presenting to the ED in cardiac arrest

inclusion criteria: witnessed arrest, ventricular fibrillation or non-perfusing ventricular tachycardia as initial rhythm, estimated interval 5 to 15 min from collapse to first resuscitation attempt, no more than 60 min from collapse to ROSC, age 18-75

pertinent exclusion criteria: pt already < 30ºC on admission, comatose state prior to arrest due to CNS drugs, response to commands following ROSC

Intervention: Cooling to target temperature 32-34ºC with maintenance for 24 hrs followed by passive rewarming. Patients received pancuronium for neuromuscular blockade to prevent shivering.

Comparison: Standard intensive care

Outcomes:

Primary: a “favorable neurologic outcome” at 6 months defined as Pittsburgh cerebral-performance scale category 1 (good recovery) or 2 (moderate disability). (Of note, the examiner was blinded to treatment group allocation.)

Secondary:

        • all-cause mortality at 6 months
        • specific complications within the first 7 days: bleeding “of any severity,” pneumonia, sepsis, pancreatitis, renal failure, pulmonary edema, seizures, arrhythmias, and pressure sores

Results:
3551 consecutive patients were assessed for enrollment, and ultimately 275 met inclusion criteria and were randomized. At baseline, the normothermia group had more DM and CAD and was more likely to have received bystander BLS prior to arrival in the ED.

Regarding neurologic outcome at 6 months, 75 of 136 (55%) of the hypothermia group had a favorable neurologic outcome, versus 54/137 (39%) in the normothermia group (RR 1.40, 95% CI 1.08-1.81, p = 0.009; NNT = 6). After adjusting for all baseline characteristics, the RR increased slightly to 1.47 (95% CI 1.09-1.82).

Regarding death at 6 months, 41% of the hypothermia group had died, versus 55% of the normothermia group (RR 0.74, 95% CI 0.58-0.95, p = 0.02; NNT = 7). After adjusting for all baseline characteristics, RR = 0.62 (95% CI 0.36-0.95). There was no difference between the two groups in the rate of any complication or in the total number of complications during the first 7 days.
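
The unadjusted risk ratio and NNT for the primary outcome can be verified from the raw counts (75/136 hypothermia vs. 54/137 normothermia with a favorable outcome). A minimal sketch, with function names of my own choosing; note the computed NNT of ~6.4 is reported as 6 in the paper:

```python
# Risk ratio (RR), absolute risk reduction (ARR), and NNT recomputed from
# the HACA primary-outcome counts.

def risk_ratio_and_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Unadjusted RR, absolute risk difference, and NNT from 2x2 counts."""
    risk_tx = events_tx / n_tx
    risk_ctrl = events_ctrl / n_ctrl
    rr = risk_tx / risk_ctrl
    arr = risk_tx - risk_ctrl          # here, treatment improves the outcome
    return rr, arr, 1 / arr

rr, arr, nnt = risk_ratio_and_nnt(75, 136, 54, 137)
print(f"RR = {rr:.2f}, ARR = {arr:.1%}, NNT = {nnt:.1f}")
# → RR = 1.40, ARR = 15.7%, NNT = 6.4
```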

Implication/Discussion:
In ED patients with VF or pulseless VT arrest who had no meaningful response to commands after ROSC, immediate therapeutic hypothermia reduced the rate of neurologic sequelae and mortality at 6 months.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “If after ROSC your patient remains unresponsive and does not have refractory hypoxemia/hypotension/coagulopathy, you should initiate therapeutic hypothermia even if the arrest was PEA. The benefit seen was substantial and any proposed biologic mechanism would seemingly apply to all causes of cardiac arrest. The investigators used pancuronium to prevent shivering; [at MGUH] there is a ‘shivering’ protocol in place and if refractory, paralytics can be used.”

This trial, along with a concurrent publication by Bernard et al., ushered in a new paradigm of therapeutic hypothermia, or “targeted temperature management” (TTM), following cardiac arrest. Numerous trials in related populations and with modified interventions (e.g. target temperature 36ºC) were performed over the following decade and ultimately led to the current standard of practice.

Per UpToDate, the collective trial data suggest that “active control of the post-cardiac arrest patient’s core temperature, with a target between 32 and 36ºC, followed by active avoidance of fever, is the optimal strategy to promote patient survival.” TTM should be undertaken in all patients who do not follow commands or have purposeful movements following ROSC. Expert opinion at UpToDate recommends maintaining temperature control for at least 48 hours.

Further Reading/References:
1. HACA @ 2 Minute Medicine
2. HACA @ Wiki Journal Club
3. Georgetown Critical Care Top 40, page 23 (Jan. 2016)
4. PulmCCM.org, “Hypothermia did not help after out-of-hospital cardiac arrest, in largest study yet”
5. Cochrane Review, “Hypothermia for neuroprotection in adults after cardiopulmonary resuscitation”
6. The NNT, “Mild Therapeutic Hypothermia for Neuroprotection Following CPR”
7. UpToDate, “Post-cardiac arrest management in adults”

Summary by Duncan F. Moore, MD

Image Credit: Sergey Pesterev, CC BY-SA 4.0, via Wikimedia Commons

Week 33 – ALLHAT

“Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic”

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)

JAMA. 2002 Dec 18;288(23):2981-97. [free full text]

Hypertension is a ubiquitous disease, and the cardiovascular and mortality benefits of BP control have been well described. However, as the number of available antihypertensive classes proliferated over the preceding decades, a head-to-head comparison of different antihypertensive regimens was needed to determine the optimal first-step therapy. The 2002 ALLHAT trial was a landmark trial in this effort.

Population:
33,357 patients aged 55 years or older with hypertension and at least one other coronary heart disease (CHD) risk factor (previous MI or stroke, LVH by ECG or echo, T2DM, current cigarette smoking, HDL < 35 mg/dL, or documentation of other atherosclerotic cardiovascular disease (CVD)). Notable exclusion criteria: history of hospitalization for CHF, history of treated symptomatic CHF, or known LVEF < 35%.

Intervention:
Prior antihypertensives were discontinued upon initiation of the study drug. Patients were randomized to one of three study drugs in a double-blind fashion. The study drug was titrated and additional open-label drugs were added in a stepwise fashion to achieve a goal BP < 140/90 mmHg.

Step 1: titrate assigned study drug

      • chlorthalidone: 12.5 –> 12.5 (sham titration) –> 25 mg/day
      • amlodipine: 2.5 –> 5 –> 10 mg/day
      • lisinopril: 10 –> 20 –> 40 mg/day

Step 2: add open-label agents at treating physician’s discretion (atenolol, clonidine, or reserpine)

      • atenolol: 25 to 100 mg/day
      • reserpine: 0.05 to 0.2 mg/day
      • clonidine: 0.1 to 0.3 mg BID

Step 3: add hydralazine 25 to 100 mg BID

Comparison:
Pairwise comparisons of chlorthalidone vs. either amlodipine or lisinopril with respect to outcomes. A doxazosin arm existed initially but was terminated early due to an excess of CV events, primarily driven by CHF.

Outcomes:
Primary – combined fatal CAD or nonfatal MI

Secondary

      • all-cause mortality
      • fatal and nonfatal stroke
      • combined CHD (primary outcome, PCI, or hospitalized angina)
      • combined CVD (CHD, stroke, non-hospitalized treated angina, CHF [fatal, hospitalized, or treated non-hospitalized], and PAD)

Results:
Over a mean follow-up period of 4.9 years, there was no difference between the groups in either the primary outcome or all-cause mortality.

When compared with chlorthalidone at 5 years, the amlodipine and lisinopril groups had significantly higher systolic blood pressures (by 0.8 mmHg and 2 mmHg, respectively). The amlodipine group had a lower diastolic blood pressure than the chlorthalidone group (by 0.8 mmHg).

When comparing amlodipine to chlorthalidone for the pre-specified secondary outcomes, amlodipine was associated with an increased risk of heart failure (RR 1.38; 95% CI 1.25-1.52).

When comparing lisinopril to chlorthalidone for the pre-specified secondary outcomes, lisinopril was associated with an increased risk of stroke (RR 1.15; 95% CI 1.02-1.30), combined CVD (RR 1.10; 95% CI 1.05-1.16), and heart failure (RR 1.20; 95% CI 1.09-1.34). The increased risk of stroke was mostly driven by 3 subgroups: women (RR 1.22; 95% CI 1.01-1.46), blacks (RR 1.40; 95% CI 1.17-1.68), and non-diabetics (RR 1.23; 95% CI 1.05-1.44). The increased risk of CVD was statistically significant in all subgroups except in patients aged less than 65. The increased risk of heart failure was statistically significant in all subgroups.

Discussion:
In patients with hypertension and one risk factor for CAD, chlorthalidone, lisinopril, and amlodipine performed similarly in reducing the risks of fatal CAD and nonfatal MI.

The study has several strengths: a large and diverse study population, a randomized, double-blind structure, and the rigorous evaluation of three of the most commonly prescribed “newer” classes of antihypertensives. Unfortunately, neither an ARB nor an aldosterone antagonist was included in the study. Additionally, the step-up therapies were not reflective of contemporary practice. (Instead, patients would likely be prescribed one or more of the primary study drugs.)

The ALLHAT study is one of the hallmark studies of hypertension and has played an important role in hypertension guidelines since it was published. Following the publication of ALLHAT, thiazide diuretics became widely used as first-line drugs in the treatment of hypertension. The low cost of thiazides and their limited side-effect profile are particularly attractive class features. While ALLHAT looked specifically at chlorthalidone, in practice the positive findings were attributed to HCTZ, which has been more often prescribed. The authors of ALLHAT argued that the superiority of thiazides was likely a class effect, but according to the analysis at Wiki Journal Club, “there is little direct evidence that HCTZ specifically reduces the incidence of CVD among hypertensive individuals.” Furthermore, a 2006 study noted that HCTZ has worse 24-hour BP control than chlorthalidone due to its shorter half-life. The ALLHAT authors note that “since a large proportion of participants required more than 1 drug to control their BP, it is reasonable to infer that a diuretic be included in all multi-drug regimens, if possible.” The 2017 ACC/AHA High Blood Pressure Guidelines state that, of the four thiazide diuretics on the market, chlorthalidone is preferred because of its prolonged half-life and trial-proven reduction of CVD (via the ALLHAT study).

Further Reading / References:
1. 2017 ACC Hypertension Guidelines
2. ALLHAT @ Wiki Journal Club
3. 2 Minute Medicine
4. Ernst et al, “Comparative antihypertensive effects of hydrochlorothiazide and chlorthalidone on ambulatory and office blood pressure.” (2006)
5. Gillis Pharmaceuticals
6. Concepts in Hypertension, Volume 2 Issue 6

Summary by Ryan Commins MD

Image Credit: Kimivanil, CC BY-SA 4.0, via Wikimedia Commons

Week 32 – PneumA

“Comparison of 8 vs 15 Days of Antibiotic Therapy for Ventilator-Associated Pneumonia in Adults”

JAMA. 2003 Nov 19;290(19):2588-2598. [free full text]

Ventilator-associated pneumonia (VAP) is a frequent complication of mechanical ventilation and, prior to this study, few trials had addressed the optimal duration of antibiotic therapy in VAP. Thus, patients frequently received 14- to 21-day antibiotic courses. As antibiotic stewardship efforts increased and awareness grew of the association between prolonged antibiotic courses and the development of multidrug resistant (MDR) infections, more data were needed to clarify the optimal VAP treatment duration.

This 2003 trial by the PneumA Trial Group was the first large randomized trial to compare shorter (8-day) versus longer (15-day) treatment courses for VAP.

The noninferiority study, carried out in 51 French ICUs, enrolled intubated patients with clinical suspicion for VAP and randomized them to either 8 or 15 days of antimicrobials, with regimens chosen by the treating clinician. Of the 401 patients who met eligibility criteria, 197 were randomized to the 8-day regimen and 204 to the 15-day regimen. Study participants were blinded to randomization assignment until day 8, and analysis was performed on an intention-to-treat basis. The primary outcomes were death from any cause at 28 days, antibiotic-free days, and microbiologically documented pulmonary infection recurrence.

Study findings demonstrated similar 28-day mortality in both groups (18.8% in the 8-day group vs. 17.2% in the 15-day group; 90% CI for the difference, -3.7% to 6.9%). The 8-day group did not develop more recurrent infections (28.9% vs. 26.0%; 90% CI for the difference, -3.2% to 9.1%). The 8-day group did have more antibiotic-free days at the 28-day point (13.1 vs. 8.7, p < 0.001). A subgroup analysis did show that more 8-day-group patients with an initial infection due to lactose-nonfermenting GNRs developed a recurrent pulmonary infection, so noninferiority was not established in this specific subgroup (40.6% recurrence in the 8-day group vs. 25.4% in the 15-day group; 90% CI for the difference, 3.9% to 26.6%).
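
For intuition about where such intervals come from, here is a rough sketch using a simple Wald (normal-approximation) CI for the mortality difference. This is not the trial's exact method (the published interval, -3.7% to 6.9%, is narrower than the crude interval below), but the noninferiority logic is the same: the upper confidence bound must stay below the prespecified margin.

```python
import math

# Illustrative Wald 90% CI for the 28-day mortality difference
# (8-day minus 15-day group), from the reported proportions and group sizes.

def risk_diff_ci(p1, n1, p2, n2, z=1.645):
    """Difference in proportions with a Wald CI (z = 1.645 gives 90%)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = risk_diff_ci(0.188, 197, 0.172, 204)
print(f"difference = {diff:.1%}, 90% CI {lo:.1%} to {hi:.1%}")
# → difference = 1.6%, 90% CI -4.7% to 7.9%
```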

Implications/Discussion:
There is no benefit to prolonging VAP treatment to 15 days (except perhaps when Pseudomonas aeruginosa is suspected based on gram stain/culture data). Shorter courses of antibiotics for VAP treatment allow for less antibiotic exposure without increasing rates of recurrent infection or mortality.

The 2016 IDSA guidelines on VAP treatment recommend a 7-day course of antimicrobials for treatment of VAP (as opposed to a longer treatment course such as 8-15 days). These guidelines are based on the IDSA’s own large meta-analysis (of 10 randomized trials, including PneumA, as well as an observational study) which demonstrated that shorter courses of antibiotics (7 days) reduce antibiotic exposure and recurrent pneumonia due to MDR organisms without affecting clinical outcomes, such as mortality. Of note, this 7-day course recommendation also applies to treatment of lactose-nonfermenting GNRs, such as Pseudomonas.

When considering the PneumA trial within the context of the newest IDSA guidelines, we see that we now have over 15 years of evidence supporting the use of shorter VAP treatment courses.

Further Reading/References:
1. 2016 IDSA Guidelines for the Management of HAP/VAP
2. PneumA @ Wiki Journal Club
3. PulmCCM “IDSA Guidelines 2016: HAP, VAP & It’s the End of HCAP as We Know It (And I Feel Fine)”
4. PulmCrit “The siren’s call: Double-coverage for ventilator associated PNA”

Summary by Liz Novick, MD

Week 31 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific-antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

The trial enrolled men ages 55-74 (excluded if history of prostate, lung, or colorectal cancer, current cancer treatment, or > 1 PSA test in the past 3 years). Patients were randomized to annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years or to usual care. The primary outcome was the prostate-cancer-attributable death rate, and the secondary outcome was the incidence of prostate cancer.

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the usual-care group were between 40-50%. By seven years, there was no significant difference in rates of death attributable to prostate cancer. There were 50 deaths in the screening group and only 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75 – 1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83–1.50). By seven years, there was a higher rate of prostate cancer detection in the screening group. 2820 patients were diagnosed in the screening group, but only 2322 were diagnosed in the usual-care group (rate ratio 1.22, 95% CI 1.16–1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11–1.22). Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.
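
As a sanity check, the reported rate ratios can be approximated from the raw counts. The sketch below is illustrative only: it uses the randomized group sizes as a crude stand-in for person-years at risk (the trial used true person-year denominators), which is why it reproduces the published ratios (1.13 and 1.22) only approximately.

```python
# Approximate event-rate ratios from the PLCO counts.

def rate_ratio(events_a, n_a, events_b, n_b):
    """Ratio of event rates between two groups."""
    return (events_a / n_a) / (events_b / n_b)

# prostate-cancer deaths at 7 years: 50 (screening) vs. 44 (usual care)
print(round(rate_ratio(50, 38343, 44, 38350), 2))      # ≈ 1.14 (reported: 1.13)
# prostate-cancer diagnoses at 7 years: 2820 vs. 2322
print(round(rate_ratio(2820, 38343, 2322, 38350), 2))  # ≈ 1.21 (reported: 1.22)
```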

In summary, yearly PSA screening increased the prostate cancer diagnosis rate but did not impact prostate-cancer mortality when compared to the standard of care. However, there were relatively high rates of PSA testing in the usual-care group (40-50%). The authors cite this finding as a probable major contributor to the lack of mortality difference. Other factors that may have biased to a null result were prior PSA testing and advances in treatments for prostate cancer during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment. Prior PSA testing likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

      • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
      • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
      • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, PSA was screened every 4 years. The authors found an increased rate of detection of prostate cancer, but, more importantly, they found that screening decreased prostate cancer mortality (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving 1.7 screening visits over 9 years). Like PLCO, this study did not report treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the pitiful mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

In the following years, the pendulum has swung back partially toward screening. In May 2018, the USPSTF released new recommendations that encourage men ages 55-69 to have an informed discussion with their physician about potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in patients over 70 years old (Grade D).

Screening for prostate cancer remains a complex and controversial topic. Guidelines from the American Cancer Society, American Urological Association, and USPSTF vary, but ultimately all recommend shared decision-making. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, Screening for Prostate Cancer

Summary by Duncan F. Moore, MD

Image Credit: Otis Brawley, Public Domain, NIH National Cancer Institute Visuals Online

Week 30 – Rifaximin Treatment in Hepatic Encephalopathy

“Rifaximin Treatment in Hepatic Encephalopathy”

N Engl J Med. 2010 Mar 25;362(12):1071-81. [free full text]

As we are well aware at Georgetown, hepatic encephalopathy (HE) is highly prevalent among patients with cirrhosis, and admissions for recurrent HE place a significant burden on the medical system. The authors of this study note that HE is thought to result from “the systemic accumulation of gut-derived neurotoxins, especially ammonia, in patients with impaired liver function and portosystemic shunting.” Lactulose is considered the standard of care for the prevention of HE. It is thought to decrease the absorption of ammonia in the gut lumen through its cathartic effects and by alteration of colonic pH. The minimally absorbable oral antibiotic rifaximin is thought to further reduce ammonia production through direct antibacterial effects within the gut lumen. Thus the authors of this pivotal 2010 study sought to determine the additive effect of daily rifaximin prophylaxis in the prevention of HE.

The study enrolled adults with cirrhosis and 2+ episodes of overt HE during the past 6 months and randomized them to 6 months of treatment with either rifaximin 550mg PO BID or matching placebo. The primary outcome was time to first breakthrough episode of HE (West Haven score of 2+, or West Haven score 0 –> 1 with worsening asterixis). Secondary outcomes included time to first hospitalization involving HE and adverse events, including those “possibly related to infection.”

299 patients were randomized: 140 to rifaximin and 159 to placebo. Baseline characteristics were similar between the two groups, and lactulose use prior to and during the study was similar in both groups at approximately 91%. Breakthrough HE occurred in 31 (22.1%) of the rifaximin patients and 73 (45.9%) of the placebo patients [HR 0.42, 95% CI 0.28-0.64, p < 0.001; absolute risk reduction 23.7%, NNT = 4.2]. This result was consistent within all tested subgroups except patients with MELD score 19-24 and patients not using lactulose at baseline. (See Figure 3.) Hospitalization involving HE occurred in 19 (13.6%) of the rifaximin patients and 36 (22.6%) of the placebo patients [HR 0.50, 95% CI 0.29-0.87, p = 0.01; absolute risk reduction 9.1%, NNT = 11.0]. There were no differences in adverse events between the two treatment groups.
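
The absolute risk reductions and NNTs follow directly from the event counts; a quick sketch (function names are my own, and the trivially small differences from the published figures reflect rounding or analysis method):

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT)
# recomputed from the reported event counts.

def arr_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """ARR = control risk - treatment risk; NNT = 1 / ARR."""
    arr = events_ctrl / n_ctrl - events_tx / n_tx
    return arr, 1 / arr

# breakthrough HE: 31/140 rifaximin vs. 73/159 placebo
arr_he, nnt_he = arr_nnt(31, 140, 73, 159)
# hospitalization involving HE: 19/140 vs. 36/159
arr_hosp, nnt_hosp = arr_nnt(19, 140, 36, 159)
print(f"HE: ARR {arr_he:.1%}, NNT {nnt_he:.1f}")  # ARR ≈ 23.8% here vs. published 23.7%; NNT 4.2
print(f"hospitalization: ARR {arr_hosp:.1%}, NNT {nnt_hosp:.1f}")  # ARR 9.1%, NNT 11.0
```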

Thus, prophylactic rifaximin reduced the incidence of recurrent HE and its resultant hospitalizations. This landmark trial showed a clear treatment benefit with implied savings in healthcare utilization costs associated with HE recurrences and hospitalizations. This marked effect was demonstrated even in the setting of relatively good (91%) lactulose adherence in both treatment arms prior to and throughout the trial. On the day this trial was published in 2010, the FDA approved rifaximin for “reduction in risk of overt hepatic encephalopathy recurrence” in adults.

Because rifaximin is not available as a generic and remains quite expensive, it is a less attractive option from an insurance company’s perspective. There is no comparable nonabsorbable antibiotic for this indication. UpToDate suggests starting with lactulose therapy and then adding a nonabsorbable antibiotic, such as rifaximin, both for the treatment of overt HE and for the prevention of recurrent HE. In practice, most insurance companies will require a prior authorization for outpatient rifaximin treatment, but in my recent experience this process has been perfunctory and easy.

Further Reading/References:
1. ClinicalTrials.gov, NCT00298038
2. FDA, NDA approval letter for Xifaxan (rifaximin)
3. UpToDate, “Hepatic encephalopathy in adults: Treatment”

Summary by Duncan F. Moore, MD

Image Credit: Centers for Disease Control and Prevention / Dr. Edwin P. Ewing, Jr., US Public Domain, via Wikimedia Commons

Week 29 – CHADS2

“Validation of Clinical Classification Schemes for Predicting Stroke”

JAMA. 2001 June 13;285(22):2864-70. [free full text]

Atrial fibrillation is the most common cardiac arrhythmia, affecting 1-2% of the overall population, with prevalence increasing with age. It also carries substantial morbidity and mortality due to the risk of stroke and thromboembolism, although embolic risk varies widely across subpopulations. In 2001, the only oral antithrombotic options available were warfarin and aspirin, which had relative risk reductions of 62% and 22%, respectively, consistent across these subpopulations. Clinicians felt that high-risk patients should be anticoagulated, but the two common classification schemes, AFI and SPAF, were flawed: patients were often classified as low risk in one scheme and high risk in the other, and the schemes had been derived retrospectively and were clinically ambiguous. Therefore, in 2001, a group of investigators combined the two existing schemes to create the CHADS2 scheme and applied it to a new data set.

Population (NRAF cohort): Hospitalized Medicare patients ages 65-95 with non-valvular AF not prescribed warfarin at hospital discharge.

Intervention: Determination of CHADS2 score (1 point for recent CHF, hypertension, age ≥ 75, and DM; 2 points for a history of stroke or TIA)

Comparison: AFI and SPAF risk schemes

Measured Outcome: Hospitalization rates for ischemic stroke (per ICD-9 codes from Medicare claims), stratified by CHADS2 / AFI / SPAF scores.

Calculated Outcome: performance of the various schemes, based on c statistic (a measure of predictive accuracy in a binary logistic regression model)

Results:
1733 patients were identified in the NRAF cohort. When compared to the AFI and SPAF trials, these patients tended to be older (81 in NRAF vs. 69 in AFI vs. 69 in SPAF), had a higher burden of CHF (56% vs. 22% vs. 21%), were more likely to be female (58% vs. 34% vs. 28%), and more often had a history of DM (23% vs. 15% vs. 15%) or prior stroke/TIA (25% vs. 17% vs. 8%). The stroke rate was lowest in the group with a CHADS2 score of 0 (1.9 per 100 patient-years, adjusting for the assumption that aspirin was not taken) and increased by a factor of approximately 1.5 for each 1-point increase in the CHADS2 score.

CHADS2 score     NRAF adjusted stroke rate (per 100 patient-years)
0                1.9
1                2.8
2                4.0
3                5.9
4                8.5
5                12.5
6                18.2
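
The scoring rule and the NRAF rate table are simple enough to encode directly; a minimal sketch (function and variable names are my own):

```python
# CHADS2 score and the NRAF adjusted stroke rates from the table above.

NRAF_STROKE_RATE = {0: 1.9, 1: 2.8, 2: 4.0, 3: 5.9, 4: 8.5, 5: 12.5, 6: 18.2}

def chads2(chf, htn, age, diabetes, prior_stroke_or_tia):
    """1 point each for recent CHF, hypertension, age >= 75, and diabetes;
    2 points for a history of stroke or TIA."""
    return sum([chf, htn, age >= 75, diabetes]) + 2 * prior_stroke_or_tia

score = chads2(chf=True, htn=True, age=80, diabetes=False, prior_stroke_or_tia=True)
print(score, NRAF_STROKE_RATE[score])   # 5 12.5  (strokes per 100 patient-years)
```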

The CHADS2 scheme had a c statistic of 0.82 compared to 0.68 for the AFI scheme and 0.74 for the SPAF scheme.
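
For intuition, the c statistic is equivalent to the probability that a randomly chosen patient who had the event receives a higher score than a randomly chosen patient who did not, with ties counting half. A minimal sketch, using hypothetical scores rather than any NRAF data:

```python
# Concordance (c) statistic computed pairwise; O(n*m), fine for a sketch.

def c_statistic(scores_events, scores_nonevents):
    """Probability an event patient outranks a non-event patient (ties = 0.5)."""
    concordant = ties = 0
    for e in scores_events:
        for ne in scores_nonevents:
            if e > ne:
                concordant += 1
            elif e == ne:
                ties += 1
    return (concordant + 0.5 * ties) / (len(scores_events) * len(scores_nonevents))

# hypothetical CHADS2 scores for patients with and without subsequent stroke
print(c_statistic([4, 3, 5, 2], [1, 2, 0, 3]))   # → 0.875
```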

Implication/Discussion
The CHADS2 scheme provides clinicians with a scoring system to help guide decision making for anticoagulation in patients with non-valvular AF.

The authors note that the application of the CHADS2 score could be useful in several clinical scenarios. First, it easily identifies patients at low risk of stroke (CHADS2 = 0) for whom anticoagulation with warfarin would probably not provide significant benefit. The authors argue that these patients should merely be offered aspirin. Second, the CHADS2 score could facilitate medication selection based on a patient-specific risk of stroke. Third, the CHADS2 score could help clinicians make decisions regarding anticoagulation in the perioperative setting by evaluating the risk of stroke against the hemorrhagic risk of the procedure. Although the CHADS2 is no longer the preferred risk-stratification scheme, the same concepts are still applicable to the more commonly used CHA2DS2-VASc.

This study had several strengths. First, the cohort was from seven states that represented all geographic regions of the United States. Second, CHADS2 was pre-specified based on previous studies and validated using the NRAF data set. Third, the NRAF data set was obtained from actual patient chart review as opposed to purely from an administrative database. Finally, the NRAF patients were older and sicker than those of the AFI and SPAF cohorts, and thus the CHADS2 appears to be generalizable to the very large demographic of frail, elderly Medicare patients.

As CHADS2 became widely used clinically in the early 2000s, its application to other cohorts generated a large intermediate-risk group (CHADS2 = 1), which was sometimes > 60% of the cohort (though in the NRAF cohort, CHADS2 = 1 accounted for 27% of the cohort). In clinical practice, this intermediate-risk group was to be offered either warfarin or aspirin. Clearly, a clinical-risk predictor that does not provide clear guidance in over 50% of patients needs to be improved. As a result, the CHA2DS2-VASc scoring system was developed from the Birmingham 2009 scheme. When compared head-to-head in registry data, CHA2DS2-VASc more effectively discriminated stroke risk among patients with a baseline CHADS2 score of 0 to 1. Because of this, CHA2DS2-VASc is the recommended risk stratification scheme in the most recent AHA/ACC/HRS guidelines. In modern practice, anticoagulation is unnecessary when CHA2DS2-VASc score = 0, should be considered (vs. antiplatelet or no treatment) when score = 1, and is recommended when score ≥ 2.

Further Reading:
1. 2019 AHA/ACC/HRS Focused Update of the 2014 AHA/ACC/HRS Guideline for the Management of Patients With Atrial Fibrillation
2. CHA2DS2-VASc in Chest (2010)
3. CHADS2 @ 2 Minute Medicine

Summary by Ryan Commins, MD

Image Credit: Alisa Machalek, NIGMS/NIH – National Institute of General Medical Sciences, Public Domain, via Wikimedia Commons

Week 28 – FACT

“Febuxostat Compared with Allopurinol in Patients with Hyperuricemia and Gout”

aka the Febuxostat versus Allopurinol Controlled Trial (FACT)

N Engl J Med. 2005 Dec 8;353(23):2450-61. [free full text]

Gout is thought to affect approximately 3% of the US population, and its prevalence appears to be rising. Gout occurs due to precipitation of monosodium urate crystals from supersaturated body fluids. Generally, the limit of solubility is 6.8 mg/dL, but local factors such as temperature, pH, and other solutes can lower this threshold. A critical element in the treatment of gout is the lowering of the serum urate concentration below the limit of solubility, and generally, the accepted target is 6.0 mg/dL. The xanthine oxidase inhibitor allopurinol is the most commonly used urate-lowering pharmacologic therapy. Allopurinol rarely can have severe or life-threatening side effects, particularly among patients with renal impairment. Thus drug companies have sought to bring to market other xanthine oxidase inhibitors such as febuxostat (trade name Uloric). In this chronic and increasingly burdensome disease, a more efficacious drug with fewer exclusion criteria and fewer side effects would be a blockbuster.

The study enrolled adults with gout and a serum urate concentration of ≥ 8.0 mg/dL. Exclusion criteria included serum Cr ≥ 1.5 mg/dL or eGFR < 50 ml/min (a relative contraindication to allopurinol use) as well as the presence of various conditions or use of various drugs that would affect urate metabolism and/or clearance of the trial drugs. (Patients already on urate-lowering therapy were given a two-week washout period prior to randomization.) Patients were randomized to treatment for 52 weeks with either febuxostat 80mg PO daily, febuxostat 120mg PO daily, or allopurinol 300mg PO daily. Because the initiation of urate-lowering therapy places patients at increased risk of gout flares, patients were placed on prophylaxis with either naproxen 250mg PO BID or colchicine 0.6mg PO daily for the first 8 weeks of the study. The primary endpoint was a serum urate level of < 6.0 mg/dL at weeks 44, 48, and 52. Selected secondary endpoints included percentage reduction in serum urate from baseline at each visit, percentage reduction in the area of a selected tophus, and incidence of acute gout flares requiring treatment.

762 patients were randomized. Baseline characteristics were similar among all three groups. A majority of the patients were white males age 50+ who drank alcohol. The average serum urate was slightly less than 10 mg/dL. The primary endpoint (urate < 6.0 mg/dL at the last three monthly measurements) was achieved in 53% of patients taking febuxostat 80mg, 62% of patients taking febuxostat 120mg, and 21% of patients taking allopurinol 300mg (p < 0.001 for each febuxostat group versus allopurinol). Regarding selected secondary endpoints:

1) The percent reduction in serum urate from baseline at the final visit was 44.73 ± 19.10 in the febuxostat 80mg group, 52.52 ± 19.91 in the febuxostat 120mg group, and 32.99 ± 15.33 in the allopurinol 300mg group (p < 0.001 for each febuxostat group versus allopurinol, and p < 0.001 for febuxostat 80mg versus 120mg).

2) The percentage reduction in area of a single selected tophus was assessed in the 156 patients who had tophi at baseline. At week 52, the median percentage reduction in tophus area was 83% in the febuxostat 80mg group, 66% in the febuxostat 120mg group, and 50% in the allopurinol group (no statistical difference per the authors; p values not reported). Additionally, there was no significant reduction in tophus count in any of the groups.

3) During weeks 1-8 (during which acute gout flare prophylaxis was scheduled), 36% of patients in the febuxostat 120mg group sustained a flare, versus only 22% of the febuxostat 80mg group and 21% of the allopurinol group (p < 0.001 for both pairwise comparisons versus febuxostat 120mg). During weeks 9-52 (after scheduled prophylaxis had ended), a similar proportion of patients in each treatment group sustained an acute flare of gout (64% in the febuxostat 80mg group, 70% in the febuxostat 120mg group, and 64% in the allopurinol group).

Finally, the incidence of treatment-related adverse events was similar among all three groups (see Table 3). Treatment was discontinued most frequently in the febuxostat 120mg group (98 patients, versus 88 in the febuxostat 80mg group and 66 in the allopurinol group; p = 0.003 for the comparison between febuxostat 120mg and allopurinol).

In summary, this large RCT of urate-lowering therapy among gout patients found that febuxostat, dosed at either 80mg or 120mg PO daily, was more efficacious than allopurinol 300mg in reducing serum urate to below 6.0 mg/dL. Febuxostat was not superior to allopurinol with respect to the tested clinical outcomes of tophus size reduction, tophus count, and acute gout flares. Safety profiles were similar among the three regimens.

The authors note that the incidence of gout flares during and after the prophylaxis phase of the study “calls attention to a well-described paradox with important implications for successful management of gout: the risk of acute gout flares is increased early in the course of urate-lowering treatment” and the authors suggest that there is “a role for more sustained prophylaxis during the initiation of urate-lowering therapy than was provided here” (2458).

A limitation of this study is that its comparator, allopurinol 300mg PO daily, may not have represented optimal use of the drug. Allopurinol should be uptitrated every 2-4 weeks to the minimum dose required to maintain the goal serum urate of < 6.0 mg/dL (< 5.0 if tophi are present). According to UpToDate, “a majority of gout patients require doses of allopurinol exceeding 300 mg/day in order to maintain serum urate < 6.0 mg/dL.” In the United States, allopurinol is approved at doses of up to 800 mg daily. The authors state that “titration of allopurinol would have compromised the blinding of the study” (2459), but this is not true: blinded, protocolized titration of study or comparator drugs has been performed in numerous other RCTs and could have been achieved simply at greater cost to and effort from the study sponsor (which happens to be the drug company TAP Pharmaceuticals). Notably, such titration likely would have shifted the results toward a null effect. Another limitation is the relatively short duration of the trial; follow-up may have been insufficient to establish superiority in clinical outcomes, given the chronic nature of the disease.

In the UK, the National Institute for Health and Care Excellence (NICE), the agency tasked with assessing cost-effectiveness of various medical therapies, recommended as of 2008 that febuxostat be used for the treatment of hyperuricemia in gout “only for people who are intolerant of allopurinol or for whom allopurinol is contraindicated.”

Of note, a recent study funded by Takeda Pharmaceuticals demonstrated the non-inferiority of febuxostat relative to allopurinol with respect to rates of adverse cardiovascular events in patients with gout and major pre-existing cardiovascular conditions.

Allopurinol started at 100mg PO daily and titrated gradually to goal serum urate is the current general practice in the US. However, patients of Chinese, Thai, Korean, or “another ethnicity with similarly increased frequency of HLA-B*5801” should be tested for HLA-B*5801 prior to initiation of allopurinol therapy, as those patients are at increased risk of a severe cutaneous adverse reaction to allopurinol.

Further Reading/References:
1. FACT @ ClinicalTrials.gov
2. UpToDate “Pharmacologic urate-lowering therapy and treatment of tophi in patients with gout”
3. NICE: “Febuxostat for the management of hyperuricemia in people with gout”
4. “Cardiovascular Safety of Febuxostat or Allopurinol in Patients with Gout.” N Engl J Med. 2018 Mar 29;378(13):1200-1210.

Summary by Duncan F. Moore, MD

Image Credit: James Gillray, US Public Domain, via Wikimedia Commons

Week 27 – ELITE-Symphony

“Reduced Exposure to Calcineurin Inhibitors in Renal Transplantation”

by the Efficacy Limiting Toxicity Elimination (ELITE)-Symphony investigators

N Engl J Med. 2007 Dec 20;357(25):2562-75. [free full text]

A maintenance immunosuppressive regimen following kidney transplantation must balance the benefit of immune tolerance of the transplanted kidney against the adverse effects of the immunosuppressive regimen. Calcineurin inhibitors, such as cyclosporine (CsA) and tacrolimus, are nephrotoxic and can cause long-term renal dysfunction. They can also cause neurologic and infectious complications. At the time of this study, tacrolimus had only recently been introduced but already appeared to be better than CsA at preventing acute rejection. Sirolimus, an mTOR inhibitor, is notable for causing delayed wound healing, among other adverse effects. The goal of the ELITE-Symphony study was to directly compare two dosing regimens of CsA (standard- and low-dose) versus low-dose tacrolimus versus low-dose sirolimus, all on background mycophenolate mofetil (MMF) and prednisone, in order to determine which of these immunosuppressive regimens offered the least nephrotoxicity, the most effective prevention of rejection, and the fewest other adverse effects.

The trial enrolled adults aged 18-75 scheduled to receive kidney transplants. There was a detailed set of exclusion criteria, including the need for treatment with immunosuppressants outside of the aforementioned regimens, specific poor prognostic factors regarding the allograft match or donor status, and specific comorbid or past medical conditions of the recipients. Patients were randomized open-label to one of four immunosuppressive treatment regimens in addition to MMF 2 gm daily and corticosteroids (“according to practice at the center” but with a pre-specified taper of minimum maintenance doses): 1) standard-dose CsA (target trough 150-300 ng/mL x3 months, then target trough 100-200 ng/mL), 2) daclizumab induction accompanied by low-dose cyclosporine (target trough 50-100 ng/mL), 3) daclizumab induction accompanied by low-dose tacrolimus (target trough 3-7 ng/mL), and 4) daclizumab induction accompanied by low-dose sirolimus (target trough 4-8 ng/mL). The primary endpoint was the eGFR at 12 months after transplantation. Secondary endpoints included acute rejection, incidence of delayed allograft function, and frequency of treatment failure (defined as use of additional immunosuppressive medication, discontinuation of any study medication for > 14 consecutive days or > 30 cumulative days, allograft loss, or death) within the first 12 months.
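One component of the treatment-failure definition above (discontinuation of any study medication for > 14 consecutive days or > 30 cumulative days) reduces to a simple check. This is our illustration of the stated rule, not trial code:

```python
# Illustrative check of the discontinuation component of "treatment failure"
# as defined above: any single gap > 14 consecutive days, or > 30 cumulative
# days off any study medication, within the first 12 months.
def discontinuation_failure(gap_days: list[int]) -> bool:
    """gap_days: lengths (in days) of each discontinuation episode."""
    return max(gap_days, default=0) > 14 or sum(gap_days) > 30

print(discontinuation_failure([10, 12]))      # → False (max 12, total 22 days)
print(discontinuation_failure([15]))          # → True  (one gap > 14 days)
print(discontinuation_failure([10, 10, 12]))  # → True  (32 cumulative days)
```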

1645 patients were randomized. There were no significant differences in baseline characteristics among the four treatment groups. At 12 months following transplantation, mean eGFR differed among the four groups (p < 0.001). Low-dose tacrolimus patients had an eGFR of 65.4 ± 27.0 ml/min, while standard-dose cyclosporine patients had an eGFR of 57.1 ± 25.1 ml/min (p < 0.001 for pairwise comparison with tacrolimus), low-dose cyclosporine patients 59.4 ± 25.1 ml/min (p = 0.001 for pairwise comparison with tacrolimus), and low-dose sirolimus patients 56.7 ± 26.9 ml/min (p < 0.001 for pairwise comparison with tacrolimus). The incidence of biopsy-proven acute rejection (excluding borderline values) at 6 months was only 11.3% in the low-dose tacrolimus group, versus 24.0% in the standard-dose cyclosporine group, 21.9% in the low-dose cyclosporine group, and 35.3% in the low-dose sirolimus group (p < 0.001 for each pairwise comparison with tacrolimus). Values were similar in magnitude and proportionality at 12-month follow-up. Delayed allograft function (among recipients of a deceased-donor kidney) was lowest in the sirolimus group at 21.1%, versus 35.7% in the low-dose tacrolimus group (p = 0.001 for pairwise comparison with sirolimus), 33.6% in the standard-dose cyclosporine group (p = 0.73 for pairwise comparison with tacrolimus), and 32.4% in the low-dose cyclosporine group (p = 0.51 for pairwise comparison with tacrolimus). Treatment failure occurred in 12.2% of the low-dose tacrolimus group, 22.8% of the standard-dose cyclosporine group (p < 0.001 for pairwise comparison with tacrolimus), 20.1% of the low-dose cyclosporine group (p = 0.003 for pairwise comparison with tacrolimus), and 35.8% of the low-dose sirolimus group (p < 0.001 for pairwise comparison with tacrolimus).
Regarding safety events, the incidence of new-onset diabetes after transplantation (NODAT) at 12 months was highest among the low-dose tacrolimus group at 10.6% but only 6.4% among the standard-dose cyclosporine group, 4.7% among the low-dose cyclosporine group, and 7.8% among the low-dose sirolimus group (p = 0.02 for between-group difference per log-rank test). Opportunistic infections were most common in the standard-dose cyclosporine group at 33% (p = 0.03 for between-group difference per log-rank test).

In summary, the post-kidney transplant immunosuppression maintenance regimen with low-dose tacrolimus was superior to the standard- and low-dose cyclosporine regimens and the sirolimus regimen with respect to renal function at 12 months, acute rejection at 6 and 12 months, and treatment failure during follow-up. However, this improved performance came at the cost of a higher rate of new-onset diabetes after transplantation. The findings of this study were integral to the 2009 KDIGO Clinical Practice Guideline for the Care of Kidney Transplant Recipients, which recommends maintenance with a calcineurin inhibitor (tacrolimus first-line), an antiproliferative agent (MMF first-line), and corticosteroids (discontinuation within 1 week can be considered in the relatively few patients at low immunologic risk for acute rejection, though expert opinion at UpToDate disagrees with this recommendation).

Further Reading/References:
1. ELITE-Symphony @ Wiki Journal Club
2. “The ELITE & the Rest in Kidney Transplantation.” Renal Fellow Network.
3. “HARMONY: Is it safe to withdraw steroids early after kidney transplant?” NephJC
4. 2009 KDIGO Clinical Practice Guideline for the Care of Kidney Transplant Recipients
5. “Maintenance immunosuppressive therapy in kidney transplantation in adults.” UpToDate

Summary by Duncan F. Moore, MD

Image Credit: Rmarlin, CC BY-SA 4.0, via Wikimedia Commons

Week 26 – ARISTOTLE

“Apixaban versus Warfarin in Patients with Atrial Fibrillation”

N Engl J Med. 2011 Sep 15;365(11):981-92. [free full text]

Prior to the development of the DOACs, warfarin was the standard of care for the reduction of risk of stroke in atrial fibrillation. Drawbacks of warfarin include a narrow therapeutic range, numerous drug and dietary interactions, the need for frequent monitoring, and elevated bleeding risk. Around 2010, the definitive RCTs for the oral direct thrombin inhibitor dabigatran (RE-LY) and the oral factor Xa inhibitor rivaroxaban (ROCKET AF) showed equivalence or superiority to warfarin. Shortly afterward, the ARISTOTLE trial demonstrated the superiority of the oral factor Xa inhibitor apixaban (Eliquis).

The trial enrolled patients with atrial fibrillation or flutter with at least one additional risk factor for stroke (age 75+, prior CVA/TIA, symptomatic CHF, or reduced LVEF). Notably, patients with Cr > 2.5 mg/dL were excluded. Patients were randomized to treatment with either apixaban BID + placebo warfarin daily (a reduced 2.5mg apixaban dose was given to patients with 2 or more of the following: age 80+, weight < 60 kg, Cr > 1.5 mg/dL) or placebo apixaban BID + warfarin daily. The primary efficacy outcome was the incidence of stroke, and the primary safety outcome was “major bleeding” (clinically overt and accompanied by a Hgb drop of ≥ 2 g/dL, “occurring at a critical site,” or resulting in death). Secondary outcomes included all-cause mortality and a composite of major bleeding and “clinically relevant non-major bleeding.”
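The dose-reduction rule above can be expressed as a small helper. This is a sketch of the protocol logic as described in the text; the exact comparison operators and units are our assumptions, and it is of course an illustration, not clinical guidance:

```python
# Illustrative encoding of the dose-reduction rule described above: the
# reduced 2.5mg BID apixaban dose applies when at least 2 of
# {age 80+, weight < 60 kg, Cr > 1.5 mg/dL} are present. Thresholds follow
# the text; treat operators and units as assumptions, not a protocol copy.
def reduced_apixaban_dose(age_years: int, weight_kg: float, cr_mg_dl: float) -> bool:
    criteria_met = sum([age_years >= 80, weight_kg < 60, cr_mg_dl > 1.5])
    return criteria_met >= 2

print(reduced_apixaban_dose(82, 58, 1.1))  # → True  (age and weight criteria met)
print(reduced_apixaban_dose(70, 80, 1.6))  # → False (only the Cr criterion met)
```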

9120 patients were assigned to the apixaban group, and 9081 were assigned to the warfarin group. The mean CHADS2 score was 2.1. Fewer patients in the apixaban group discontinued their assigned study drug. The median duration of follow-up was 1.8 years. The incidence of stroke was 1.27% per year in the apixaban group vs. 1.60% per year in the warfarin group (HR 0.79, 95% CI 0.66-0.95, p < 0.001). This reduction was consistent across all major subgroups (see Figure 2). Notably, the rate of hemorrhagic stroke was 49% lower in the apixaban group, and the rate of ischemic stroke was 8% lower. All-cause mortality was 3.52% per year in the apixaban group vs. 3.94% per year in the warfarin group (HR 0.89, 95% CI 0.80-0.999, p = 0.047). The incidence of major bleeding was 2.13% per year in the apixaban group vs. 3.09% per year in the warfarin group (HR 0.69, 95% CI 0.60-0.80, p < 0.001). The rate of intracranial hemorrhage was 0.33% per year in the apixaban group vs. 0.80% per year in the warfarin group (HR 0.42, 95% CI 0.30-0.58, p < 0.001). The rate of any bleeding was 18.1% per year in the apixaban group vs. 25.8% per year in the warfarin group (p < 0.001).
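As a back-of-the-envelope illustration (not a statistic reported by the trial), the annual event rates above can be converted into absolute risk reductions and numbers needed to treat for one year of therapy:

```python
# Convert two annual event rates (in percent) into the number needed to
# treat (NNT) for one year, via the absolute risk reduction (ARR).
# Input rates are the percentages quoted above; the NNT framing is ours.
def nnt_per_year(control_rate_pct: float, treatment_rate_pct: float) -> float:
    arr = (control_rate_pct - treatment_rate_pct) / 100  # absolute risk reduction
    return 1 / arr

# Stroke: 1.60%/yr (warfarin) vs 1.27%/yr (apixaban)
print(round(nnt_per_year(1.60, 1.27)))  # → 303
# Major bleeding: 3.09%/yr (warfarin) vs 2.13%/yr (apixaban)
print(round(nnt_per_year(3.09, 2.13)))  # → 104
```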

In patients with non-valvular atrial fibrillation and at least one other risk factor for stroke, anticoagulation with apixaban significantly reduced the risk of stroke, major bleeding, and all-cause mortality relative to anticoagulation with warfarin. This large RCT was designed and powered to demonstrate non-inferiority but in fact demonstrated the superiority of apixaban. Along with ROCKET AF and RE-LY, the ARISTOTLE trial ushered in the modern era of DOACs in atrial fibrillation. Apixaban was approved by the FDA for the treatment of non-valvular atrial fibrillation in 2012. Patient out-of-pocket cost is no longer a major barrier to prescribing: all three of these major DOACs are preferred in the DC Medicaid formulary (see page 13). To date, no trial has compared the various DOACs directly.

Further Reading/References:
1. ARISTOTLE @ Wiki Journal Club
2. ARISTOTLE @ 2 Minute Medicine
3. “Oral anticoagulants for prevention of stroke in atrial fibrillation: systematic review, network meta-analysis, and cost-effectiveness analysis,” BMJ 2017

Summary by Duncan F. Moore, MD

Week 25 – The Oregon Experiment

“The Oregon Experiment – Effects of Medicaid on Clinical Outcomes”

N Engl J Med. 2013 May 2;368(18):1713-22. [free full text]

Access to health insurance is not synonymous with access to healthcare. However, it has been generally assumed that increased access to insurance should improve healthcare outcomes among the newly insured. In 2008, Oregon expanded its Medicaid program by approximately 30,000 enrollees, with slots allocated by lottery among approximately 90,000 applicants. The authors of the Oregon Health Study Group sought to study the impact of this “randomized” intervention, and the results were hotly anticipated given the impending Medicaid expansion of the 2010 PPACA.

Population: Portland, Oregon residents who applied for the 2008 Medicaid expansion

Not all applicants were actually eligible.

Eligibility criteria: age 19-64, US citizen, Oregon resident, ineligible for other public insurance, uninsured for the previous 6 months, income below 100% of the federal poverty level, and assets < $2000.

Intervention: winning the Medicaid-expansion lottery

Comparison: The statistical analyses of clinical outcomes in this study do not actually compare winners to non-winners. Instead, they compare non-winners to winners who ultimately received Medicaid coverage. Winning the lottery increased the chance of being enrolled in Medicaid by about 25 percentage points. Given the assumption that “the lottery affected outcomes only by changing Medicaid enrollment, the effect of being enrolled in Medicaid was simply about 4 times…as high as the effect of being able to apply for Medicaid.” This allowed the authors to draw causal inferences regarding the benefits of new Medicaid coverage.
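The scaling argument above is the standard instrumental-variable (“Wald”) estimate. A minimal sketch of the arithmetic, with the ~25-percentage-point first stage taken from the text and the outcome effect left as a placeholder:

```python
# Illustrative Wald / instrumental-variable scaling as described above:
# the per-enrollee effect of Medicaid equals the intention-to-treat effect
# of winning the lottery divided by the change in enrollment probability
# ("first stage") that winning caused. itt_effect here is a placeholder,
# not a value from the study.
def wald_estimate(itt_effect: float, first_stage: float) -> float:
    return itt_effect / first_stage

# Winning raised enrollment probability by ~0.25, so any outcome effect of
# winning is scaled by about 1/0.25 = 4 to estimate the effect of enrollment.
print(wald_estimate(1.0, 0.25))  # → 4.0
```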

Outcomes:
Values or point prevalence of the following at approximately 2 years post-lottery:

      1. blood pressure, diagnosis of hypertension
      2. cholesterol levels, diagnosis of hyperlipidemia
      3. HgbA1c, diagnosis of diabetes
      4. Framingham risk score for cardiovascular events
      5. positive depression screen, depression dx after lottery, antidepressant use
      6. health-related quality of life measures
      7. measures of financial hardship (e.g. catastrophic expenditures)
      8. measures of healthcare utilization (e.g. estimated total annual expenditure)

These outcomes were assessed via in-person interviews, assessment of blood pressure, and a blood draw for biomarkers.

Results:
The study population included 10,405 lottery winners and 10,340 non-winners. Interviews were performed ~25 months after the lottery. While there were no significant differences in baseline characteristics between winners and non-winners, “the subgroup of lottery winners who ultimately enrolled in Medicaid was not comparable to the overall group of persons who did not win the lottery” (no demographic or other data provided).

At approximately 2 years following the lottery, there were no differences in blood pressure or prevalence of diagnosed hypertension between the lottery non-winners and those who enrolled in Medicaid. There were also no differences between the groups in cholesterol values, prevalence of diagnosed hypercholesterolemia after the lottery, or use of medications for high cholesterol. While more Medicaid enrollees were diagnosed with diabetes after the lottery (absolute increase of 3.8 percentage points, 95% CI 1.93-5.73, p < 0.001; prevalence 1.1% in non-winners) and were more likely to be using medications for diabetes than the non-winners (absolute increase of 5.43 percentage points, 95% CI 1.39-9.48, p = 0.008), there was no statistically significant difference in HgbA1c values between the two groups. Medicaid coverage did not significantly alter 10-year Framingham cardiovascular event risk. At follow-up, fewer Medicaid-enrolled patients screened positive for depression (decrease of 9.15 percentage points, 95% CI -16.70 to -1.60, p = 0.02), while more had been formally diagnosed with depression during the interval since the lottery (absolute increase of 3.81 percentage points, 95% CI 0.15-7.46, p = 0.04). There was no significant difference in the prevalence of antidepressant use.

Medicaid-enrolled patients were more likely to report that their health was the same or better than 1 year prior (increase of 7.84 percentage points, 95% CI 1.45-14.23, p = 0.02). There were no significant differences in scores for quality of life related to physical health or in self-reported levels of pain or global happiness. As seen in Table 4, Medicaid enrollment was associated with decreased out-of-pocket spending (15% had a decrease; average decrease $215), decreased prevalence of medical debt, and decreased prevalence of catastrophic expenditures (absolute decrease of 4.48 percentage points, 95% CI -8.26 to -0.69, p = 0.02).

Medicaid-enrolled patients were prescribed more drugs and had more office visits but no change in number of ED visits or hospital admissions. Medicaid coverage was estimated to increase total annual medical spending by $1,172 per person (an approximately 35% increase). Of note, patients enrolled in Medicaid were more likely to have received a pap smear or mammogram during the study period.

Implication/Discussion:
This study was the first major study to “randomize” health insurance coverage and study the health outcome effects of gaining insurance.

Overall, this study demonstrated that obtaining Medicaid coverage “increased overall health care utilization, improved self-reported health, and reduced financial strain.” However, its effects on patient-level health outcomes were much more muted. Medicaid coverage did not impact the prevalence or severity of hypertension or hyperlipidemia. Medicaid coverage appeared to aid in the detection of diabetes mellitus and the use of antihyperglycemics but did not affect average A1c. Accordingly, there was no significant difference in Framingham risk score between the two groups.

The glaring limitation of this study was that its statistical analyses compared two groups with unequal baseline characteristics, despite the purported “randomization” of the lottery. Effectively, by comparing Medicaid enrollees (and not all lottery winners) to the lottery non-winners, the authors failed to perform an intention-to-treat analysis. This design engendered significant confounding, and it is remarkable that the authors did not even attempt to report baseline characteristics among the final two groups, let alone control for any such differences in their final analyses. Furthermore, the fact that not all reported analyses were pre-specified raises suspicion of post hoc data dredging for statistically significant results (“p-hacking”). Overall, power was limited in this study due to the low prevalence of the conditions studied.

Contemporary analysis of this study, both within medicine and within the political sphere, was widely divergent. Medicaid-expansion proponents noted that new access to Medicaid provided a critical financial buffer from potentially catastrophic medical expenditures and allowed increased access to care (as measured by clinic visits, medication use, etc.), while detractors noted that, despite this costly program expansion and fine-toothed analysis, little hard-outcome benefit was realized during the (admittedly limited) follow-up at two years.

Access to insurance is only the starting point in improving the health of the poor. The authors note that “the effects of Medicaid coverage may be limited by the multiple sources of slippage…[including] access to care, diagnosis of underlying conditions, prescription of appropriate medications, compliance with recommendations, and effectiveness of treatment in improving health.”

Further Reading/References:
1. Baicker et al. (2013), “The Impact of Medicaid on Labor Force Activity and Program Participation: Evidence from the Oregon Health Insurance Experiment”
2. Taubman et al. (2014), “Medicaid Increases Emergency-Department Use: Evidence from Oregon’s Health Insurance Experiment”
3. The Washington Post, “Here’s what the Oregon Medicaid study really said” (2013)
4. Michael Cannon, “Oregon Study Throws a Stop Sign in Front of ObamaCare’s Medicaid Expansion”
5. HealthAffairs Policy Brief, “The Oregon Health Insurance Experiment”
6. The Oregon Health Insurance Experiment

Summary by Duncan F. Moore, MD

Image Credit: Centers for Medicare and Medicaid Services, Public Domain, via Wikimedia Commons