Week 34 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

The trial enrolled men ages 55-74 (excluded if hx prostate, lung, or colorectal cancer, current cancer treatment, or > 1 PSA test in the past 3 years). Patients were randomized to annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years or to usual care. The primary outcome was the prostate-cancer-attributable death rate, and the secondary outcome was the incidence of prostate cancer.

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the usual-care group were between 40% and 50%. By seven years, there was no significant difference in rates of death attributable to prostate cancer: 50 deaths in the screening group versus 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75–1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83–1.50). By seven years, the rate of prostate cancer detection was higher in the screening group: 2820 diagnoses versus 2322 in the usual-care group (rate ratio 1.22, 95% CI 1.16–1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11–1.22). Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.
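
As a rough check on these figures, the reported rate ratios can be approximated directly from the death counts by assuming near-equal person-time in the two arms (reasonable given 1:1 randomization and similar follow-up) and using a standard Poisson approximation for the confidence interval. The sketch below is illustrative arithmetic only, not the trial's actual person-time analysis.

```python
import math

def rate_ratio_ci(events_a, events_b, z=1.96):
    """Crude rate ratio (arm A vs. arm B) assuming equal person-time,
    with a Poisson-approximation 95% CI computed on the log scale."""
    rr = events_a / events_b
    se_log_rr = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# PLCO prostate-cancer deaths at 7 years: 50 (screening) vs. 44 (usual care)
rr, lo, hi = rate_ratio_ci(50, 44)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # ~1.14 (0.76-1.70) vs. reported 1.13 (0.75-1.70)
```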

In summary, yearly PSA screening increased the rate of prostate cancer diagnosis but did not reduce prostate-cancer mortality when compared to the standard of care. However, rates of PSA testing in the usual-care group were relatively high (40-52%), and the authors cite this contamination as a probable major contributor to the lack of mortality difference. Other factors that may have biased the trial toward a null result were prior PSA testing and advances in treatments for prostate cancer during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment; such pre-screening likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

      • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
      • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
      • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, men underwent PSA screening every 4 years on average. The authors found an increased rate of prostate cancer detection but, more importantly, a decrease in prostate-cancer mortality with screening (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving an average of 1.7 screening visits over 9 years). Like PLCO, this study did not report the treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the marginal mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

In the following years, the pendulum has swung back partially toward screening. In May 2018, the USPSTF released new recommendations that encourage men ages 55-69 to have an informed discussion with their physician about the potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in men aged 70 and older (Grade D).

Screening for prostate cancer remains a complex and controversial topic. Guidelines from the American Cancer Society, American Urological Association, and USPSTF vary, but ultimately all recommend shared decision-making. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. PLCO @ 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, “Screening for Prostate Cancer”

Summary by Duncan F. Moore, MD

Image Credit: Otis Brawley, Public Domain, NIH National Cancer Institute Visuals Online

Week 33 – Varenicline vs. Bupropion and Placebo for Smoking Cessation

“Varenicline, an α4β2 Nicotinic Acetylcholine Receptor Partial Agonist, vs Sustained-Release Bupropion and Placebo for Smoking Cessation”

JAMA. 2006 Jul 5;296(1):47-55. [free full text]

Assisting our patients in smoking cessation is a fundamental aspect of outpatient internal medicine. At the time of this trial, the only approved pharmacotherapies for smoking cessation were nicotine replacement therapy and bupropion. As the α4β2 nicotinic acetylcholine receptor (nAChR) was thought to be crucial to the reinforcing effects of nicotine, it was hypothesized that a partial agonist for this receptor could yield sufficient effect to satiate cravings and minimize withdrawal symptoms while also limiting the reinforcing effects of exogenous nicotine. Thus Pfizer designed this large phase 3 trial to test the efficacy of its new α4β2 nAChR partial agonist varenicline (Chantix) against the only other non-nicotine pharmacotherapy available at the time (bupropion) as well as placebo.

The trial enrolled adult smokers (10+ cigarettes per day) with fewer than three months of smoking abstinence in the past year (notable exclusion criteria included numerous psychiatric and substance use comorbidities). Patients were randomized to 12 weeks of treatment with either varenicline uptitrated by day 8 to 1mg BID, bupropion SR uptitrated by day 4 to 150mg BID, or placebo BID. Patients were also given a smoking cessation self-help booklet at the index visit and encouraged to set a quit date of day 8. Patients were followed at weekly clinic visits for the first 12 weeks (treatment duration) and then a mixture of clinic and phone visits for weeks 13-52. Non-smoking status during follow-up was determined by patient self-report combined with exhaled carbon monoxide < 10ppm. The primary endpoint was the 4-week continuous abstinence rate for study weeks 9-12 (as confirmed by exhaled CO level). Secondary endpoints included the continuous abstinence rate for weeks 9-24 and for weeks 9-52.

1025 patients were randomized. Compliance was similar among the three groups, and the median duration of treatment was 84 days. Loss to follow-up was similar among the three groups. CO-confirmed continuous abstinence during weeks 9-12 was 44.0% in the varenicline group vs. 17.7% in the placebo group (OR 3.85, 95% CI 2.70–5.50, p < 0.001) and 29.5% in the bupropion group (OR for varenicline vs. bupropion 1.93, 95% CI 1.40–2.68, p < 0.001; OR for bupropion vs. placebo 2.00, 95% CI 1.38–2.89, p < 0.001). Continuous abstinence for weeks 9-24 was 29.5% in the varenicline group vs. 10.5% in the placebo group (p < 0.001) and 20.7% in the bupropion group (p = 0.007). Continuous abstinence rates for weeks 9-52 were 21.9% in the varenicline group vs. 8.4% in the placebo group (p < 0.001) and 16.1% in the bupropion group (p = 0.057). Subgroup analysis of the primary outcome by sex did not reveal significant differences in drug efficacy between men and women.
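
For intuition about the odds ratios above, they can be approximated from the raw abstinence percentages. The minimal sketch below computes crude ORs; these differ slightly from the published values, which came from the trial's statistical model.

```python
def odds_ratio(p1, p0):
    """Crude odds ratio comparing two event proportions."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# CO-confirmed continuous abstinence, weeks 9-12
print(round(odds_ratio(0.440, 0.177), 2))  # ~3.65 crude (reported model-based OR 3.85)
print(round(odds_ratio(0.440, 0.295), 2))  # ~1.88 crude (reported 1.93)
print(round(odds_ratio(0.295, 0.177), 2))  # ~1.95 crude (reported 2.00)
```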

This study demonstrated that varenicline was superior to both placebo and bupropion in facilitating smoking cessation at up to 24 weeks. Beyond 24 weeks, varenicline remained superior to placebo but was similarly efficacious to bupropion. This was a well-designed and well-executed large, double-blind, placebo- and active-treatment-controlled multicenter US trial. The trial was completed in April 2005, and a new drug application for varenicline (Chantix) was submitted to the FDA in November 2005. Of note, an “identically designed” (per this study’s authors), manufacturer-sponsored phase 3 trial was performed in parallel and reported very similar results in the same July 2006 issue of JAMA (PMID: 16820547) as the above study by Gonzales et al. These robust, positive pre-approval trials of varenicline helped the drug rapidly obtain approval in May 2006.

Per expert opinion at UpToDate, varenicline remains a preferred first-line pharmacotherapy for smoking cessation. Bupropion is a suitable, though generally less efficacious, alternative, particularly when the patient has comorbid depression. Per UpToDate, the recent (2016) EAGLES trial demonstrated that “in contrast to earlier concerns, varenicline and bupropion have no higher risk of associated adverse psychiatric effects than [nicotine replacement therapy] in smokers with comorbid psychiatric disorders.”

Further Reading/References:
1. This trial @ ClinicalTrials.gov
2. Sister trial: “Efficacy of varenicline, an alpha4beta2 nicotinic acetylcholine receptor partial agonist, vs placebo or sustained-release bupropion for smoking cessation: a randomized controlled trial.” JAMA. 2006 Jul 5;296(1):56-63.
3. Chantix FDA Approval Letter 5/10/2006
4. Rigotti NA. Pharmacotherapy for smoking cessation in adults. Post TW, ed. UpToDate. Waltham, MA: UpToDate Inc. [https://www.uptodate.com/contents/pharmacotherapy-for-smoking-cessation-in-adults] (Accessed on February 16, 2019).
5. “Neuropsychiatric safety and efficacy of varenicline, bupropion, and nicotine patch in smokers with and without psychiatric disorders (EAGLES): a double-blind, randomised, placebo-controlled clinical trial.” Lancet. 2016 Jun 18;387(10037):2507-20.
6. 2 Minute Medicine: “Varenicline and bupropion more effective than varenicline alone for tobacco abstinence”
7. 2 Minute Medicine: “Varenicline safe for smoking cessation in patients with stable major depressive disorder”

Summary by Duncan F. Moore, MD

Image Credit: Сергей Фатеев, CC BY-SA 3.0, via Wikimedia Commons

Week 32 – ARISTOTLE

“Apixaban versus Warfarin in Patients with Atrial Fibrillation”

N Engl J Med. 2011 Sep 15;365(11):981-92. [free full text]

Prior to the development of the DOACs, warfarin was the standard of care for stroke risk reduction in atrial fibrillation. Drawbacks of warfarin include a narrow therapeutic range, numerous drug and dietary interactions, the need for frequent monitoring, and an elevated bleeding risk. Around 2010, the definitive RCTs of the oral direct thrombin inhibitor dabigatran (RE-LY) and the oral factor Xa inhibitor rivaroxaban (ROCKET AF) demonstrated noninferiority or superiority to warfarin. Shortly afterward, the ARISTOTLE trial demonstrated the superiority of the oral factor Xa inhibitor apixaban (Eliquis).

The trial enrolled patients with atrial fibrillation or flutter and at least one additional risk factor for stroke (age 75+, prior CVA/TIA, symptomatic CHF, or reduced LVEF). Notably, patients with Cr > 2.5 mg/dL were excluded. Patients were randomized to treatment with either apixaban BID + placebo warfarin daily (a reduced 2.5mg BID apixaban dose was given to patients with 2 or more of the following: age 80+, weight ≤ 60 kg, Cr ≥ 1.5 mg/dL) or to placebo apixaban BID + warfarin daily. The primary efficacy outcome was the incidence of stroke or systemic embolism, and the primary safety outcome was “major bleeding” (clinically overt and accompanied by a Hgb drop of ≥ 2 g/dL, “occurring at a critical site,” or resulting in death). Secondary outcomes included all-cause mortality and a composite of major bleeding and “clinically relevant non-major bleeding.”

9120 patients were assigned to the apixaban group, and 9081 were assigned to the warfarin group. Mean CHADS2 score was 2.1. Fewer patients in the apixaban group discontinued their assigned study drug. Median duration of follow-up was 1.8 years. The incidence of stroke or systemic embolism was 1.27% per year in the apixaban group vs. 1.60% per year in the warfarin group (HR 0.79, 95% CI 0.66-0.95, p<0.001). This reduction was consistent across all major subgroups (see Figure 2). Notably, the rate of hemorrhagic stroke was 49% lower in the apixaban group, and the rate of ischemic stroke was 8% lower in the apixaban group. All-cause mortality was 3.52% per year in the apixaban group vs. 3.94% per year in the warfarin group (HR 0.89, 95% CI 0.80-0.999, p=0.047). The incidence of major bleeding was 2.13% per year in the apixaban group vs. 3.09% per year in the warfarin group (HR 0.69, 95% CI 0.60-0.80, p<0.001). The rate of intracranial hemorrhage was 0.33% per year in the apixaban group vs. 0.80% per year in the warfarin group (HR 0.42, 95% CI 0.30-0.58, p<0.001). The rate of any bleeding was 18.1% per year in the apixaban group vs. 25.8% per year in the warfarin group (p<0.001).
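
Because these event rates are annualized, the absolute differences translate directly into events prevented per 1000 patient-years of treatment. The sketch below is illustrative arithmetic, not an analysis reported in the paper.

```python
def prevented_per_1000_patient_years(ctrl_pct_per_yr, tx_pct_per_yr):
    """Convert a difference in annualized event rates (%/yr) into
    events prevented per 1000 patient-years of treatment."""
    return (ctrl_pct_per_yr - tx_pct_per_yr) * 10

# Warfarin vs. apixaban annualized event rates (%/yr) from the trial
for outcome, warfarin, apixaban in [
        ("stroke or systemic embolism", 1.60, 1.27),
        ("major bleeding", 3.09, 2.13),
        ("all-cause death", 3.94, 3.52)]:
    print(outcome, round(prevented_per_1000_patient_years(warfarin, apixaban), 1))
# -> 3.3, 9.6, and 4.2 events prevented per 1000 patient-years, respectively
```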

In patients with non-valvular atrial fibrillation and at least one other risk factor for stroke, anticoagulation with apixaban significantly reduced the risks of stroke, major bleeding, and all-cause mortality relative to anticoagulation with warfarin. This was a large RCT that was designed and powered to demonstrate noninferiority but in fact demonstrated the superiority of apixaban. Along with ROCKET AF and RE-LY, the ARISTOTLE trial ushered in the modern era of DOACs in atrial fibrillation. Apixaban was approved by the FDA for the treatment of non-valvular atrial fibrillation in 2012. Prescription cost is no longer a major barrier for patients: all three of these major DOACs are preferred agents in the DC Medicaid formulary (see page 14). To date, no trial has compared the various DOACs directly.

Further Reading/References:
1. ARISTOTLE @ Wiki Journal Club
2. 2 Minute Medicine
3. “Oral anticoagulants for prevention of stroke in atrial fibrillation: systematic review, network meta-analysis, and cost-effectiveness analysis,” BMJ 2017

Summary by Duncan F. Moore, MD

Week 31 – Early TIPS in Cirrhosis with Variceal Bleeding

“Early Use of TIPS in Patients with Cirrhosis and Variceal Bleeding”

N Engl J Med. 2010 Jun 24;362(25):2370-9. [free full text]

Variceal bleeding is a major cause of morbidity and mortality in decompensated cirrhosis. The standard of care for an acute variceal bleed includes a combination of vasoactive drugs, prophylactic antibiotics, and endoscopic techniques (e.g. banding). Transjugular intrahepatic portosystemic shunt (TIPS) can be used to treat refractory bleeding. This 2010 trial sought to determine the utility of early TIPS during the initial bleed in high-risk patients when compared to standard therapy.

The trial enrolled cirrhotic patients (Child-Pugh class B or C with score ≤ 13) with acute esophageal variceal bleeding. All patients received endoscopic band ligation (EBL) or endoscopic injection sclerotherapy (EIS) at the time of diagnostic endoscopy, as well as vasoactive drugs (terlipressin, somatostatin, or octreotide). Patients were randomized either to TIPS performed within 72 hours after diagnostic endoscopy or to “standard therapy,” consisting of 1) vasoactive drugs with transition to a nonselective beta blocker once patients were free of bleeding, 2) addition of isosorbide mononitrate uptitrated to the maximum tolerated dose, and 3) a second session of EBL 7-14 days after the initial session (repeated q10-14 days until variceal eradication was achieved). The primary outcome was a composite of failure to control acute bleeding or failure to prevent “clinically significant” variceal rebleeding (requiring hospital admission or transfusion) within 1 year of enrollment. Selected secondary outcomes included 1-year mortality, development of hepatic encephalopathy (HE), ICU days, and hospital LOS.

359 patients were screened for inclusion, but ultimately only 63 were randomized. Baseline characteristics were similar between the two groups, except that the early TIPS group had a higher rate of previous hepatic encephalopathy. The primary composite endpoint of failure to control acute bleeding or rebleeding within 1 year occurred in 14 of 31 (45%) patients in the pharmacotherapy-EBL group but in only 1 of 32 (3%) patients in the early TIPS group (p = 0.001). The 1-year actuarial probability of remaining free of the primary outcome was 97% in the early TIPS group vs. 50% in the pharmacotherapy-EBL group (ARR 47 percentage points, 95% CI 25-69 percentage points, NNT 2.1). Regarding mortality, at one year, 12 of 31 (39%) patients in the pharmacotherapy-EBL group had died versus only 4 of 32 (13%) in the early TIPS group (p = 0.001, NNT = 4.0). There were no group differences in the prevalence of HE at one year (28% in the early TIPS group vs. 40% in the pharmacotherapy-EBL group, p = 0.13). Additionally, there were no group differences in the 1-year actuarial probability of new or worsening ascites, nor in length of ICU stay or hospitalization duration.
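
The reported NNTs follow directly from the absolute risk differences. A quick sketch of that arithmetic (illustrative only; small discrepancies from the published values reflect rounding):

```python
def nnt(rate_control, rate_treatment):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (rate_control - rate_treatment)

# Primary outcome at 1 year (event rate: 50% standard therapy vs. 3% early TIPS)
print(round(nnt(0.50, 0.03), 1))    # ~2.1
# 1-year mortality: 12/31 (standard therapy) vs. 4/32 (early TIPS)
print(round(nnt(12/31, 4/32), 1))   # ~3.8 (reported as 4.0)
```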

Early TIPS in acute esophageal variceal bleeding, when compared to standard pharmacotherapy and endoscopic band ligation, improved control of index bleeding, reduced recurrent variceal bleeding at 1 year, and reduced all-cause mortality. Prior studies had demonstrated that TIPS reduced the rebleeding rate but increased the rate of hepatic encephalopathy without improving survival; as such, TIPS had been recommended only as a rescue therapy. This study presents compelling data that challenge that paradigm. The authors note that in “patients with Child-Pugh class C or in class B with active variceal bleeding, failure to initially control the bleeding or early rebleeding contributes to further deterioration in liver function, which in turn worsens the prognosis and may preclude the use of rescue TIPS.” Authors at UpToDate note that, given the totality of evidence to date, the benefit of early TIPS in preventing rebleeding “is offset by its failure to consistently improve survival and increasing morbidity due to the development of liver failure and encephalopathy.” Today, TIPS remains primarily a salvage therapy for use in cases of recurrent bleeding despite standard pharmacotherapy and EBL. There may be a subset of patients in whom early TIPS is the ideal strategy, but further trials will be required to identify this subset.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Prevention of recurrent variceal hemorrhage in patients with cirrhosis”

Summary by Duncan F. Moore, MD

Week 30 – Bicarbonate and Progression of CKD

“Bicarbonate Supplementation Slows Progression of CKD and Improves Nutritional Status”

J Am Soc Nephrol. 2009 Sep;20(9):2075-84. [free full text]

Metabolic acidosis is a common complication of advanced CKD. Some animal models of CKD have suggested that worsening metabolic acidosis is associated with worsening proteinuria, tubulointerstitial fibrosis, and acceleration of decline of renal function. Short-term human studies have demonstrated that bicarbonate administration reduces protein catabolism and that metabolic acidosis is an independent risk factor for acceleration of decline of renal function. However, until this 2009 study by de Brito-Ashurst et al., there were no long-term studies demonstrating the beneficial effects of oral bicarbonate administration on CKD progression and nutritional status.

The study enrolled CKD patients with CrCl 15-30 ml/min and plasma bicarbonate 16-20 mEq/L and randomized them to treatment with either sodium bicarbonate 600mg PO TID (with protocolized uptitration to achieve plasma HCO3 ≥ 23 mEq/L) for 2 years or to routine care. The primary outcomes were: 1) the decline in CrCl at 2 years, 2) “rapid progression of renal failure” (defined as a decline in CrCl of > 3 ml/min per year), and 3) development of ESRD requiring dialysis. Secondary outcomes included 1) change in dietary protein intake, 2) change in normalized protein nitrogen appearance (nPNA), 3) change in serum albumin, and 4) change in mid-arm muscle circumference.

134 patients were randomized, and baseline characteristics were similar between the two groups. Serum bicarbonate levels increased significantly in the treatment arm (see Figure 2). At two years, CrCl decline was 1.88 ml/min in the treatment group vs. 5.93 ml/min in the control group (p < 0.01). Rapid progression of renal failure occurred in 9% of the intervention group vs. 45% of the control group (RR 0.15, 95% CI 0.06–0.40, p < 0.0001, NNT = 2.8), and ESRD developed in 6.5% of the intervention group vs. 33% of the control group (RR 0.13, 95% CI 0.04–0.40, p < 0.001, NNT = 3.8). Regarding nutritional status, dietary protein intake increased in the treatment group relative to the control group (p < 0.007). Normalized protein nitrogen appearance decreased in the treatment group and increased in the control group (p < 0.002). Serum albumin increased in the treatment group but was unchanged in the control group, and mean mid-arm muscle circumference increased by 1.5 cm in the intervention group vs. no change in the control group (p < 0.03).

In conclusion, oral bicarbonate supplementation in CKD patients with metabolic acidosis reduces the rate of CrCl decline and progression to ESRD and improves nutritional status. Primarily on the basis of this study, the KDIGO 2012 guidelines for the management of CKD recommend oral bicarbonate supplementation to maintain serum bicarbonate within the normal range (23-29 mEq/L). This is a remarkably cheap and effective intervention. Importantly, the rates of adverse events, particularly worsening hypertension and increasing edema, were similar between the two groups. Of note, sodium bicarbonate induces much less volume expansion than a comparable sodium load of sodium chloride.

In their discussion, the authors suggest that their results support the hypothesis of Nath et al. (1985) that “compensatory changes [in the setting of metabolic acidosis] such as increased ammonia production and the resultant complement cascade activation in remnant tubules in the declining renal mass [are] injurious to the tubulointerstitium.” The hypercatabolic state of advanced CKD appears to be mitigated by bicarbonate supplementation. The authors note that “an optimum nutritional status has positive implications on the clinical outcomes of dialysis patients, whereas [protein-energy wasting] is associated with increased morbidity and mortality.”

Limitations of this trial include its open-label design without a placebo control. Additionally, the generalizability of its findings is limited by the exclusion of patients with morbid obesity, overt CHF, or uncontrolled HTN.

Further Reading:
1. Nath et al. “Pathophysiology of chronic tubulo-interstitial disease in rats: Interactions of dietary acid load, ammonia, and complement component-C3” (1985)
2. KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease (see page 89)
3. UpToDate, “Pathogenesis, consequences, and treatment of metabolic acidosis in chronic kidney disease”

Week 29 – PneumA

“Comparison of 8 vs 15 Days of Antibiotic Therapy for Ventilator-Associated Pneumonia in Adults”

JAMA. 2003 Nov 19;290(19):2588-98. [free full text]

Ventilator-associated pneumonia (VAP) is a frequent complication of mechanical ventilation and, prior to this study, few trials had addressed the optimal duration of antibiotic therapy in VAP. Thus, patients frequently received 14- to 21-day antibiotic courses. As antibiotic stewardship efforts increased and awareness grew of the association between prolonged antibiotic courses and the development of multidrug resistant (MDR) infections, more data were needed to clarify the optimal VAP treatment duration.

This 2003 trial by the PneumA Trial Group was the first large randomized trial to compare shorter (8-day) versus longer (15-day) treatment courses for VAP.

The noninferiority study, carried out in 51 French ICUs, enrolled intubated patients with clinical suspicion for VAP and randomized them to either 8 or 15 days of antimicrobials; regimens were chosen by the treating clinician. 401 patients met eligibility criteria: 197 were randomized to the 8-day regimen, and 204 to the 15-day regimen. Study participants were blinded to randomization assignment until day 8, and analysis was performed on an intention-to-treat basis. The primary outcomes were death from any cause at 28 days, antibiotic-free days, and microbiologically documented pulmonary infection recurrence.

Study findings demonstrated similar 28-day mortality in both groups (18.8% in the 8-day group vs. 17.2% in the 15-day group, group difference 90% CI -3.7% to 6.9%). The 8-day group did not develop more recurrent infections (28.9% vs. 26.0% in the 15-day group, group difference 90% CI -3.2% to 9.1%) and had more antibiotic-free days at the 28-day point (13.1 vs. 8.7, p < 0.001). However, in subgroup analysis, more 8-day-group patients whose initial infection was due to lactose-nonfermenting GNRs developed a recurrent pulmonary infection, so noninferiority was not established in this specific subgroup (40.6% recurrence in the 8-day group vs. 25.4% in the 15-day group, group difference 90% CI 3.9% to 26.6%).
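
A note on reading these intervals: in a noninferiority design, the short course is accepted only if the upper bound of the confidence interval for its excess risk (8-day minus 15-day) stays below a prespecified margin. The sketch below assumes an illustrative 10% margin; see the paper for the exact prespecified margins.

```python
def noninferior(ci_upper_pct, margin_pct=10.0):
    """Noninferiority holds when the CI upper bound for the excess risk
    of the short course stays below the margin (assumed 10% here)."""
    return ci_upper_pct < margin_pct

print(noninferior(6.9))   # 28-day mortality -> True
print(noninferior(9.1))   # recurrent infection -> True
print(noninferior(26.6))  # recurrence with lactose-nonfermenting GNRs -> False
```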

Implications/Discussion:
There is no benefit to prolonging VAP treatment to 15 days (except perhaps when Pseudomonas aeruginosa is suspected based on gram stain/culture data). Shorter courses of antibiotics for VAP treatment allow for less antibiotic exposure without increasing rates of recurrent infection or mortality.

The 2016 IDSA guidelines on VAP treatment recommend a 7-day course of antimicrobials for treatment of VAP (as opposed to a longer treatment course such as 8-15 days). These guidelines are based on the IDSA’s own large meta-analysis (of 10 randomized trials, including PneumA, as well as an observational study) which demonstrated that shorter courses of antibiotics (7 days) reduce antibiotic exposure and recurrent pneumonia due to MDR organisms without affecting clinical outcomes, such as mortality. Of note, this 7-day course recommendation also applies to treatment of lactose-nonfermenting GNRs, such as Pseudomonas.

When considering the PneumA trial within the context of the newest IDSA guidelines, we see that we now have over 15 years of evidence supporting the use of shorter VAP treatment courses.

Further Reading/References:
1. 2016 IDSA Guidelines for the Management of HAP/VAP
2. Wiki Journal Club
3. PulmCCM “IDSA Guidelines 2016: HAP, VAP & It’s the End of HCAP as We Know It (And I Feel Fine)”
4. PulmCrit “The siren’s call: Double-coverage for ventilator associated PNA”

Summary by Liz Novick, MD

Image Credit: Joseaperez, CC BY-SA 3.0, via Wikimedia Commons

Week 28 – Symptom-Triggered Benzodiazepines in Alcohol Withdrawal

“Symptom-Triggered vs Fixed-Schedule Doses of Benzodiazepine for Alcohol Withdrawal”

Arch Intern Med. 2002 May 27;162(10):1117-21. [free full text]

Treatment of alcohol withdrawal with benzodiazepines has been the standard of care for decades, but in the 1990s benzodiazepines were generally given on fixed schedules. In 1994, a double-blind RCT by Saitz et al. demonstrated that symptom-triggered therapy based on responses to the CIWA-Ar scale reduced treatment duration and the amount of benzodiazepine used relative to a fixed-schedule regimen. That trial had little immediate impact on the treatment of alcohol withdrawal. The authors of this 2002 double-blind RCT sought to confirm the 1994 findings in a larger population that did not exclude patients with a history of seizures or severe alcohol withdrawal.

The trial enrolled consecutive patients admitted to the inpatient alcohol treatment units at two European universities (excluding those with “major cognitive, psychiatric, or medical comorbidity”) and randomized them either to scheduled placebo or to scheduled oxazepam (30mg q6hrs x4 doses, followed by 15mg q6hrs x8 doses). Both groups could additionally receive symptom-triggered PRN oxazepam: 15mg for a CIWA-Ar score of 8-15 and 30mg for a score > 15.
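
The symptom-triggered arm thus reduces to a simple dosing rule keyed to the CIWA-Ar score. A minimal sketch of that rule as described above (illustrative only, not a clinical decision tool):

```python
def prn_oxazepam_mg(ciwa_ar_score):
    """PRN oxazepam dose per the trial protocol: nothing below a CIWA-Ar
    score of 8, 15 mg for scores 8-15, and 30 mg for scores above 15."""
    if ciwa_ar_score > 15:
        return 30
    if ciwa_ar_score >= 8:
        return 15
    return 0

for score in (5, 10, 20):
    print(score, prn_oxazepam_mg(score))  # -> 0, 15, and 30 mg, respectively
```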

The primary outcomes were cumulative oxazepam dose at 72 hours and duration of treatment with oxazepam. Subgroup analysis included the exclusion of symptomatic patients who did not require any oxazepam. Secondary outcomes included incidence of seizures, hallucinations, and delirium tremens at 72 hours.

Results:
117 patients completed the trial: 56 had been randomized to the symptom-triggered group and 61 to the fixed-schedule group. The groups were similar in all baseline characteristics except that the fixed-schedule group had, on average, a 5-hour longer interval since last drink prior to admission. While only 39% of the symptom-triggered group actually received oxazepam, 100% of the fixed-schedule group did (p < 0.001). Patients in the symptom-triggered group received a mean cumulative dose of 37.5mg versus 231.4mg in the fixed-schedule group (p < 0.001), and the mean duration of oxazepam treatment was 20.0 hours versus 62.7 hours, respectively. The group difference in total oxazepam dose persisted even when patients who did not receive any oxazepam were excluded: among patients who received oxazepam, those in the symptom-triggered group received 95.4 ± 107.7mg versus 231.4 ± 29.4mg in the fixed-schedule group (p < 0.001). Only one patient in the symptom-triggered group sustained a seizure; there were no seizures, hallucinations, or episodes of delirium tremens among the other 116 patients. The two treatment groups had similar quality-of-life and symptom scores aside from slightly higher physical functioning in the symptom-triggered group (p < 0.01). See Table 2.

Implication/Discussion:
Symptom-triggered administration of benzodiazepines in alcohol withdrawal led to a six-fold reduction in cumulative benzodiazepine use and a much shorter duration of pharmacotherapy than fixed-schedule administration. This more restrictive and responsive strategy did not increase the risk of major adverse outcomes such as seizure or DTs and also did not result in increased patient discomfort.

Overall, this study confirmed the findings of the landmark study by Saitz et al. from eight years prior. Additionally, this trial was larger and did not exclude patients with a prior history of withdrawal seizures or severe withdrawal. The fact that both studies took place in inpatient specialty psychiatry units limits their generalizability to our inpatient general medicine populations.

Why the initial 1994 study did not gain clinical traction remains unclear. Both studies have been well-cited over the ensuing decades, and the paradigm has shifted firmly toward symptom-triggered benzodiazepine regimens using the CIWA scale. While a 2010 Cochrane review cites only the 1994 study, Wiki Journal Club and 2 Minute Medicine have entries on this 2002 study but not on the equally impressive 1994 study.

Further Reading/References:
1. “Individualized treatment for alcohol withdrawal. A randomized double-blind controlled trial.” JAMA. 1994.
2. Clinical Institute Withdrawal Assessment of Alcohol Scale, Revised (CIWA-Ar)
3. Wiki Journal Club
4. 2 Minute Medicine
5. “Benzodiazepines for alcohol withdrawal.” Cochrane Database Syst Rev. 2010

Summary by Duncan F. Moore, MD

Image Credit: VisualBeo, CC BY-SA 3.0, via Wikimedia Commons

Week 27 – Mortality in Patients on Dialysis and Transplant Recipients

“Comparison of Mortality in All Patients on Dialysis, Patients on Dialysis Awaiting Transplantation, and Recipients of a First Cadaveric Transplant”

N Engl J Med. 1999 Dec 2;341(23):1725-30. [free full text]

Renal transplant is the treatment of choice in patients with ESRD. Since the advent of renal transplant, it has been known that transplant improves both quality of life and survival relative to dialysis. However, these findings were derived from retrospective data and reflected inherent selection bias (patients who received transplants were healthier, younger, and of higher socioeconomic status than patients who remained on dialysis). While some smaller studies (i.e. single center or statewide database) published in the early to mid 1990s attempted to account for this selection bias by comparing outcomes among patients who received a transplant versus patients who were listed for transplant but had not yet received one, this 1999 study by Wolfe et al. was a notable step forward in that it used the large, nationwide US Renal Data System dataset and a robust multivariate hazards model to control for baseline covariates. To this day, Wolfe et al. remains a defining testament to the sustained, life-prolonging benefit of renal transplantation itself.

Using the comprehensive US Renal Data System database, the authors evaluated patients who began treatment for ESRD between 1991 and 1996. Notable exclusion criteria were age ≥ 70 and transplant prior to initiating dialysis. Of the 228,552 patients evaluated, 46,164 were placed on the transplant waitlist, and 23,275 received a transplant by the end of the study period (12/31/1997). The primary outcome was survival reported in unadjusted death rates per 100 patient-years, standardized mortality ratios (adjusted for age, race, sex, and diabetes as the cause of ESRD), and adjusted relative risk of death in transplant patients relative to waitlisted patients. Subgroup analyses were performed.

Results:
Regarding baseline characteristics, listed or transplanted patients were younger, more likely to be white or Asian, and less likely to have diabetes as the cause of their ESRD (see Table 1). Unadjusted death rates per 100 patient-years at risk were 16.1 among all dialysis patients, 6.3 among waitlisted patients, and 3.8 among transplant recipients (no p value given, see Table 2). Relative to all dialysis patients, the standardized mortality ratio (adjusted for age, race, sex, and diabetes as the cause of ESRD) was 49% lower among patients on the waitlist (RR 0.51, 95% CI 0.49–0.53, p < 0.001) and 69% lower among transplant recipients (p value not reported). The lower standardized mortality ratio of waitlisted patients relative to dialysis patients was sustained in all subgroup analyses (see Figure 1). The relative risk of death (adjusted for age, sex, race, cause of ESRD, year placed on waitlist, and time from first treatment of ESRD to placement on waitlist) is depicted visually in Figure 2. Importantly, relative to waitlisted patients, transplant recipients had a 2.8x higher risk of death during the first two weeks post-transplant. Thereafter, risk declined until the likelihood of survival equalized at 106 days post-transplant. Long term (3-4 years of follow-up in this study), mortality risk was 68% lower among transplanted patients than among waitlisted patients (RR 0.32, 95% CI 0.30–0.35, p < 0.001). The magnitude of this survival benefit varied by subgroup but was strong and statistically significant in all subgroups (ranging from 3 to 17 additional projected years of life, see Table 3).
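
For readers less familiar with these metrics: a death rate per 100 patient-years divides observed deaths by accumulated follow-up time, and a standardized mortality ratio (SMR) divides observed deaths by the deaths expected under a reference population's covariate-adjusted rates. The sketch below uses hypothetical counts, not data from the paper.

```python
def deaths_per_100_patient_years(deaths, patient_years):
    """Unadjusted death rate per 100 patient-years at risk."""
    return 100 * deaths / patient_years

def smr(observed_deaths, expected_deaths):
    """Standardized mortality ratio: observed / expected deaths, where the
    expectation applies reference-population rates to this cohort."""
    return observed_deaths / expected_deaths

# Hypothetical cohort: 63 deaths over 1000 patient-years, 124 deaths expected
print(deaths_per_100_patient_years(63, 1000))  # 6.3 per 100 patient-years
print(round(smr(63, 124), 2))                  # 0.51, i.e. 49% fewer deaths than expected
```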

Implication/Discussion:
Retrospective analysis of this nationwide ESRD database has clearly demonstrated the marked mortality benefit of renal transplantation over waitlisted status. This finding is present to varying degrees in all subgroups and leads to a projected additional 3 to 17 years of lifespan post-transplant. (There is an expected, mild increase in mortality risk immediately following transplantation. This increase reflects operative risk and immediate complications but is present for only 2 weeks post-transplantation.) As expected and as previously described in other datasets, this study also demonstrated that substantially healthier ESRD patients are selected for transplantation listing in the US in comparison to patients who remain on dialysis not on the waitlist.

Relative strengths of this study include its comprehensive national dataset and intention-to-treat analysis. Its multivariate analyses robustly controlled for factors, such as time on the waitlist, that may have influenced mortality. However, this study is limited in that its retrospective comparison of listed versus transplanted patients does not entirely eliminate selection bias. (For example, listed patients may have developed illnesses that ultimately prevented transplant and led to death.) Additionally, the mortality benefits demonstrated in this study from the first half of the 1990s may not reflect those of current practice, given that prevention and treatment of ASCVD (a primary driver of mortality in ESRD) have improved markedly in the ensuing decades and may favor one group disproportionately.

As suggested by the authors at UpToDate, improved survival post-transplant may be due to the following factors: increased clearance of uremic toxins, reduction in inflammation and/or oxidative stress, reduced microvascular disease in diabetes mellitus, and improvement of LVH.

As a final note: in the modern era, it is surprising to see both a retrospective cohort study published in NEJM and the lack of preregistration of its analysis protocol prior to the study being conducted. Preregistration, even of interventional trials, did not become routine until the years following the announcement of the International Committee of Medical Journal Editors (ICMJE) trial registration policy in 2004 (Zarin et al.). Although retrospective cohort studies are still not routinely preregistered, high-profile journals increasingly require it because it helps differentiate confirmatory from exploratory research and reduces the appearance of post-hoc data dredging (i.e. p-hacking). Please see the Center for Open Science – Preregistration for further information. Another helpful discussion, in PowerPoint form, is by Deborah A. Zarin, MD, Director of ClinicalTrials.gov.

Further Reading/References:
1. UpToDate, “Patient Survival After Renal Transplantation”
2. Zarin et al. “Update on Trial Registration 11 Years after the ICMJE Policy Was Established.” NEJM 2017

Summary by Duncan F. Moore, MD

Image Credit: Anna Frodesiak, CC0 1.0, via Wikimedia Commons

Week 26 – HACA

“Mild Therapeutic Hypothermia to Improve the Neurologic Outcome After Cardiac Arrest”

by the Hypothermia After Cardiac Arrest Study Group

N Engl J Med. 2002 Feb 21;346(8):549-56. [free full text]

Neurologic injury after cardiac arrest is a significant source of morbidity and mortality. It is hypothesized that brain reperfusion injury (via the generation of free radicals and other inflammatory mediators) following ischemic time is the primary pathophysiologic basis. Animal models and limited human studies have demonstrated that patients treated with mild hypothermia following cardiac arrest have improved neurologic outcome. The 2002 HACA study sought to evaluate prospectively the utility of therapeutic hypothermia in reducing neurologic sequelae and mortality post-arrest.

Population: European patients who achieve return of spontaneous circulation (ROSC) after presenting to the ED in cardiac arrest

inclusion criteria: witnessed arrest, ventricular fibrillation or non-perfusing ventricular tachycardia as initial rhythm, estimated interval 5 to 15 min from collapse to first resuscitation attempt, no more than 60 min from collapse to ROSC, age 18-75

pertinent exclusion: pt already < 30°C on admission, comatose state prior to arrest d/t CNS drugs, response to commands following ROSC

Intervention: Cooling to target temperature 32-34°C with maintenance for 24 hrs followed by passive rewarming. Pts received pancuronium for neuromuscular blockade to prevent shivering.

Comparison: Standard intensive care

Outcomes:
Primary: a “favorable neurologic outcome” at 6 months defined as Pittsburgh cerebral-performance scale category 1 (good recovery) or 2 (moderate disability). (Of note, the examiner was blinded to treatment group allocation.)

Secondary:

  • all-cause mortality at 6 months
  • specific complications within the first 7 days: bleeding “of any severity,” pneumonia, sepsis, pancreatitis, renal failure, pulmonary edema, seizures, arrhythmias, and pressure sores

Results:
3551 consecutive patients were assessed for enrollment, and ultimately 275 met inclusion criteria and were randomized. The normothermia group had more baseline DM and CAD and was more likely to have received bystander BLS prior to arrival in the ED.

Regarding neurologic outcome at 6 months, 75 of 136 (55%) patients in the hypothermia group had a favorable neurologic outcome, versus 54 of 137 (39%) in the normothermia group (RR 1.40, 95% CI 1.08-1.81, p = 0.009; NNT = 6). After adjustment for all baseline characteristics, the RR increased slightly to 1.47 (95% CI 1.09-1.82).

Regarding death at 6 months, 41% of the hypothermia group had died, versus 55% of the normothermia group (RR 0.74, 95% CI 0.58-0.95, p = 0.02; NNT = 7). After adjustment for all baseline characteristics, the RR was 0.62 (95% CI 0.36-0.95). There was no difference between the two groups in the rate of any complication or in the total number of complications during the first 7 days.

Implication/Discussion:
In ED patients with Vfib or pulseless VT arrest who did not have meaningful response to commands after ROSC, immediate therapeutic hypothermia reduced the rate of neurologic sequelae and mortality at 6 months.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “If after ROSC your patient remains unresponsive and does not have refractory hypoxemia/hypotension/coagulopathy, you should initiate therapeutic hypothermia even if the arrest was PEA. The benefit seen was substantial and any proposed biologic mechanism would seemingly apply to all causes of cardiac arrest. The investigators used pancuronium to prevent shivering; [at MGUH] there is a ‘shivering’ protocol in place and if refractory, paralytics can be used.”

This trial, as well as a concurrent publication by Bernard et al., ushered in a new paradigm of therapeutic hypothermia or “targeted temperature management” (TTM) following cardiac arrest. Numerous trials in related populations and with modified interventions (e.g. target temperature 36°C) were performed over the following decade and ultimately led to the current standard of practice.

Per UpToDate, the collective trial data suggest that “active control of the post-cardiac arrest patient’s core temperature, with a target between 32 and 36°C, followed by active avoidance of fever, is the optimal strategy to promote patient survival.” TTM should be undertaken in all patients who do not follow commands or have purposeful movements following ROSC. Expert opinion at UpToDate recommends maintaining temperature control for at least 48 hours.

Further Reading/References:
1. HACA @ 2 Minute Medicine
2. HACA @ Wiki Journal Club
3. HACA @ Visualmed
4. Georgetown Critical Care Top 40, page 23 (Jan. 2016)
5. PulmCCM.org, “Hypothermia did not help after out-of-hospital cardiac arrest, in largest study yet”
6. Cochrane Review, “Hypothermia for neuroprotection in adults after cardiopulmonary resuscitation”
7. The NNT, “Mild Therapeutic Hypothermia for Neuroprotection Following CPR”
8. UpToDate, “Post-cardiac arrest management in adults”

Summary by Duncan F. Moore, MD

Week 25 – ALLHAT

“Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic”

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)

JAMA. 2002 Dec 18;288(23):2981-97. [free full text]

Hypertension is a ubiquitous disease, and the cardiovascular and mortality benefits of BP control have been well described. However, as the number of available antihypertensive classes proliferated in the past several decades, a head-to-head comparison of different antihypertensive regimens was necessary to determine the optimal first-step therapy. The 2002 ALLHAT trial was a landmark trial in this effort.

Population:
33,357 patients aged 55 years or older with hypertension and at least one other coronary heart disease (CHD) risk factor (previous MI or stroke, LVH by ECG or echo, T2DM, current cigarette smoking, HDL < 35 mg/dL, or documentation of other atherosclerotic cardiovascular disease (CVD)). Notable exclusion criteria: history of hospitalization for CHF, history of treated symptomatic CHF, or known LVEF < 35%.

Intervention:
Prior antihypertensives were discontinued upon initiation of the study drug. Patients were randomized to one of three study drugs in a double-blind fashion. Study drugs and additional drugs were added in a step-wise fashion to achieve a goal BP < 140/90 mmHg.

Step 1: titrate assigned study drug

  • chlorthalidone: 12.5 → 12.5 (sham titration) → 25 mg/day
  • amlodipine: 2.5 → 5 → 10 mg/day
  • lisinopril: 10 → 20 → 40 mg/day

Step 2: add open-label agents at treating physician’s discretion (atenolol, clonidine, or reserpine)

  • atenolol: 25 to 100 mg/day
  • reserpine: 0.05 to 0.2 mg/day
  • clonidine: 0.1 to 0.3 mg BID

Step 3: add hydralazine 25 to 100 mg BID
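
The protocol above is essentially a titration ladder. The sketch below encodes that step-wise logic; the dose ladders are taken from the protocol as listed, while the control flow is an illustrative simplification rather than the trial's operational algorithm.

```python
STEP1_LADDERS_MG_PER_DAY = {
    "chlorthalidone": [12.5, 12.5, 25],  # middle step was a sham titration
    "amlodipine": [2.5, 5, 10],
    "lisinopril": [10, 20, 40],
}

def next_action(drug, dose_idx, sbp, dbp, step2_added, step3_added):
    """Return the next protocol action for a patient on the assigned study drug."""
    if sbp < 140 and dbp < 90:
        return "at goal - continue current regimen"
    ladder = STEP1_LADDERS_MG_PER_DAY[drug]
    if dose_idx < len(ladder) - 1:
        return f"uptitrate {drug} to {ladder[dose_idx + 1]} mg/day"
    if not step2_added:
        return "add open-label step 2 agent (atenolol, reserpine, or clonidine)"
    if not step3_added:
        return "add hydralazine 25-100 mg BID (step 3)"
    return "maximal protocol therapy reached"

print(next_action("lisinopril", 1, 152, 88, False, False))  # uptitrate lisinopril to 40 mg/day
```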

Comparison:
Pairwise comparisons with respect to outcomes of chlorthalidone vs. either amlodipine or lisinopril. A doxazosin arm existed initially, but it was terminated early due to an excess of CV events, primarily driven by CHF.

Outcomes:
Primary – combined fatal CHD or nonfatal MI

Secondary

  • all-cause mortality
  • fatal and nonfatal stroke
  • combined CHD (primary outcome, PCI, or hospitalized angina)
  • combined CVD (CHD, stroke, non-hospitalized treated angina, CHF [fatal, hospitalized, or treated non-hospitalized], and PAD)

Results:
Over a mean follow-up period of 4.9 years, there was no difference between the groups in either the primary outcome or all-cause mortality.

When compared with chlorthalidone at 5 years, the amlodipine and lisinopril groups had significantly higher systolic blood pressures (by 0.8 mmHg and 2 mmHg, respectively). The amlodipine group had a lower diastolic blood pressure when compared to the chlorthalidone group (0.8 mmHg).

When comparing amlodipine to chlorthalidone for the pre-specified secondary outcomes, amlodipine was associated with an increased risk of heart failure (RR 1.38; 95% CI 1.25-1.52).

When comparing lisinopril to chlorthalidone for the pre-specified secondary outcomes, lisinopril was associated with an increased risk of stroke (RR 1.15; 95% CI 1.02-1.30), combined CVD (RR 1.10; 95% CI 1.05-1.16), and heart failure (RR 1.20; 95% CI 1.09-1.34). The increased risk of stroke was mostly driven by 3 subgroups: women (RR 1.22; 95% CI 1.01-1.46), blacks (RR 1.40; 95% CI 1.17-1.68), and non-diabetics (RR 1.23; 95% CI 1.05-1.44). The increased risk of combined CVD was statistically significant in all subgroups except patients younger than 65. The increased risk of heart failure was statistically significant in all subgroups.

Discussion:
In patients with hypertension and one risk factor for CAD, chlorthalidone, lisinopril, and amlodipine performed similarly in reducing the risks of fatal CAD and nonfatal MI.

The study has several strengths: a large and diverse study population, a randomized, double-blind structure, and the rigorous evaluation of three of the most commonly prescribed “newer” classes of antihypertensives. Unfortunately, neither an ARB nor an aldosterone antagonist was included in the study. Additionally, the step-up therapies were not reflective of contemporary practice. (Instead, patients would likely be prescribed one or more of the primary study drugs.)

The ALLHAT study is one of the hallmark studies of hypertension and has played an important role in hypertension guidelines since it was published. Following the publication of ALLHAT, thiazide diuretics became widely used as first-line drugs in the treatment of hypertension. The low cost of thiazides and their limited side-effect profile are particularly attractive class features. While ALLHAT looked specifically at chlorthalidone, in practice the positive findings were attributed to HCTZ, which has been more often prescribed. The authors of ALLHAT argued that the superiority of thiazides was likely a class effect, but according to the analysis at Wiki Journal Club, “there is little direct evidence that HCTZ specifically reduces the incidence of CVD among hypertensive individuals.” Furthermore, a 2006 study noted that HCTZ has worse 24-hour BP control than chlorthalidone due to its shorter half-life. The ALLHAT authors note that “since a large proportion of participants required more than 1 drug to control their BP, it is reasonable to infer that a diuretic be included in all multi-drug regimens, if possible.” The 2017 ACC/AHA High Blood Pressure Guidelines state that, of the four thiazide diuretics on the market, chlorthalidone is preferred because of its prolonged half-life and trial-proven reduction of CVD (via the ALLHAT study).

Further Reading / References:
1. 2017 ACC Hypertension Guidelines
2. Wiki Journal Club
3. 2 Minute Medicine
4. Ernst et al, “Comparative antihypertensive effects of hydrochlorothiazide and chlorthalidone on ambulatory and office blood pressure.” (2006)
5. Gillis Pharmaceuticals [https://www.youtube.com/watch?v=HOxuAtehumc]
6. Concepts in Hypertension, Volume 2 Issue 6

Summary by Ryan Commins, MD

Image Credit: Kimivanil, CC BY-SA 4.0, via Wikimedia Commons