Week 11 – CAST

“Mortality and Morbidity in Patients Receiving Encainide, Flecainide, or Placebo”

The Cardiac Arrhythmia Suppression Trial (CAST) [free full text]

N Engl J Med. 1991 Mar 21;324(12):781-8.

Ventricular arrhythmias are common following MI, and studies have demonstrated that PVCs and other arrhythmias such as non-sustained ventricular tachycardia (NSVT) are independent risk factors for cardiac mortality following MI. As such, by the late 1980s, many patients with PVCs post-MI were treated with antiarrhythmic drugs in an attempt to reduce mortality. The 1991 CAST trial sought to prove what predecessor trials had failed to prove – that suppression of such rhythms post-MI would improve survival.

Population:     

– post-MI patients with ≥ 6 asymptomatic PVCs per hour and no runs of VT ≥ 15 beats, with LVEF < 55% if within 90 days of MI or LVEF < 40% if more than 90 days since MI

– patients were further selected via an open-label titration period in which they were assigned to treatment with encainide, flecainide, or moricizine

– “responders” had at least 80% suppression of PVCs and 90% suppression of runs of VT

Intervention: continuation of antiarrhythmic drug assigned during titration period

Comparison: transition from titration antiarrhythmic drug to placebo

Outcome:

Primary – death or cardiac arrest with resuscitation “either of which was due to arrhythmia”

Secondary
1. all-cause mortality or cardiac arrest
2. cardiac death or cardiac arrest due to any cardiac cause
3. VT of 15 or more beats at a rate ≥ 120 bpm
4. syncope
5. permanent pacemaker implantation
6. recurrent MI
7. CHF
8. angina pectoris
9. coronary artery revascularization

Results:
The trial was terminated early due to increased mortality in the encainide and flecainide treatment groups. 1498 patients had been randomized following successful titration during the open-label period and are reported in this paper. The results of the moricizine arm were reported later in a separate paper (CAST-II).

RR of death or cardiac arrest due to arrhythmia was 2.64 (95% CI 1.60–4.36). The number needed to harm was 28.2. See Figure 1 on page 783 for a striking Kaplan-Meier curve.

RR of death or cardiac arrest due to all causes was 2.38 (95% CI 1.59–3.57). The number needed to harm was 20.6. See Figure 2 on page 784 for the relevant Kaplan-Meier curve.
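
To make the arithmetic behind these figures explicit, here is a minimal sketch in Python of how relative risk and number needed to harm are derived from arm-level event rates. The rates below are illustrative placeholders, not the published CAST event rates.

```python
def relative_risk(rate_treatment: float, rate_control: float) -> float:
    """Relative risk = event rate in the treatment arm / event rate in the control arm."""
    return rate_treatment / rate_control

def number_needed_to_harm(rate_treatment: float, rate_control: float) -> float:
    """NNH = 1 / absolute risk increase (treatment rate minus control rate)."""
    return 1 / (rate_treatment - rate_control)

# Illustrative, hypothetical rates only (not the trial's actual event rates):
# e.g. 5.7% of drug-treated patients vs. 2.2% of placebo patients with the outcome.
print(relative_risk(0.057, 0.022))          # ~2.6
print(number_needed_to_harm(0.057, 0.022))  # ~29 patients treated for one excess event
```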

Regarding the other secondary outcomes, cardiac death/arrest due to any cardiac cause was similarly elevated in the treatment group, and there were no significant differences in non-lethal endpoints between the treatment and placebo arms.

Implication/Discussion:
Treatment of asymptomatic ventricular arrhythmias with encainide and flecainide in patients with LV dysfunction following MI results in increased mortality.

This study is a classic example of how a treatment that is thought to make intuitive sense based on observational data (i.e. PVCs and NSVT are associated with cardiac death post-MI, thus reducing these arrhythmias will reduce death) can be easily and definitively disproven with a placebo-controlled trial with hard endpoints (e.g. death). Correlation does not equal causation.

Modern expert opinion at UpToDate notes no role for suppression of asymptomatic PVCs or NSVT in the peri-infarct period; indeed, such suppression may increase mortality. As noted on Wiki Journal Club, modern ACC/AHA guidelines “do not comment on the use of antiarrhythmic medications in ACS care.”

Further Reading:
1. CAST-I Trial at ClinicalTrials.gov
2. CAST-II trial publication, NEJM (1992)
3. Wiki Journal Club
4. 2 Minute Medicine
5. UpToDate “Clinical features and treatment of ventricular arrhythmias during acute myocardial infarction”

Summary by Duncan F. Moore, MD

Week 10 – MELD

“A Model to Predict Survival in Patients With End-Stage Liver Disease”

Hepatology. 2001 Feb;33(2):464-70. [free full text]

Prior to the adoption of the Model for End-Stage Liver Disease (MELD) score for the allocation of liver transplants, determination of medical urgency was dependent on the Child-Pugh score. The Child-Pugh score was limited by the inclusion of two subjective variables (severity of ascites and severity of encephalopathy), limited discriminatory ability, and a ceiling effect of laboratory abnormalities. Stakeholders sought an objective, continuous, generalizable index that more accurately and reliably represented disease severity. The MELD score had originally been developed in 2000 to estimate the survival of patients undergoing TIPS. The authors of this 2001 study hypothesized that the MELD score would accurately estimate short-term survival in a wide range of severities and etiologies of liver dysfunction and thus serve as a suitable replacement measure for the Child-Pugh score in the determination of medical urgency in transplant allocation.

This study reported a series of retrospective validation cohorts for the use of MELD in prediction of mortality in advanced liver disease.

Methods:

Populations:

  1. cirrhotic inpatients, Mayo Clinic, 1994-1999, n = 282 (see exclusion criteria)
  2. ambulatory patients with newly diagnosed noncholestatic cirrhosis from a single center in Italy, 1981-1984, n = 491 consecutive patients
  3. ambulatory patients with primary biliary cirrhosis, Mayo Clinic, 1973-1984, n = 326 (92 lacked all necessary variables for calculation of MELD)
  4. cirrhotic patients, Mayo Clinic, 1984-1988, n = 1179 patients with sufficient follow-up (≥ 3 months) and laboratory data

Index MELD score was calculated for each patient. Death during follow-up was assessed by chart review.

MELD score = 3.8*ln(serum bilirubin [mg/dL]) + 11.2*ln(INR) + 9.6*ln(serum creatinine [mg/dL]) + 6.4*(etiology: 0 if cholestatic or alcoholic, 1 otherwise)
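
As a minimal sketch of the formula above (in Python; the function and variable names are my own), noting that the later UNOS implementation bounds the laboratory values and omits the etiology term, which this sketch does not attempt to reproduce:

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
               cholestatic_or_alcoholic: bool) -> float:
    """Original MELD score as written above (etiology term: 0 if cholestatic
    or alcoholic liver disease, 1 otherwise)."""
    etiology = 0 if cholestatic_or_alcoholic else 1
    return (3.8 * math.log(bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine_mg_dl)
            + 6.4 * etiology)

# Hypothetical example: bilirubin 3.0 mg/dL, INR 1.5, creatinine 1.2 mg/dL,
# viral (non-cholestatic, non-alcoholic) cirrhosis
print(round(meld_score(3.0, 1.5, 1.2, cholestatic_or_alcoholic=False), 1))  # ~16.9
```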

The primary study outcome was the concordance c-statistic between MELD score and 3-month survival. The c-statistic is equivalent to the area under the receiver operating characteristic curve (AUROC). Per the authors, “a c-statistic between 0.8 and 0.9 indicates excellent diagnostic accuracy and a c-statistic greater than 0.7 is generally considered as a useful test.” (See the paper’s statistical methods for further explanation.)

Notably, no reliable comparison statistic was reported (e.g. c-statistic of MELD vs. Child-Pugh across all cohorts).
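
For intuition, the c-statistic is the probability that a randomly selected patient who died had a higher MELD score than a randomly selected survivor, and it is the same quantity scikit-learn reports as the area under the ROC curve. A minimal sketch with made-up data (not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up toy data (not from the study): 1 = died within 3 months, 0 = survived
died_within_3_months = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
meld_scores          = np.array([8, 21, 10, 24, 15, 30, 14, 11, 27, 19])

# c-statistic = probability a randomly chosen non-survivor outranks a randomly
# chosen survivor on MELD (ties counted as 0.5)
print(roc_auc_score(died_within_3_months, meld_scores))  # 0.88 for this toy data
```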

Results:

Primary:

  • hospitalized Mayo patients (late 1990s): c-statistic for prediction of 3-month survival = 0.87 (95% CI 0.82-0.92)
  • ambulatory, non-cholestatic Italian patients: c-statistic for 3-month survival = 0.80 (95% CI 0.69-0.90)
  • ambulatory PBC patients at Mayo: c-statistic for 3-month survival = 0.87 (95% CI 0.83-0.99)
  • cirrhotic patients at Mayo (1980s): c-statistic for 3-month survival = 0.78 (95% CI 0.74-0.81)

Secondary:

  • There was minimal improvement in the c-statistics for 3-month survival with the individual addition of SBP, variceal bleed, ascites, and encephalopathy to the MELD score (see Table 4, highest increase in c-statistic was 0.03).
  • When the etiology of liver disease was excluded from the MELD score, there was minimal change in the c-statistics (see Table 5, all paired CIs overlap).
  • C-statistics for 1-week mortality ranged from 0.80 to 0.95.

Implication/Discussion:
The MELD score is an excellent predictor of short-term mortality in patients with end-stage liver disease of diverse etiology and severity.

Despite the retrospective nature of this study, the MELD score represented a significant improvement upon the Child-Pugh score in determining medical urgency in patients who require liver transplant.

In 2002, the United Network for Organ Sharing (UNOS) adopted a modified version of the MELD score for the prioritization of deceased-donor liver transplants in cirrhosis.

Concurrent with the 2001 publication of this study, Wiesner et al. performed a prospective validation of the use of MELD in the allocation of liver transplantation. Published in 2003, their study demonstrated that the MELD score accurately predicted 3-month mortality among patients with chronic liver disease on the transplant waitlist.

The MELD score has also been validated in other conditions such as alcoholic hepatitis, hepatorenal syndrome, and acute liver failure (see UpToDate).

Subsequent additions to the MELD score have come out over the years. In 2006, the MELD Exception Guidelines offered extra points for severe comorbidities (e.g. HCC, hepatopulmonary syndrome). In January 2016, the MELD-Na score was adopted and is now used for liver transplant prioritization.

References and Further Reading:
1. “A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts” (2000)
2. MDCalc “MELD Score”
3. Wiesner et al. “Model for end-stage liver disease (MELD) and allocation of donor livers” (2003)
4. Freeman Jr. et al. “MELD exception guidelines” (2006) 
5. 2 Minute Medicine
6. UpToDate “Model for End-stage Liver Disease (MELD)”

Summary by Duncan F. Moore, MD

Week 9 – Bicarbonate supplementation in CKD

“Bicarbonate Supplementation Slows Progression of CKD and Improves Nutritional Status”

J Am Soc Nephrol. 2009 Sep;20(9):2075-84. [free full text]

Metabolic acidosis is a common complication of advanced CKD. Some animal models of CKD have suggested that worsening metabolic acidosis is associated with worsening proteinuria, tubulointerstitial fibrosis, and acceleration of decline of renal function. Short-term human studies have demonstrated that bicarbonate administration reduces protein catabolism and that metabolic acidosis is an independent risk factor for acceleration of decline of renal function. However, until the 2009 study by de Brito-Ashurst et al., there were no long-term studies demonstrating the beneficial effects of oral bicarbonate administration on CKD progression and nutritional status.

Population: CKD patients with CrCl 15-30ml/min and plasma bicarbonate 16-20 mEq/L

Intervention: sodium bicarbonate 600mg PO TID with protocolized uptitration to achieve plasma HCO3 ≥ 23 mEq/L, for 2 years

Comparison: routine care

Outcomes:
primary:
1) decline in CrCl at 2 years
2) “rapid progression of renal failure” (defined as decline of CrCl > 3 ml/min per year)
3) development of ESRD requiring dialysis

secondary:
1) change in dietary protein intake
2) change in normalized protein nitrogen appearance (nPNA)
3) change in serum albumin
4) change in mid-arm muscle circumference

Results:
134 patients were randomized, and baseline characteristics were similar between the two groups. Serum bicarbonate levels increased significantly in the treatment arm (see Figure 2). At two years, CrCl decline was 1.88 ml/min in the treatment group vs. 5.93 ml/min in the control group (p<0.01); rapid progression of renal failure was noted in 9% of the intervention group vs. 45% of the control group (RR 0.15, 95% CI 0.06–0.40, p<0.0001, NNT = 2.8); and ESRD developed in 6.5% of the intervention group vs. 33% of the control group (RR 0.13, 95% CI 0.04–0.40, p<0.001; NNT = 3.8). Regarding nutritional status: dietary protein intake increased in the treatment group relative to the control group (p<0.007), normalized protein nitrogen appearance decreased in the treatment group and increased in the control group (p<0.002), serum albumin increased in the treatment group but was unchanged in the control group, and mean mid-arm muscle circumference increased by 1.5 cm in the intervention group vs. no change in the control group (p<0.03).
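
As a quick arithmetic check of the NNT figures above, using the event rates reported in the trial (a minimal sketch in Python):

```python
def nnt(rate_control: float, rate_treatment: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (rate_control - rate_treatment)

# Rapid progression of renal failure: 45% (control) vs. 9% (bicarbonate)
print(round(nnt(0.45, 0.09), 1))    # 2.8
# Development of ESRD requiring dialysis: 33% vs. 6.5%
print(round(nnt(0.33, 0.065), 1))   # 3.8
```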

Implication/Discussion:
Oral bicarbonate supplementation in CKD patients with metabolic acidosis reduces the rate of CrCl decline and progression to ESRD and improves nutritional status.

Primarily on the basis of this study, the KDIGO 2012 guidelines for the management of CKD recommend oral bicarbonate supplementation to maintain serum bicarbonate within the normal range (23-29 mEq/L).

This is a remarkably cheap and effective intervention. Importantly, the rates of adverse events, particularly worsening hypertension and increasing edema, were unchanged between the two groups. Of note, sodium bicarbonate induces much less volume expansion than a comparable sodium load of sodium chloride.

In their discussion, the authors suggest that their results support the hypothesis of Nath et al. (1985) that “compensatory changes [in the setting of metabolic acidosis] such as increased ammonia production and the resultant complement cascade activation in remnant tubules in the declining renal mass [are] injurious to the tubulointerstitium.”

The hypercatabolic state of advanced CKD appears to be mitigated by bicarbonate supplementation. The authors note that “an optimum nutritional status has positive implications on the clinical outcomes of dialysis patients, whereas [protein-energy wasting] is associated with increased morbidity and mortality.”

Limitations of this trial include its open-label design without a placebo control. Also, the applicable population is limited by the study exclusion criteria of morbid obesity, overt CHF, and uncontrolled HTN.

Further Reading:
1. Nath et al. “Pathophysiology of chronic tubulo-interstitial disease in rats: Interactions of dietary acid load, ammonia, and complement component-C3” (1985)
2. KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease (see page 89)
3. UpToDate

Summary by Duncan F. Moore, MD

Week 8 – CORTICUS

“Hydrocortisone Therapy for Patients with Septic Shock”

N Engl J Med. 2008 Jan 10;358(2):111-24. [free full text]

Steroid therapy in septic shock has been a hotly debated topic since the 1980s. The Annane trial in 2002 suggested a mortality benefit to early steroid therapy, and so for almost a decade this became standard of care. In 2008, the CORTICUS trial was performed, suggesting otherwise.

Population:
– inclusion criteria: ICU patients with onset of septic shock within the past 72 hrs (defined as SBP < 90 despite fluids or need for vasopressors, and hypoperfusion or organ dysfunction from sepsis)
– exclusion criteria: “underlying disease with a poor prognosis,” life expectancy < 24hrs, immunosuppression, recent corticosteroid use

Intervention: hydrocortisone 50mg IV q6h x5 days with taper

Comparison: placebo injections q6h x5 days plus taper

Outcome:

Primary: 28 day mortality among patients who did not have a response to ACTH stim test (cortisol rise < 9mcg/dL)

Secondary:
– 28 day mortality in patients who had a positive response to ACTH stim test
– 28 day mortality in all patients
– reversal of shock (defined as SBP ≥ 90 for at least 24hrs without vasopressors) in all patients
– time to reversal of shock in all patients

Results:
In ACTH non-responders (N=233): intervention vs. control 28 day mortality was 39.2% vs. 36.1% (p=0.69)

In ACTH responders (N=254): intervention vs. control 28 day mortality was 28.8% vs. 28.7% (p=1.00); reversal of shock 84.7% vs. 76.5% (p=0.13)

Among all patients:
– intervention vs. control 28 day mortality was 34.3% vs. 31.5% (p=0.51)
– reversal of shock 79.7% vs. 74.2% (p=0.18)
– duration of time to reversal of shock was significantly shorter among patients receiving hydrocortisone (per Kaplan-Meier analysis, p<0.001; see Figure 2), median time to reversal 3.3 days vs. 5.8 days (95% CI 5.2 – 6.9)

Discussion:
The CORTICUS trial demonstrated no mortality benefit of steroid therapy in septic shock, regardless of a patient’s response to ACTH. Despite the lack of mortality benefit, it demonstrated an earlier resolution of shock with steroids. This lack of mortality benefit sharply contrasted with the previous Annane study. Several reasons have been posited for this including poor powering of the CORTICUS study (it did not reach the desired N=800), CORTICUS inclusion starting within 72 hrs of septic shock vs. Annane starting within 8 hrs, and Annane patients generally being sicker (including their inclusion criterion of mechanical ventilation). Subsequent meta-analyses disagree about the mortality benefit of steroids, but meta-regression analyses suggest benefit among the sickest patients. All studies agree about the improvement in shock reversal. The 2016 Surviving Sepsis Campaign guidelines recommend IV hydrocortisone in septic shock in patients who continue to be hemodynamically unstable despite adequate fluid resuscitation and vasopressor therapy.

Per Drs. Sonti and Vinayak of the GUH MICU (excerpted from their excellent Georgetown Critical Care Top 40): “Practically, we use steroids when reaching for a second pressor or if there is multiorgan system dysfunction. Our liver patients may have deficient cortisol production due to inadequate precursor lipid production; use of corticosteroids in these patients represents physiologic replacement rather than adjunct supplement.”

References / Further Reading:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock (2016), section “Corticosteroids”
4. Annane trial (2002) [free full text]
5. Georgetown Critical Care Top 40 [iTunes / iBooks link]
6. UpToDate,“Glucocorticoid therapy in septic shock”

Summary by Gordon Pelegrin, MD

Week 7 – FUO

“Fever of Unexplained Origin: Report on 100 Cases”

Medicine (Baltimore). 1961 Feb;40:1-30. [free full text]

In our modern usage, fever of unknown origin (FUO) refers to a persistent unexplained fever despite an adequate medical workup. The most commonly used criteria for this diagnosis stem from the 1961 series by Petersdorf and Beeson.

This study analyzed a prospective cohort of patients evaluated at Yale’s hospital for FUO between 1952 and 1957. Their FUO criteria: 1) illness of more than three weeks’ duration, 2) fever higher than 101° F on several occasions, 3) diagnosis uncertain after one week of study in hospital. After 126 cases had been noted, retrospective investigation was undertaken to determine the ultimate etiologies of the fevers. The authors winnowed this group to 100 cases based on availability of follow-up data and the exclusion of cases that “represented combinations of such common entities as urinary tract infection and thrombophlebitis.”

Results:
126 cases were reviewed as noted above, and ultimately 100 were selected for analysis. In 93 cases “a reasonably certain diagnosis was eventually possible.” 6 of the 7 undiagnosed patients ultimately made a full recovery. Underlying etiology (see table 1 on page 3): infectious 36% (including TB in 11%), neoplastic diseases 19%, collagen disease (e.g. SLE) 13%, pulmonary embolism 3%, benign non-specific pericarditis 2%, sarcoidosis 2%, hypersensitivity reaction 4%, cranial arteritis 2%, periodic disease 5%, miscellaneous disease 4%, factitious fever 3%, no diagnosis made 7%.

Implication/Discussion:
Clearly, diagnostic modalities have improved markedly since this 1961 study. However, the core etiologies of infection, malignancy, and connective tissue disease / non-infectious inflammatory disease remain most prominent, while the percentage of patients with no ultimate diagnosis has been increasing (for example, see PMIDs 9413425, 12742800, and 17220753). Modifications to the 1961 criteria have been proposed (e.g. 1 week duration of hospital stay not required if certain diagnostic measures have been performed) and implemented in recent FUO trials. One modern definition of FUO: fever ≥ 38.3° C, lasting at least 2-3 weeks, with no identified cause after three days of hospital evaluation or three outpatient visits.

Per UpToDate, the following minimum diagnostic workup is recommended in suspected FUO: blood cultures, ESR or CRP, LDH, HIV, RF, heterophile antibody test, CK, ANA, TB testing, SPEP, CT of abdomen and chest.

Further Reading:
1. “Fever of unknown origin (FUO). I A. prospective multicenter study of 167 patients with FUO, using fixed epidemiologic entry criteria. The Netherlands FUO Study Group.” Medicine (Baltimore). 1997 Nov;76(6):392-400.
2. “From prolonged febrile illness to fever of unknown origin: the challenge continues.” Arch Intern Med. 2003 May 12;163(9):1033-41.
3. “A prospective multicenter study on fever of unknown origin: the yield of a structured diagnostic protocol.” Medicine (Baltimore). 2007 Jan;86(1):26-38.
4. UpToDate, “Approach to the Adult with Fever of Unknown Origin”
5. “Robert Petersdorf, 80, Major Force in U.S. Medicine, Dies” The New York Times, 2006

Summary by Duncan F. Moore, MD

Week 6 – COURAGE

“Optimal Medical Therapy with or without PCI for Stable Coronary Disease”

by the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) Trial Research Group

N Engl J Med. 2007 Apr 12;356(15):1503-16 [free full text]

The optimal medical management of stable coronary artery disease has been well-described. However, prior to the 2007 COURAGE trial, the role of percutaneous coronary intervention (PCI) in the initial management of stable coronary artery disease was unclear. It was known that PCI improved angina symptoms and short-term exercise performance in stable disease, but its mortality benefit and reduction of future myocardial infarction and ACS were unknown.

Population: US and Canadian patients with stable coronary artery disease
(See paper for inclusion/exclusion criteria. Disease had to be sufficiently and objectively severe, but not too severe, and symptoms could not be sustained at the highest CCS grade.)
Intervention: optimal medical management and PCI
(Optimal medical management included antiplatelet, anti-anginal, ACEi/ARB, and cholesterol-lowering therapy.)
Comparison: optimal medical management alone
Outcome:
1º: composite of all-cause mortality and non-fatal MI
2º: composite of all-cause mortality, non-fatal MI, and stroke; hospitalization for unstable angina

Results:
2287 patients were randomized. Both groups had similar baseline characteristics with the exception of a higher prevalence of proximal LAD disease in the medical-therapy group. Median duration of follow-up was 4.6 years in both groups. Death or non-fatal MI occurred in 18.4% of the PCI group and in 17.8% of the medical-therapy group (p=0.62). Death, non-fatal MI, or stroke occurred in 20.0% of the PCI group and 19.5% of the medical-therapy group (p=0.62). Hospitalization for ACS occurred in 12.4% of the PCI group and 11.8% of the medical-therapy group (p=0.56). Revascularization during follow-up was performed in 21.1% of the PCI group but in 32.6% of the medical-therapy group (HR 0.60, 95% CI 0.51–0.71, p<0.001). Finally, 66% of PCI patients were free of angina at 1 year follow-up compared with 58% of medical-therapy patients (p<0.001); rates were 72% and 67% at 3 years (p=0.02) and 72% and 74% at five years (not significant).

Implication/Discussion:
In the initial management of stable coronary artery disease, PCI in addition to optimal medical management provided no mortality benefit over optimal medical management alone.

However, initial management with PCI did provide a time-limited improvement in angina symptoms.

As the authors of COURAGE nicely summarize on page 1512, the atherosclerotic plaques of ACS and stable CAD are different. Vulnerable, ACS-prone plaques have thin caps and spread outward along the wall of the coronary artery, as opposed to the plaques of stable CAD which have thick fibrous caps and are associated with inward-directed remodeling that narrows the artery lumen (and thus cause reliable angina symptoms and luminal narrowing on coronary angiography).

Notable limitations of this study: 1) the population was largely male and white, and 42% came from VA hospitals, thus limiting the generalizability of the study; 2) drug-eluting stents were not clinically available until the last 6 months of the study, so most stents placed were bare metal.

Later meta-analyses were weakly suggestive of an association of PCI with improved all-cause mortality. It is thought that there may be a subset of patients with stable CAD who achieve a mortality benefit from PCI. Per UpToDate, there are ongoing RCTs investigating this possibility.

It is important to note that all of the above discussions assume that the patient does not have specific coronary artery anatomy in which initial CABG would provide a mortality benefit (e.g. left main disease, multi-vessel disease with decreased LVEF). Finally, PCI should be considered in patients whose physical activity is limited by angina symptoms despite optimal medical therapy.

Further Reading:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Canadian Cardiovascular Society grading of angina pectoris
4. https://www.uptodate.com/contents/stable-ischemic-heart-disease-indications-for-revascularization

Summary by Duncan F. Moore, MD

Week 5 – Albumin in SBP

“Effect of Intravenous Albumin on Renal Impairment and Mortality in Patients with Cirrhosis and Spontaneous Bacterial Peritonitis”

N Engl J Med. 1999 Aug 5;341(6):403-9. [free full text]

Renal failure commonly develops in the setting of SBP, and its development is a sensitive predictor of in-hospital mortality. The renal impairment is thought to stem from decreased effective arterial blood volume secondary to the systemic inflammatory response to the infection. In our current practice, there are certain circumstances in which we administer albumin early in the SBP disease course in order to reduce the risk of renal failure and mortality. Ultimately, our current protocol originated from the 1999 study of albumin in SBP by Sort et al.

Population: adults with SBP (see paper for extensive list of exclusion criteria)
Intervention: cefotaxime and albumin infusion 1.5gm/kg within 6hrs of enrollment, followed by 1gm/kg on day 3
Comparison: cefotaxime alone
Outcome:
1º: development of “renal impairment” (a “nonreversible” increase in BUN or Cr by more than 50% to a value greater than 30 mg/dL or 1.5 mg/dL, respectively) during hospitalization
2º: mortality during hospitalization

Results:
126 patients were randomized. Both groups had similar baseline characteristics, and both had similar rates of resolution of infection. Renal impairment occurred in 10% of the albumin group and 33% of the cefotaxime-alone group (p=0.02). In-hospital mortality was 10% in the albumin group and 29% in the cefotaxime-alone group (p=0.01). 78% of patients who developed renal impairment died in-hospital, while only 3% of patients who did not develop renal impairment died. Plasma renin activity was significantly higher on days 3, 6, and 9 in the cefotaxime-alone group than in the albumin group, while there were no significant differences in MAP between the two groups at those time intervals. Multivariate analysis of all trial participants revealed that baseline serum bilirubin and creatinine were independent predictors of the development of renal impairment.

Implication/Discussion:
Albumin administration reduces renal impairment and improves mortality in patients with SBP.

The findings of this landmark trial were refined by a brief 2007 report by Sigal et al., “Restricted use of albumin for spontaneous bacterial peritonitis.” “High-risk” patients, identified by a baseline serum bilirubin ≥ 4.0 mg/dL or Cr ≥ 1.0 mg/dL, were given albumin 1.5gm/kg on day 1 and 1gm/kg on day 3, while low-risk patients were not given albumin. None of the 15 low-risk patients developed renal impairment or died, whereas 12 of 21 (57%) of the high-risk group developed renal impairment, and 5 of the 21 (24%) died. The authors concluded that patients with bilirubin < 4.0 and Cr < 1.0 do not need scheduled albumin in the treatment of SBP.

The current (2012) American Association for the Study of Liver Diseases guidelines for the management of adult patients with ascites due to cirrhosis do not definitively recommend criteria for albumin administration in SBP – they instead summarize the above two studies.

A 2013 meta-analysis of four reports/trials (including the two above) concluded that albumin infusion reduced renal impairment and improved mortality with pooled odds ratios approximately commensurate with those of the 1999 study by Sort et al.

Ultimately, the current recommended practice per expert opinion is to perform albumin administration per the protocol outlined by Sigal et al. (2007).

Further Reading:
1. AASLD Guidelines for Management of Adult Patients with Ascites Due to Cirrhosis (skip to page 77)
2. Sigal et al. “Restricted use of albumin for spontaneous bacterial peritonitis”
3. Meta-analysis: “Albumin infusion improves outcomes of patients with spontaneous bacterial peritonitis: a meta-analysis of randomized trials”
4. Wiki Journal Club
5. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Week 4 – NLST

“Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”

by the National Lung Screening Trial Research Team

N Engl J Med. 2011 Aug 4;365(5):395-409 [NEJM free full text]

Despite a reduction in smoking rates in the United States, lung cancer remains the number one cause of cancer death in the United States, as well as worldwide. Earlier studies of plain chest radiograph for lung cancer screening demonstrated no benefit, and thus in 2002 the National Lung Screening Trial (NLST) was undertaken to determine whether then-recent advances in CT technology could lead to an effective lung cancer screening method.

Population: adults age 55-74 with 30+ pack-years of smoking (if former smokers, they must have quit within the past 15 years)
Intervention: three annual screenings for lung cancer with low-dose CT
Comparison: three annual screenings for lung cancer with PA chest radiograph
Outcome: 1º = mortality from lung cancer, 2º = mortality from any cause and incidence of lung cancer

Results/Conclusion:
53,454 patients were randomized, and both groups had similar baseline characteristics. The low-dose CT group demonstrated 247 deaths from lung cancer per 100,000 person-years, whereas the radiography group demonstrated 309 deaths per 100,000 person-years. Thus a relative reduction in rate of death by 20.0% was seen in the CT group (95% CI 6.8 – 26.7%, p = 0.004). The number needed to screen with CT to prevent one lung cancer death was 320. There were 1877 deaths from any cause in the CT group and 2000 deaths in the radiography group; thus CT screening demonstrated a risk reduction of death from any cause of 6.7% (95% CI 1.2% – 13.6%, p = 0.02). Incidence of lung cancer in the CT group was 645 per 100,000 person-years and 941 per 100,000 person-years in the radiography group (RR 1.13, 95% CI 1.03 – 1.23).
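
The reported ~20% relative reduction can be reproduced from the per-100,000-person-year rates quoted above (a minimal sketch in Python):

```python
# Lung cancer deaths per 100,000 person-years, as reported above
rate_low_dose_ct = 247
rate_chest_xray  = 309

relative_rate_reduction = (rate_chest_xray - rate_low_dose_ct) / rate_chest_xray
print(f"{relative_rate_reduction:.0%}")  # ~20%, consistent with the reported 20.0%
```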

Implication/Discussion:
Lung cancer screening with low-dose CT scan in high-risk patients provides a significant mortality benefit.

This trial was stopped early because the mortality benefit was so high. The benefit was driven by the reduction in deaths attributed to lung cancer; when deaths from lung cancer were excluded from the overall mortality analysis, there was no significant difference between the two arms. Largely on the basis of this study, the 2013 USPSTF guidelines for lung cancer screening recommend annual low-dose CT scan in patients who meet NLST inclusion criteria.

Per UpToDate, there are seven low-dose CT screening trials in progress in Europe. It is hoped that meta-analysis of all such RCTs will allow for further refinement in risk stratification, frequency of screening, and management of positive screening findings.

Of note, no randomized trial has ever demonstrated a mortality benefit of chest radiography for lung cancer screening. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial tested this modality vs. “community care,” and because the PLCO trial was ongoing at the time of the creation of the NLST, the NLST authors decided to compare their intervention (CT) to chest radiography, in case the results of chest radiography in PLCO were positive (ultimately they were not).

Further Reading:
1. USPSTF Guidelines for Lung Cancer Screening (2013)
2. ClinicalTrials.gov
3. Wiki Journal Club
4. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Week 3 – Dexamethasone in Bacterial Meningitis

“Dexamethasone in Adults With Bacterial Meningitis”

N Engl J Med 2002; 347:1549-1556 [NEJM free full text]

The current standard of care in the treatment of suspected bacterial meningitis in the developed world includes the administration of dexamethasone prior to or at the time of antibiotic initiation. The initial evaluation of this practice in part stemmed from animal studies which demonstrated that dexamethasone reduces CSF concentrations of inflammatory markers as well as neurologic sequelae after meningitis. RCTs in the pediatric literature also demonstrated clinical benefit. The best prospective trial in adults was this 2002 study by de Gans et al.

Population: adults with suspected meningitis

Intervention: dexamethasone 10mg IV q6hrs x4 days started 15-20 minutes before first IV abx

Comparison: placebo IV with same administration as above

Outcome:
primary = Glasgow Outcome Scale at 8 weeks (1 = death, 2 = vegetative state, 3 = unable to live independently, 4 = unable to return to school/work, 5 = able to return to school/work)
secondary = death, focal neurologic abnormalities, and others
subgroup analyses performed by organism

Results/Conclusion:
301 patients were randomized. At 8 weeks, 15% of dexamethasone patients had an unfavorable outcome (Glasgow Outcome Scale score of 1-4), vs. 25% of placebo patients (RR 0.59, 95% CI 0.37 – 0.94, p = 0.03). Among patients with pneumococcal meningitis, 26% of dexamethasone patients had an unfavorable outcome, vs. 52% of placebo patients. There was no significant difference between treatment arms within the subgroup of patients with meningococcal meningitis. Overall, death occurred in 7% of dexamethasone patients and 15% of placebo patients (RR 0.48, 95% CI 0.24 – 0.96, p = 0.04). In pneumococcal meningitis, 14% of dexamethasone patients died, vs. 34% of placebo patients. There was no difference in rates of focal neurologic abnormalities or hearing loss in either treatment arm (including within any subgroup).

Implication/Discussion:
Early adjunctive dexamethasone improves mortality in bacterial meningitis.

As noted in the above subgroup analysis, this benefit appears to be driven by the efficacy within the pneumococcal meningitis subgroup. Of note, the standard initial treatment regimen in this study was amoxicillin 2gm q4hrs for 7-10 days, not our standard ceftriaxone + vancomycin +/- ampicillin. Largely on the basis of this study alone, the IDSA guidelines for the treatment of bacterial meningitis (2004) recommend dexamethasone 0.15 mg/kg q6hrs for 2-4 days with first dose administered 10-20 min before or concomitant with initiation of antibiotics. Dexamethasone should be continued only if CSF Gram stain, CSF culture, or blood cultures are consistent with pneumococcus.

Further Reading:
1. IDSA guidelines for management of bacterial meningitis (2004)
2. Wiki Journal Club
3. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Week 2 – AFFIRM

“A Comparison of Rate Control and Rhythm Control in Patients with Atrial Fibrillation”

by the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) Investigators

N Engl J Med. 2002 Dec 5;347(23):1825-33. [NEJM free full text]

It seems like the majority of patients with atrial fibrillation that we encounter as residents today are being treated with a rate control strategy, as opposed to a rhythm control strategy. There was a time when both approaches were considered acceptable, and perhaps rhythm control was even the preferred initial strategy. The AFFIRM trial was the landmark study to address this debate.

Population: patients with atrial fibrillation (judged “likely to be recurrent”), age 65 or older “or who had other risk factors for stroke or death,” and in whom anticoagulation was not contraindicated

Intervention: rhythm control strategy with one or more drugs from a pre-specified list and/or cardioversion to achieve sinus rhythm

Comparison: rate control strategy with beta-blockers, CCBs, and/or digoxin to a target resting HR ≤ 80 and a six-minute walk test HR ≤ 110

Outcome:
– primary endpoint – death during follow-up (per Kaplan-Meier estimator)
– secondary endpoint – composite end point of death, disabling stroke, disabling anoxic encephalopathy, major bleeding, and cardiac arrest
– secondary analyses – primary end point in pre-specified subgroups (e.g. age ≥ 65, comorbid CAD, etc.)

Results/Conclusion:
4060 patients were randomized in this multi-center RCT. Death occurred in 26.7% of rhythm control patients versus 25.9% of rate control patients (HR 1.15, 95% CI 0.99 – 1.34, p = 0.08). The composite secondary endpoint occurred in 32.0% of rhythm control patients versus 32.7% of rate control patients (p = 0.33). The rhythm control strategy was associated with a higher risk of death among patients older than 65 and patients with CAD (see Figure 2). Additionally, rhythm control patients were more likely to be hospitalized during follow-up (80.1% vs. 73.0%, p < 0.001) and to develop torsades de pointes (0.8% vs. 0.2%, p = 0.007).

Implication/Discussion:
A rhythm control strategy in atrial fibrillation offers no mortality benefit over a rate control strategy.

At the time of publication, the authors wrote that rate control was an “accepted, though often secondary alternative” to rhythm control. Their study clearly demonstrated that there was no significant mortality benefit to either strategy, that hospitalizations were greater in the rhythm control group, and, in subgroup analysis, that rhythm control led to higher mortality among the elderly and those with CAD. Notably, 37.5% of rhythm control patients had crossed over to the rate control strategy by 5 years of follow-up, whereas only 14.9% of rate control patients had switched over to rhythm control.

But what does this study mean for our practice today? Generally speaking, rate control is preferred in most patients, particularly the elderly and patients with CHF, whereas rhythm control may be pursued in patients with persistent symptoms despite rate control, patients unable to achieve rate control on AV nodal agents alone, and patients younger than 65. Both the AHA/ACC (2014) and the European Society of Cardiology (2016) guidelines have extensive recommendations that detail specific patient scenarios.

Further Reading:
1. Cardiologytrials.org
2. Wiki Journal Club
3. 2 Minute Medicine

Summary by Duncan F. Moore, MD