Week 20 – MELD

“A Model to Predict Survival in Patients With End-Stage Liver Disease”

Hepatology. 2001 Feb;33(2):464-70. [free full text]

Prior to the adoption of the Model for End-Stage Liver Disease (MELD) score for the allocation of liver transplants, the determination of medical urgency was dependent on the Child-Pugh score. The Child-Pugh score was limited by the inclusion of two subjective variables (severity of ascites and severity of encephalopathy), limited discriminatory ability, and a ceiling effect of laboratory abnormalities. Stakeholders sought an objective, continuous, generalizable index that more accurately and reliably represented disease severity. The MELD score had originally been developed in 2000 to estimate the survival of patients undergoing TIPS. The authors of this 2001 study hypothesized that the MELD score would accurately estimate short-term survival in a wide range of severities and etiologies of liver dysfunction and thus serve as a suitable replacement measure for the Child-Pugh score in the determination of medical urgency in transplant allocation.

This study reported a series of four retrospective validation cohorts for the use of MELD in prediction of mortality in advanced liver disease. The index MELD score was calculated for each patient. Death during follow-up was assessed by chart review.

MELD score = 3.8 * ln([bilirubin]) + 11.2 * ln(INR) + 9.6 * ln([Cr]) + 6.4 * (etiology: 0 if cholestatic or alcoholic, 1 otherwise)
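
As a quick illustration of how this formula is applied, here is a minimal Python sketch (the function name and the assumption of mg/dL units for bilirubin and creatinine are ours for illustration, not taken from the paper):

```python
import math

def meld_score(bilirubin, inr, creatinine, cholestatic_or_alcoholic):
    """Original (2001) MELD score, per the formula above.
    Bilirubin and creatinine are assumed to be in mg/dL; natural logs throughout."""
    etiology = 0 if cholestatic_or_alcoholic else 1
    return (3.8 * math.log(bilirubin)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine)
            + 6.4 * etiology)

# Example: bilirubin 3.0 mg/dL, INR 1.5, creatinine 1.2 mg/dL, viral etiology
print(round(meld_score(3.0, 1.5, 1.2, cholestatic_or_alcoholic=False), 1))  # ~16.9
```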

The primary study outcome was the concordance c-statistic between MELD score and 3-month survival. The c-statistic is equivalent to the area under the receiver operating characteristic curve (AUROC). Per the authors, “a c-statistic between 0.8 and 0.9 indicates excellent diagnostic accuracy and a c-statistic greater than 0.7 is generally considered as a useful test.” (See page 455 for further explanation.) There was no reliable comparison statistic (e.g. c-statistic of MELD vs. that of Child-Pugh in all groups).
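
To make the concordance interpretation concrete, here is a toy Python sketch (invented data, not from the paper) that computes a c-statistic directly as the proportion of concordant score-outcome pairs:

```python
from itertools import combinations

def c_statistic(scores, died):
    """Fraction of death/survival pairs in which the patient who died had the
    higher score; ties in score count as 0.5 (equivalent to the AUROC)."""
    pairs = [(s1, d1, s2, d2)
             for (s1, d1), (s2, d2) in combinations(zip(scores, died), 2)
             if d1 != d2]
    total = 0.0
    for s1, d1, s2, d2 in pairs:
        if s1 == s2:
            total += 0.5
        elif (s1 > s2 and d1) or (s2 > s1 and d2):
            total += 1.0
    return total / len(pairs)

# Toy data: prognostic scores and 3-month death indicators
print(c_statistic([8, 14, 22, 30, 11], [False, False, True, True, False]))  # 1.0
```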

C-statistic for 3-month survival in the four cohorts ranged from 0.78 to 0.87 (no 95% CIs exceeded 1.0). There was minimal improvement in the c-statistics for 3-month survival with the individual addition of spontaneous bacterial peritonitis, variceal bleed, ascites, and encephalopathy to the MELD score (see Table 4, highest increase in c-statistic was 0.03). When the etiology of liver disease was excluded from the MELD score, there was minimal change in the c-statistics (see Table 5, all paired CIs overlap). C-statistics for 1-week mortality ranged from 0.80 to 0.95.

In conclusion, the MELD score is an excellent predictor of short-term mortality in patients with end-stage liver disease of diverse etiology and severity. Despite its retrospective design, this study represented a significant improvement upon the Child-Pugh score in determining medical urgency in patients who require liver transplant. In 2002, the United Network for Organ Sharing (UNOS) adopted a modified version of the MELD score for the prioritization of deceased-donor liver transplants in cirrhosis. Concurrent with the 2001 publication of this study, Wiesner et al. performed a prospective validation of the use of MELD in the allocation of liver transplantation; published in 2003, it demonstrated that the MELD score accurately predicted 3-month mortality among patients with chronic liver disease on the transplant waitlist. The MELD score has also been validated in other conditions such as alcoholic hepatitis, hepatorenal syndrome, and acute liver failure (see UpToDate). The MELD-based allocation system has been modified several times since. In 2006, the MELD Exception Guidelines offered extra points for severe comorbidities (e.g. HCC, hepatopulmonary syndrome). In January 2016, the MELD-Na score was adopted and is now used for liver transplant prioritization.

References and Further Reading:
1. “A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts” (2000)
2. MDCalc “MELD Score”
3. Wiesner et al. “Model for end-stage liver disease (MELD) and allocation of donor livers” (2003)
4. Freeman Jr. et al. “MELD exception guidelines” (2006)
5. MELD @ 2 Minute Medicine
6. UpToDate “Model for End-stage Liver Disease (MELD)”

Summary by Duncan F. Moore, MD

Image Credit: Ed Uthman, CC-BY-2.0, via WikiMedia Commons

Week 17 – CURB-65

“Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study”

Thorax. 2003 May;58(5):377-82. [free full text]

Community-acquired pneumonia (CAP) is frequently encountered by the admitting medicine team. Ideally, the patient’s severity at presentation and risk for further decompensation should determine the appropriate setting for further care, whether as an outpatient, on an inpatient ward, or in the ICU. At the time of this 2003 study, the predominant decision aid was the 20-variable Pneumonia Severity Index. The authors of this study sought to develop a simpler decision aid for determining the appropriate level of care at presentation.

The study examined the 30-day mortality rates of adults admitted for CAP via the ED at three non-US academic medical centers (data from three previous CAP cohort studies). 80% of the dataset was analyzed as a derivation cohort – meaning it was used to identify statistically significant, clinically relevant prognostic factors that allowed for mortality risk stratification. The resulting model was applied to the remaining 20% of the dataset (the validation cohort) in order to assess the accuracy of its predictive ability.

The following variables were integrated into the final model (CURB-65); a minimal scoring sketch in code follows the list:

      1. Confusion
      2. Urea > 19 mg/dL (> 7 mmol/L)
      3. Respiratory rate ≥ 30 breaths/min
      4. low Blood pressure (systolic BP < 90 mmHg or diastolic BP ≤ 60 mmHg)
      5. age ≥ 65
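
A minimal Python sketch of the scoring (function and argument names are illustrative, not from the paper; one point is assigned per criterion, as described below):

```python
def curb65(confusion, urea_mg_dl, resp_rate, sbp, dbp, age):
    """CURB-65: one point for each criterion met (urea threshold given in mg/dL)."""
    return sum([
        bool(confusion),
        urea_mg_dl > 19,
        resp_rate >= 30,
        sbp < 90 or dbp <= 60,
        age >= 65,
    ])

# Example: confused, urea 25 mg/dL, RR 32, BP 85/55, age 80 -> score of 5
print(curb65(True, 25, 32, 85, 55, 80))
```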

1068 patients were analyzed. 821 (77%) were in the derivation cohort. 86% of patients received IV antibiotics, 5% were admitted to the ICU, and 4% were intubated. 30-day mortality was 9%. 9 of 11 clinical features examined in univariate analysis were statistically significant (see Table 2).

Ultimately, using the above-described CURB-65 model, in which 1 point is assigned for each clinical characteristic, patients with a CURB-65 score of 0 or 1 had 1.5% mortality, patients with a score of 2 had 9.2% mortality, and patients with a score of 3 or more had 22% mortality. Similar values were demonstrated in the validation cohort. Table 5 summarizes the sensitivity, specificity, PPVs, and NPVs of each CURB-65 score for 30-day mortality in both cohorts. As we would expect from a good predictive model, the sensitivity starts out very high and decreases with increasing score, while the specificity starts out very low and increases with increasing score. For the clinical application of their model, the authors selected the cut points of 1, 2, and 3 (see Figure 2).
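
To see why that sensitivity/specificity pattern emerges as the cut point is raised, here is a toy Python sketch (the data are invented for illustration and are not from the study):

```python
def sens_spec(scores, died, cutoff):
    """Classify score >= cutoff as 'predicted death'; return (sensitivity, specificity)."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, died))
    fn = sum(s < cutoff and d for s, d in zip(scores, died))
    tn = sum(s < cutoff and not d for s, d in zip(scores, died))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, died))
    return tp / (tp + fn), tn / (tn + fp)

# Toy CURB-65 scores paired with 30-day death indicators
scores = [0, 1, 1, 2, 2, 3, 3, 4, 5]
died   = [False, False, False, False, True, False, True, True, True]
for cutoff in range(1, 6):
    sens, spec = sens_spec(scores, died, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```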

In conclusion, CURB-65 is a simple 5-variable decision aid that is helpful in the initial stratification of mortality risk in patients with CAP.

The graded trade-off between sensitivity and specificity across CURB-65 scores makes it a flexible tool for risk stratification. The authors felt that patients with a score of 0-1 were “likely suitable for home treatment,” patients with a score of 2 should have “hospital-supervised treatment,” and patients with a score of ≥ 3 had “severe pneumonia” and should be admitted (with consideration of ICU admission if the score is 4 or 5).

Following the publication of the CURB-65 Score, the creator of the Pneumonia Severity Index (PSI) published a prospective cohort study of CAP that examined the discriminatory power (area under the receiver operating characteristic curve) of the PSI vs. CURB-65. His study found that the PSI “has a higher discriminatory power for short-term mortality, defines a greater proportion of patients at low risk, and is slightly more accurate in identifying patients at low risk” than the CURB-65 score.

Expert opinion at UpToDate prefers the PSI over the CURB-65 score based on its more robust base of confirmatory evidence. Of note, the author of the PSI is one of the authors of the relevant UpToDate article. In an important contrast from the CURB-65 authors, these experts suggest that patients with a CURB-65 score of 0 be managed as outpatients, while patients with a score of 1 and above “should generally be admitted.”

Further Reading/References:
1. Original publication of the PSI, NEJM (1997)
2. PSI vs. CURB-65 (2005)
3. CURB-65 @ Wiki Journal Club
4. CURB-65 @ 2 Minute Medicine
5. UpToDate, “CAP in adults: assessing severity and determining the appropriate level of care”

Summary by Duncan F. Moore, MD

Week 13 – VERT

“Effects of Risedronate Treatment on Vertebral and Nonvertebral Fractures in Women With Postmenopausal Osteoporosis”

by the Vertebral Efficacy with Risedronate Therapy (VERT) Study Group

JAMA. 1999 Oct 13;282(14):1344-52. [free full text]

Bisphosphonates are a highly effective and relatively safe class of medications for the prevention of fractures in patients with osteoporosis. The VERT trial published in 1999 was a landmark trial that demonstrated this protective effect with the daily oral bisphosphonate risedronate.

The trial enrolled post-menopausal women with either 2 or more vertebral fractures per radiography or 1 vertebral fracture with decreased lumbar spine bone mineral density. Patients were randomized to a treatment arm (risedronate 2.5mg PO daily or risedronate 5mg PO daily) or to the daily PO placebo control arm. Measured outcomes included: 1) the prevalence of new vertebral fracture at 3 years follow-up, per annual imaging, 2) the prevalence of new non-vertebral fracture at 3 years follow-up, per annual imaging, and 3) change in bone mineral density, per DEXA q6 months.

2458 patients were randomized. During the course of the study, “data from other trials indicated that the 2.5mg risedronate dose was less effective than the 5mg dose,” and thus the authors discontinued further data collection on the 2.5mg treatment arm 1 year into the study. All treatment groups had similar baseline characteristics. 55% of the placebo group and 60% of the 5mg risedronate group completed 3 years of treatment. The prevalence of new vertebral fracture within 3 years was 11.3% in the risedronate group and 16.3% in the placebo group (RR 0.59, 95% CI 0.43-0.82, p = 0.003; NNT = 20). The prevalence of new non-vertebral fractures at 3 years was 5.2% in the treatment arm and 8.4% in the placebo arm (RR 0.6, 95% CI 0.39-0.94, p = 0.02; NNT = 31). Regarding bone mineral density (BMD), see Figure 4 for a visual depiction of the changes in BMD by treatment group at the various 6-month timepoints. Notably, change from baseline BMD of the lumbar spine and femoral neck was significantly higher (and positive) in the risedronate 5mg group at all follow-up timepoints relative to the placebo group and at all timepoints except 6 months for the femoral trochanter measurements. Regarding adverse events, there was no difference in the incidence of upper GI adverse events between the two groups. GI complaints “were the most common adverse events associated with study discontinuance,” and GI events led to 42% of placebo withdrawals but only 36% of the 5mg risedronate withdrawals.
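
As a quick arithmetic check on the NNTs quoted above, the number needed to treat is simply the reciprocal of the absolute risk reduction; a short Python sketch using the fracture rates reported above:

```python
def nnt(control_rate, treatment_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_rate - treatment_rate)

print(round(nnt(0.163, 0.113)))  # new vertebral fracture: ~20
print(round(nnt(0.084, 0.052)))  # new non-vertebral fracture: ~31
```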

Oral risedronate reduces the risk of vertebral and non-vertebral fractures in patients with osteoporosis while increasing bone mineral density. Overall, this was a large, well-designed RCT that demonstrated a concrete treatment benefit. As a result, oral bisphosphonate therapy has become the standard of care both for treatment and prevention of osteoporosis. This study, as well as others, demonstrated that such therapies are well-tolerated with relatively few side effects. A notable strength of this study is that it did not exclude patients with GI comorbidities.  One weakness is the modification of the trial protocol to eliminate the risedronate 2.5mg treatment arm after 1 year of study. Although this arm demonstrated a reduction in vertebral fracture at 1 year relative to placebo (p = 0.02), its elimination raises suspicion that the pre-specified analyses were not yielding the anticipated results during the interim analysis and thus the less-impressive treatment arm was discarded.

Further Reading/References:
1. Weekly alendronate vs. weekly risedronate [https://www.ncbi.nlm.nih.gov/pubmed/15619680]
2. Comparative effectiveness of pharmacologic treatments to prevent fractures: an updated systematic review (2014) [https://www.ncbi.nlm.nih.gov/pubmed/25199883]

Summary by Duncan F. Moore, MD

Image Credit: Nick Smith, CC BY-SA 3.0, via Wikimedia Commons

Week 11 – Varenicline vs. Bupropion and Placebo for Smoking Cessation

“Varenicline, an α4β2 Nicotinic Acetylcholine Receptor Partial Agonist, vs Sustained-Release Bupropion and Placebo for Smoking Cessation”

JAMA. 2006 Jul 5;296(1):47-55. [free full text]

Assisting our patients in smoking cessation is a fundamental aspect of outpatient internal medicine. At the time of this trial, the only approved pharmacotherapies for smoking cessation were nicotine replacement therapy and bupropion. As the α4β2 nicotinic acetylcholine receptor (nAChR) was thought to be crucial to the reinforcing effects of nicotine, it was hypothesized that a partial agonist at this receptor could satiate cravings and minimize withdrawal symptoms while also limiting the reinforcing effects of exogenous nicotine. Thus Pfizer designed this large phase 3 trial to test the efficacy of its new α4β2 nAChR partial agonist varenicline (Chantix) against the only other non-nicotine pharmacotherapy at the time (bupropion) as well as placebo.

The trial enrolled adult smokers (10+ cigarettes per day) with fewer than three months of smoking abstinence in the past year (notable exclusion criteria included numerous psychiatric and substance use comorbidities). Patients were randomized to 12 weeks of treatment with either varenicline uptitrated by day 8 to 1mg BID, bupropion SR uptitrated by day 4 to 150mg BID, or placebo BID. Patients were also given a smoking cessation self-help booklet at the index visit and encouraged to set a quit date of day 8. Patients were followed at weekly clinic visits for the first 12 weeks (treatment duration) and then a mixture of clinic and phone visits for weeks 13-52. Non-smoking status during follow-up was determined by patient self-report combined with exhaled carbon monoxide < 10ppm. The primary endpoint was the 4-week continuous abstinence rate for study weeks 9-12 (as confirmed by exhaled CO level). Secondary endpoints included the continuous abstinence rate for weeks 9-24 and for weeks 9-52.

1025 patients were randomized. Compliance was similar among the three groups and the median duration of treatment was 84 days. Loss to follow-up was similar among the three groups. CO-confirmed continuous abstinence during weeks 9-12 was 44.0% among the varenicline group vs. 17.7% among the placebo group (OR 3.85, 95% CI 2.70–5.50, p < 0.001) vs. 29.5% among the bupropion group (OR for varenicline vs. bupropion 1.93, 95% CI 1.40–2.68, p < 0.001). (OR for bupropion vs. placebo was 2.00, 95% CI 1.38–2.89, p < 0.001.) Continuous abstinence for weeks 9-24 was 29.5% among the varenicline group vs. 10.5% among the placebo group (p < 0.001) vs. 20.7% among the bupropion group (p = 0.007). Continuous abstinence rates for weeks 9-52 were 21.9% among the varenicline group vs. 8.4% among the placebo group (p < 0.001) vs. 16.1% among the bupropion group (p = 0.057). Subgroup analysis of the primary outcome did not yield significant differences in drug efficacy by sex.
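
For orientation, an odds ratio can be computed from two event proportions as sketched below; note that the ORs quoted above are the trial's own estimates, so this crude calculation is illustrative only and does not reproduce them exactly:

```python
def crude_odds_ratio(p1, p0):
    """Crude odds ratio comparing event proportion p1 against p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Weeks 9-12 continuous abstinence: varenicline 44.0% vs. placebo 17.7%
print(round(crude_odds_ratio(0.440, 0.177), 2))  # ~3.65 (trial reported OR 3.85)
```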

This study demonstrated that varenicline was superior to both placebo and bupropion in facilitating smoking cessation at up to 24 weeks. At greater than 24 weeks, varenicline remained superior to placebo but was similar in efficacy to bupropion. This was a well-designed and executed large, double-blind, placebo- and active-treatment-controlled multicenter US trial. The trial was completed in April 2005 and a new drug application for varenicline (Chantix) was submitted to the FDA in November 2005. Of note, an “identically designed” (per this study’s authors), manufacturer-sponsored phase 3 trial was performed in parallel and reported very similar results in the same July 2006 issue of JAMA (PMID: 16820547) as the above study by Gonzales et al. These robust, positive-outcome pre-approval trials of varenicline helped the drug rapidly obtain approval in May 2006.

Per expert opinion at UpToDate, varenicline remains a preferred first-line pharmacotherapy for smoking cessation. Bupropion is a suitable, though generally less efficacious, alternative, particularly when the patient has comorbid depression. Per UpToDate, the recent (2016) EAGLES trial demonstrated that “in contrast to earlier concerns, varenicline and bupropion have no higher risk of associated adverse psychiatric effects than [nicotine replacement therapy] in smokers with comorbid psychiatric disorders.”

Further Reading/References:
1. This trial @ ClinicalTrials.gov
2. Sister trial: “Efficacy of varenicline, an alpha4beta2 nicotinic acetylcholine receptor partial agonist, vs placebo or sustained-release bupropion for smoking cessation: a randomized controlled trial.” JAMA. 2006 Jul 5;296(1):56-63.
3. Chantix FDA Approval Letter 5/10/2006
4. Rigotti NA. Pharmacotherapy for smoking cessation in adults. Post TW, ed. UpToDate. Waltham, MA: UpToDate Inc.
5. “Neuropsychiatric safety and efficacy of varenicline, bupropion, and nicotine patch in smokers with and without psychiatric disorders (EAGLES): a double-blind, randomised, placebo-controlled clinical trial.” Lancet. 2016 Jun 18;387(10037):2507-20.
6. 2 Minute Medicine: “Varenicline and bupropion more effective than varenicline alone for tobacco abstinence”
7. 2 Minute Medicine: “Varenicline safe for smoking cessation in patients with stable major depressive disorder”

Summary by Duncan F. Moore, MD

Image Credit: Сергей Фатеев, CC BY-SA 3.0, via Wikimedia Commons

Week 8 – 4S

“Randomised trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S)”

Lancet. 1994 Nov 19;344(8934):1383-9 [free full text]

Statins are an integral part of modern primary and secondary prevention of atherosclerotic cardiovascular disease (ASCVD). Hypercholesterolemia is regarded as a major contributory factor to the development of atherosclerosis, and in the 1980s, a handful of clinical trials demonstrated reduction in MI/CAD incidence with cholesterol-lowering agents, such as cholestyramine and gemfibrozil. However, neither drug demonstrated a mortality benefit. By the late 1980s, there was much hope that the emerging drug class of HMG-CoA reductase inhibitors (statins) would confer a mortality benefit, given their previously demonstrated LDL-lowering effects. The 1994 Scandinavian Simvastatin Survival Study was the first large clinical trial to assess this hypothesis.

4444 adults ages 35-70 with a history of angina pectoris or MI and elevated serum total cholesterol (212 – 309 mg/dL) were recruited from 94 clinical centers in Scandinavia (and in Finland, which is technically a Nordic country but not a Scandinavian country…) and randomized to treatment with either simvastatin 20mg PO qPM or placebo. Dosage was increased at 12 weeks and 6 months to target a serum total cholesterol of 124 to 201 mg/dL. (Placebo patients were randomly uptitrated as well.) The primary endpoint was all-cause mortality. The secondary endpoint was time to first “major coronary event,” which included coronary deaths, nonfatal MI, resuscitated cardiac arrest, and definite silent MI per EKG.

The study was stopped early in 1994 after an interim analysis demonstrated a significant survival benefit in the treatment arm. At a mean 5.4 years of follow-up, 256 (12%) in the placebo group versus 182 (8%) in the simvastatin group had died (RR 0.70, 95% CI 0.58-0.85, p=0.0003, NNT = 30.1). The mortality benefit was driven exclusively by a reduction in coronary deaths. Dropout rates were similar (13% of placebo group and 10% of simvastatin group). The secondary endpoint, occurrence of a major coronary event, occurred in 622 (28%) of the placebo group and 431 (19%) of the simvastatin group (RR 0.66, 95% CI 0.59-0.75, p < 0.00001). Subgroup analyses of women and patients aged 60+ demonstrated similar findings for the primary and secondary outcomes. Over the entire course of the study, the average changes in lipid values from baseline in the simvastatin group were -25% total cholesterol, -35% LDL, +8% HDL, and -10% triglycerides. The corresponding percent changes from baseline in the placebo group were +1%, +1%, +1%, and +7%, respectively.

In conclusion, simvastatin therapy reduced mortality in patients with known CAD and hypercholesterolemia via reduction of major coronary events. This was a large, well-designed, double-blind RCT that ushered in the era of widespread statin use for secondary, and eventually, primary prevention of ASCVD. For further information about modern guidelines for the use of statins, please see the 2018 “ACC/AHA Multisociety Guideline on the Management of Blood Cholesterol” and the 2016 USPSTF guideline “Statin use for the Primary Prevention of Cardiovascular Disease in Adults: Preventive Medication”.

Finally, for history buffs interested in a brief history of the discovery and development of this drug class, please see this paper by Akira Endo.

References / Additional Reading:
1. 4S @ Wiki JournalClub
2. “2018 ACC/AHA Multisociety Guideline on the Management of Blood Cholesterol”
3. “Statin use for the Primary Prevention of Cardiovascular Disease in Adults: Preventive Medication” (2016)
4. UpToDate, “Society guideline links: Lipid disorders in adults”
5. “A historical perspective on the discovery of statins” (2010)

Summary by Duncan F. Moore, MD

Image Credit: Siol, CC BY-SA 3.0, via Wikimedia Commons

Week 7 – FUO

“Fever of Unexplained Origin: Report on 100 Cases”

Medicine (Baltimore). 1961 Feb;40:1-30. [free full text]

In our modern usage, fever of unknown origin (FUO) refers to a persistent unexplained fever despite an adequate medical workup. The most commonly used criteria for this diagnosis stem from this 1961 series by Petersdorf and Beeson.

This study analyzed a prospective cohort of patients evaluated at Yale’s hospital for FUO between 1952 and 1957. Their FUO criteria: 1) illness of more than three weeks’ duration, 2) fever higher than 101 °F on several occasions, and 3) diagnosis uncertain after one week of study in hospital. After 126 cases had been noted, retrospective investigation was undertaken to determine the ultimate etiologies of the fevers. The authors winnowed this group to 100 cases based on availability of follow-up data and the exclusion of cases that “represented combinations of such common entities as urinary tract infection and thrombophlebitis.”

In 93 cases, “a reasonably certain diagnosis was eventually possible.” 6 of the 7 undiagnosed patients ultimately made a full recovery. Underlying etiologies (see table 1 on page 3) included: infectious 36% (with TB in 11%), neoplastic diseases 19%, collagen disease (e.g. SLE) 13%, pulmonary embolism 3%, benign non-specific pericarditis 2%, sarcoidosis 2%, hypersensitivity reaction 4%, cranial arteritis 2%, periodic disease 5%, miscellaneous disease 4%, factitious fever 3%, no diagnosis 7%.

Clearly, diagnostic modalities have improved markedly since this 1961 study. However, the core etiologies of infection, malignancy, and connective tissue disease/non-infectious inflammatory disease remain most prominent, while the percentage of patients with no ultimate diagnosis has been increasing (for example, see PMIDs 9413425, 12742800, and 17220753). Modifications to the 1961 criteria have been proposed (for example: the 1-week hospital stay is not required if certain diagnostic measures have been performed) and implemented in recent FUO trials. One modern definition of FUO: fever ≥ 38.3 °C, lasting at least 2-3 weeks, with no identified cause after three days of hospital evaluation or three outpatient visits. Per UpToDate, the following minimum diagnostic workup is recommended in suspected FUO: blood cultures, ESR or CRP, LDH, HIV, RF, heterophile antibody test, CK, ANA, TB testing, SPEP, and CT of abdomen and chest.

Further Reading/References:
1. “Fever of unknown origin (FUO). I A. prospective multicenter study of 167 patients with FUO, using fixed epidemiologic entry criteria. The Netherlands FUO Study Group.” Medicine (Baltimore). 1997 Nov;76(6):392-400.
2. “From prolonged febrile illness to fever of unknown origin: the challenge continues.” Arch Intern Med. 2003 May 12;163(9):1033-41.
3. “A prospective multicenter study on fever of unknown origin: the yield of a structured diagnostic protocol.” Medicine (Baltimore). 2007 Jan;86(1):26-38.
4. UpToDate, “Approach to the Adult with Fever of Unknown Origin”
5. “Robert Petersdorf, 80, Major Force in U.S. Medicine, Dies” The New York Times, 2006

Summary by Duncan F. Moore, MD

Week 6 – Bicarbonate and Progression of CKD

“Bicarbonate Supplementation Slows Progression of CKD and Improves Nutritional Status”

J Am Soc Nephrol. 2009 Sep;20(9):2075-84. [free full text]

Metabolic acidosis is a common complication of advanced CKD. Some animal models of CKD have suggested that worsening metabolic acidosis is associated with worsening proteinuria, tubulointerstitial fibrosis, and acceleration of decline of renal function. Short-term human studies have demonstrated that bicarbonate administration reduces protein catabolism and that metabolic acidosis is an independent risk factor for acceleration of decline of renal function. However, until this 2009 study by de Brito-Ashurst et al., there were no long-term studies demonstrating the beneficial effects of oral bicarbonate administration on CKD progression and nutritional status.

The study enrolled CKD patients with CrCl 15-30ml/min and plasma bicarbonate 16-20 mEq/L and randomized them to treatment with either sodium bicarbonate 600mg PO TID (with protocolized uptitration to achieve plasma HCO3  ≥ 23 mEq/L) for 2 years, or to routine care. The primary outcomes were: 1) the decline in CrCl at 2 years, 2) “rapid progression of renal failure” (defined as decline of CrCl > 3 ml/min per year), and 3) development of ESRD requiring dialysis. Secondary outcomes included 1) change in dietary protein intake, 2) change in normalized protein nitrogen appearance (nPNA), 3) change in serum albumin, and 4) change in mid-arm muscle circumference.

134 patients were randomized, and baseline characteristics were similar between the two groups. Serum bicarbonate levels increased significantly in the treatment arm. (See Figure 2.) At two years, CrCl decline was 1.88 ml/min in the treatment group vs. 5.93 ml/min in the control group (p < 0.01). Rapid progression of renal failure was noted in 9% of the intervention group vs. 45% of the control group (RR 0.15, 95% CI 0.06–0.40, p < 0.0001, NNT = 2.8), and ESRD developed in 6.5% of the intervention group vs. 33% of the control group (RR 0.13, 95% CI 0.04–0.40, p < 0.001; NNT = 3.8). Regarding nutritional status, dietary protein intake increased in the treatment group relative to the control group (p < 0.007). Normalized protein nitrogen appearance decreased in the treatment group and increased in the control group (p < 0.002). Serum albumin increased in the treatment group but was unchanged in the control group, and mean mid-arm muscle circumference increased by 1.5 cm in the intervention group vs. no change in the control group (p < 0.03).

In conclusion, oral bicarbonate supplementation in CKD patients with metabolic acidosis reduces the rate of CrCl decline and progression to ESRD and improves nutritional status. Primarily on the basis of this study, the KDIGO 2012 guidelines for the management of CKD recommend oral bicarbonate supplementation to maintain serum bicarbonate within the normal range (23-29 mEq/L). This is a remarkably cheap and effective intervention. Importantly, the rates of adverse events, particularly worsening hypertension and increasing edema, did not differ between the two groups. Of note, sodium bicarbonate induces much less volume expansion than a comparable sodium load of sodium chloride.

In their discussion, the authors suggest that their results support the hypothesis of Nath et al. (1985) that “compensatory changes [in the setting of metabolic acidosis] such as increased ammonia production and the resultant complement cascade activation in remnant tubules in the declining renal mass [are] injurious to the tubulointerstitium.” The hypercatabolic state of advanced CKD appears to be mitigated by bicarbonate supplementation. The authors note that “an optimum nutritional status has positive implications on the clinical outcomes of dialysis patients, whereas [protein-energy wasting] is associated with increased morbidity and mortality.”

Limitations of this trial include its open-label design without a placebo control. In addition, the applicable population is limited by the study’s exclusion criteria of morbid obesity, overt CHF, and uncontrolled HTN.

Further Reading:
1. Nath et al. “Pathophysiology of chronic tubulo-interstitial disease in rats: Interactions of dietary acid load, ammonia, and complement component-C3” (1985)
2. KDIGO 2012 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease (see page 89)
3. UpToDate, “Pathogenesis, consequences, and treatment of metabolic acidosis in chronic kidney disease”

Summary by Duncan F. Moore, MD

Week 42 – BeSt

“Clinical and Radiographic Outcomes of Four Different Treatment Strategies in Patients with Early Rheumatoid Arthritis (the BeSt Study).”

Arthritis & Rheumatism. 2005 Nov;52(11):3381-3390. [free full text]

Rheumatoid arthritis (RA) is among the most prevalent of the rheumatic diseases, with a lifetime risk of 3.6% in women and 1.7% in men [1]. It is a chronic, systemic, inflammatory autoimmune disease of variable clinical course that can severely impair physical function and even increase mortality. Over the past 30 years, as the armamentarium of therapies for RA has exploded, there has been increased debate about the ideal initial therapy. The BeSt (Dutch: Behandel-Strategieën, “treatment strategies”) trial was designed to compare, according to the authors, four of “the most frequently used and discussed strategies.” Regimens incorporating traditional disease-modifying antirheumatic drugs (DMARDs), such as methotrexate, and newer therapies, such as TNF-alpha inhibitors, were compared directly.

The trial enrolled 508 DMARD-naïve patients with early rheumatoid arthritis. Pertinent exclusion criteria included history of cancer and pre-existing laboratory abnormalities or comorbidities (e.g. elevated creatinine or ALT, alcohol abuse, pregnancy or desire to conceive, etc.) that would preclude the use of various DMARDs. Patients were randomized to one of four treatment groups. Within each regimen, the Disease Activity Score in 44 joints (DAS-44) was assessed q3 months, and, if > 2.4, the medication regimen was uptitrated to the next step within the treatment group.

Four Treatment Groups

  1. Sequential monotherapy: methotrexate (MTX) 15mg/week, uptitrated PRN to 25-30mg/week. If insufficient control, the following sequence was pursued: sulfasalazine (SSZ) monotherapy, leflunomide monotherapy, MTX + infliximab, gold with methylprednisolone, MTX + cyclosporin A (CSA) + prednisone
  2. Step-up combination therapy: MTX 15mg/week, uptitrated PRN to 25-30mg/week. If insufficient control, SSZ was added, followed by hydroxychloroquine (HCQ), followed by prednisone. If patients failed to respond to those four drugs, they were switched to MTX + infliximab, then MTX + CSA + prednisone, and finally to leflunomide.
  3. Initial combination therapy with tapered high-dose prednisone: MTX 7.5 mg/week + SSZ 2000 mg/day + prednisone 60mg/day (tapered in 7 weeks to 7.5 mg/day). If insufficient control, MTX was uptitrated to 25-30 mg/week. Next, the combination was switched to MTX + CSA + prednisone, then MTX + infliximab, then leflunomide monotherapy, then gold with methylprednisolone, and finally azathioprine with prednisone.
  4. Initial combination therapy with infliximab: MTX 25-30 mg/week + infliximab 3 mg/kg at weeks 0, 2, 6, and q8 weeks thereafter. There was a protocol for infliximab-dose uptitration starting at 3 months. If insufficient control on MTX and infliximab 10 mg/kg, patients were switched to SSZ, then leflunomide, then MTX + CSA + prednisone, then gold + methylprednisolone, and finally AZA with prednisone.

Once clinical response was adequate for at least 6 months, there was a protocol for tapering the drug regimen.

The primary endpoints were: 1) functional ability per the Dutch version of the Health Assessment Questionnaire (D-HAQ), collected by a blinded research nurse q3 months and 2) radiographic joint damage per the modified Sharp/Van der Heijde score (SHS). Pertinent secondary outcomes included DAS-44 score and laboratory evidence of treatment toxicity.

At randomization, enrolled RA patients had a median symptom duration of 23 weeks and a median of 2 weeks since RA diagnosis. Mean DAS-44 was 4.4 ± 0.9. 72% of patients had erosive disease. Mean D-HAQ score at 3 months was 1.0 in groups 1 and 2 and 0.6 in groups 3 and 4 (p < 0.001 for groups 1 and 2 vs. groups 3 and 4; paired tests otherwise insignificant). Mean D-HAQ at 1 year was 0.7 in groups 1 and 2 and 0.5 in groups 3 and 4 (p = 0.010 for group 1 vs. group 3, p = 0.003 for group 1 vs. group 4; paired tests otherwise insignificant). At 1 year, patients in group 3 or 4 had less radiographic progression in joint damage per SHS than patients in group 1 or 2. Median increases in SHS were 2.0, 2.5, 1.0, and 0.5 in groups 1-4, respectively (p = 0.003 for group 1 vs. group 3, p < 0.001 for group 1 vs. group 4, p = 0.007 for group 2 vs. group 3, p < 0.001 for group 2 vs. group 4). Regarding DAS-44 score: low disease activity (DAS-44 ≤ 2.4) at 1 year was reached in 53%, 64%, 71%, and 74% of groups 1-4, respectively (p = 0.004 for group 1 vs. group 3, p = 0.001 for group 1 vs. group 4, p not significant for other comparisons). There were no group differences in prevalence of adverse effects.

Overall, among patients with early RA, initial combination therapy that included either prednisone (group 3) or infliximab (group 4) resulted in better functional and radiographic improvement than did initial therapy with sequential monotherapy (group 1) or step-up combination therapy (group 2). In the discussion, the authors note that given the treatment group differences in radiographic progression of disease, “starting therapy with a single DMARD would be a missed opportunity in a considerable number of patients.” Contemporary commentary by Weisman notes that “the authors describe both an argument and a counterargument arising from their observations: aggressive treatment with combinations of expensive drugs would ‘overtreat’ a large proportion of patients, yet early suppression of disease activity may have an important influence on subsequent long‐term disability and damage.”

Fourteen years later, it is a bit difficult to place the specific results of this trial in our current practice. The trial design is absolutely byzantine and compares the 1-year experience of a variety of complex protocols that have substantial potential for eventual overlap. Furthermore, it is difficult to assess whether the relatively small group differences in symptom (D-HAQ) and radiographic (SHS) scales were truly clinically significant even if they were statistically significant. The American College of Rheumatology 2015 Guideline for the Treatment of Rheumatoid Arthritis synthesized the immense body of literature that came before and after the BeSt study and ultimately gave a variety of conditional statements about the “best practice” treatment of symptomatic early RA. (See Table 2 on page 8.) The recommendations emphasized DMARD monotherapy as the initial strategy but in the specific setting of a treat-to-target strategy. They also recommended escalation to combination DMARDs or biologics in patients with moderate or high disease activity despite DMARD monotherapy.

References / Additional Reading:
1. “The lifetime risk of adult-onset rheumatoid arthritis and other inflammatory autoimmune rheumatic diseases.” Arthritis Rheum. 2011 Mar;63(3):633-9. [https://www.ncbi.nlm.nih.gov/pubmed/21360492]
2. BeSt @ Wiki Journal Club
3. “Progress toward the cure of rheumatoid arthritis? The BeSt study.” Arthritis Rheum. 2005 Nov;52(11):3326-32.
4. “Review: treat to target in rheumatoid arthritis: fact, fiction, or hypothesis?” Arthritis Rheumatol. 2014 Apr;66(4):775-82. [https://www.ncbi.nlm.nih.gov/pubmed/24757129]
5. “2015 American College of Rheumatology Guideline for the Treatment of Rheumatoid Arthritis” Arthritis Rheumatol. 2016 Jan;68(1):1-26
6. RheumDAS calculator

Summary by Duncan F. Moore, MD

Image Credit: Braegel, CC BY 3.0, via Wikimedia Commons

Week 39 – POISE

“Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery: a randomised controlled trial”

Lancet. 2008 May 31;371(9627):1839-47. [free full text]

Non-cardiac surgery is commonly associated with major cardiovascular complications. It has been hypothesized that perioperative beta blockade would reduce such events by attenuating the effects of the intraoperative increases in catecholamine levels. Prior to the 2008 POISE trial, small- and moderate-sized trials had revealed inconsistent results, alternately demonstrating benefit and non-benefit with perioperative beta blockade. The POISE trial was a large RCT designed to assess the benefit of extended-release metoprolol succinate (vs. placebo) in reducing major cardiovascular events in patients of elevated cardiovascular risk.

The trial enrolled patients age 45+ undergoing non-cardiac surgery with estimated LOS 24+ hrs and elevated risk of cardiac disease, meaning: either 1) hx of CAD, 2) peripheral vascular disease, 3) hospitalization for CHF within past 3 years, 4) undergoing major vascular surgery, 5) or any three of the following seven risk criteria: undergoing intrathoracic or intraperitoneal surgery, hx CHF, hx TIA, hx DM, Cr > 2.0, age 70+, or undergoing urgent/emergent surgery.

Notable exclusion criteria: HR < 50, 2nd or 3rd degree heart block, asthma, already on beta blocker, prior intolerance of beta blocker, hx CABG within 5 years and no cardiac ischemia since

Intervention: metoprolol succinate (extended-release) 100mg PO starting 2-4 hrs before surgery, additional 100mg at 6-12 hrs postoperatively, followed by 200mg daily for 30 days. (Patients unable to take PO meds postoperatively were given metoprolol infusion.)

Comparison: placebo PO / IV at same frequency as metoprolol arm

Outcome:
Primary – composite of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest at 30 days

Secondary (at 30 days)

        • cardiovascular death
        • non-fatal MI
        • non-fatal cardiac arrest
        • all-cause mortality
        • non-cardiovascular death
        • MI
        • cardiac revascularization
        • stroke
        • non-fatal stroke
        • CHF
        • new, clinically significant atrial fibrillation
        • clinically significant hypotension
        • clinically significant bradycardia

Pre-specified subgroup analyses of the primary outcome included RCRI score, sex, type of surgery, and type of anesthesia (see Results below).

Results:
9298 patients were randomized. However, fraudulent activity was detected at participating sites in Iran and Colombia, and thus 947 patients from these sites were excluded from the final analyses. Ultimately, 4174 were randomized to the metoprolol group, and 4177 were randomized to the placebo group. There were no significant differences in baseline characteristics, pre-operative cardiac medications, surgery type, or anesthesia type between the two groups (see Table 1).

Regarding the primary outcome, metoprolol patients were less likely than placebo patients to experience the primary composite endpoint of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest (HR 0.84, 95% CI 0.70-0.99, p = 0.0399). See Figure 2A for the relevant Kaplan-Meier curve. Note that the curves separate distinctly within the first several days.

Regarding selected secondary outcomes (see Table 3 for full list), metoprolol patients were more likely to die from any cause (HR 1.33, 95% CI 1.03-1.74, p = 0.0317). See Figure 2D for the Kaplan-Meier curve for all-cause mortality. Note that the curves start to separate around day 10. Cause of death was analyzed, and the only group difference in attributable cause was an increased number of deaths due to sepsis or infection in the metoprolol group (data not shown). Metoprolol patients were more likely to sustain a stroke (HR 2.17, 95% CI 1.26-3.74, p = 0.0053) or a non-fatal stroke (HR 1.94, 95% CI 1.01-3.69, p = 0.0450). Of all patients who sustained a non-fatal stroke, only 15-20% made a full recovery. Metoprolol patients were less likely to sustain new-onset atrial fibrillation (HR 0.76, 95% CI 0.58-0.99, p = 0.0435) and less likely to sustain a non-fatal MI (HR 0.70, 95% CI 0.57-0.86, p = 0.0008). There were no group differences in risk of cardiovascular death or non-fatal cardiac arrest. Metoprolol patients were more likely to sustain clinically significant hypotension (HR 1.55, 95% CI 1.38-1.74, p < 0.0001) and clinically significant bradycardia (HR 2.74, 95% CI 2.19-3.43, p < 0.0001).

Subgroup analysis did not reveal any significant interaction with the primary outcome by RCRI, sex, type of surgery, or anesthesia type.

Implication/Discussion:
In patients with cardiovascular risk factors undergoing non-cardiac surgery, the perioperative initiation of beta blockade decreased the composite risk of cardiovascular death, non-fatal MI, and non-fatal cardiac arrest and increased the overall mortality risk and risk of stroke.

This study affirms its central hypothesis – that blunting the catecholamine surge of surgery is beneficial from a cardiac standpoint. (Most patients in this study had an RCRI of 1 or 2.) However, the attendant increase in all-cause mortality is dramatic. The increased mortality is thought to result from delayed recognition of sepsis due to masking of tachycardia. Beta blockade may also limit the physiologic hemodynamic response necessary to successfully fight a serious infection. In retrospective analyses mentioned in the discussion, the investigators state that they cannot fully explain the increased risk of stroke in the metoprolol group. However, hypotension attributable to beta blockade explains about half of the increased number of strokes.

Overall, the authors conclude that “patients are unlikely to accept the risks associated with perioperative extended-release metoprolol.”

A major limitation of this study is that 10% of enrolled patients were excluded from the analysis due to fraudulent activity at selected investigation sites. In terms of generalizability, it is important to remember that POISE excluded patients who were already on beta blockers.

Currently, per expert opinion at UpToDate, it is not recommended to initiate beta blockers preoperatively in order to improve perioperative outcomes. POISE is an important piece of evidence underpinning the 2014 ACC/AHA Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery, which includes the following recommendations regarding beta blockers:

      • Beta blocker therapy should not be started on the day of surgery (Class III – Harm, Level B)
      • Continue beta blockers in patients who are on beta blockers chronically (Class I, Level B)
      • In patients with intermediate- or high-risk preoperative tests, it may be reasonable to begin beta blockers
      • In patients with ≥ 3 RCRI risk factors, it may be reasonable to begin beta blockers before surgery
      • Initiating beta blockers in the perioperative setting as an approach to reduce perioperative risk is of uncertain benefit in those with a long-term indication but no other RCRI risk factors
      • It may be reasonable to begin perioperative beta blockers long enough in advance to assess safety and tolerability, preferably > 1 day before surgery

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Management of cardiac risk for noncardiac surgery”
4. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.

Image Credit: Mark Oniffrey, CC BY-SA 4.0, via Wikimedia Commons

Summary by Duncan F. Moore, MD