Week 51 – LOTT

“A Randomized Trial of Long-Term Oxygen for COPD with Moderate Desaturation”

by the Long-Term Oxygen Treatment Trial (LOTT) Research Group

N Engl J Med. 2016 Oct 27;375(17):1617-1627. [free full text]

The long-term treatment of severe resting hypoxemia (SpO2 < 89%) in COPD with supplemental oxygen has been a cornerstone of modern outpatient COPD management since its mortality benefit was demonstrated circa 1980. Subsequently, the utility of supplemental oxygen in COPD patients with moderate resting daytime hypoxemia (SpO2 89-93%) was investigated in trials in the 1990s; however, such trials were underpowered to assess mortality benefit. Ultimately, the LOTT trial was funded by the NIH and Centers for Medicare and Medicaid Services (CMS) primarily to determine whether there was a mortality benefit to supplemental oxygen in COPD patients with moderate hypoxemia, as well as to analyze numerous other secondary outcomes, such as hospitalization rates and exercise performance.

The LOTT trial was originally planned to enroll 3500 patients. However, after 7 months the trial had randomized only 34 patients, and mortality had been lower than anticipated. Thus, in late 2009, the trial was redesigned with broader inclusion criteria (patients with exercise-induced hypoxemia could now qualify), and the primary endpoint was broadened from mortality alone to a composite of time to first hospitalization or death.

The revised LOTT trial enrolled COPD patients with moderate resting hypoxemia (SpO2 89-93%) or moderate exercise-induced desaturation during the 6-minute walk test (SpO2 ≥ 80% for ≥ 5 minutes and < 90% for ≥ 10 seconds). Patients were randomized to either supplemental oxygen (24-hour oxygen for patients with resting SpO2 89-93%, or oxygen during sleep and exercise only for patients who desaturated only with exercise) or to usual care without supplemental oxygen. The supplemental oxygen flow rate was 2 liters per minute and could be uptitrated by protocol among patients with exercise-induced hypoxemia. The primary outcome was time to the composite of first hospitalization or death. Secondary outcomes included hospitalization rates, lung function, performance on the 6-minute walk test, and quality of life.

368 patients were randomized to the supplemental-oxygen group and 370 to the no-supplemental-oxygen group. Of the supplemental-oxygen group, 220 patients were prescribed 24-hour oxygen support, and 148 were prescribed oxygen for use during exercise and sleep only. Median duration of follow-up was 18.4 months. Regarding the primary outcome, there was no group difference in time to death or first hospitalization (p = 0.52 by log-rank test). See Figure 1A. Furthermore, there were no treatment-group differences in the primary outcome among patients of the following pre-specified subgroups: type of oxygen prescription, “desaturation profile,” race, sex, smoking status, SpO2 nadir during 6-minute walk, FEV1, BODE index, SF-36 physical-component score, BMI, or history of anemia. Patients with a COPD exacerbation within 1-2 months of enrollment, patients age 71+ at enrollment, and patients with lower Quality of Well-Being Scale scores at enrollment all demonstrated benefit from supplemental O2, but none of these subgroup treatment effects were sustained when the analyses were adjusted for multiple comparisons. Regarding secondary outcomes, there were no treatment-group differences in rates of all-cause hospitalizations, COPD-related hospitalizations, or non-COPD-related hospitalizations, and there were no differences in change from baseline measures of quality of life, anxiety, depression, lung function, or distance achieved in the 6-minute walk.

The LOTT trial presents compelling evidence that there is no significant benefit, mortality or otherwise, of oxygen supplementation in patients with COPD and either moderate resting hypoxemia (SpO2 89-93%) or exercise-induced hypoxemia. Although the trial underwent a substantial redesign early in its course, it remains our best evidence to date about the benefit (or lack thereof) of oxygen in this patient group. As acknowledged by the authors, the trial may have had significant selection bias in referral. (Many physicians did not refer specific patients for enrollment because “they were too ill or [were believed to have benefited] from oxygen.”) Another notable limitation of this study is that nocturnal oxygen saturation was not evaluated. The authors do note that “some patients with COPD and severe nocturnal desaturation might benefit from nocturnal oxygen supplementation.”

For further contemporary contextualization of the study, please see the excellent post at PulmCCM from 11/2016. Included in that post is a link to an overview and Q&A from the NIH regarding the LOTT study.

References / Additional Reading:
1. PulmCCM, “Long-term oxygen brought no benefits for moderate hypoxemia in COPD”
2. LOTT @ 2 Minute Medicine
3. LOTT @ ClinicalTrials.gov
4. McDonald, J.H. 2014. Handbook of Biological Statistics (3rd ed.). Sparky House Publishing, Baltimore, Maryland.
5. Centers for Medicare and Medicaid Services, “Certificate of Medical Necessity CMS-484– Oxygen”
6. Ann Am Thorac Soc. 2018 Dec;15(12):1369-1381. “Optimizing Home Oxygen Therapy. An Official American Thoracic Society Workshop Report.”

Summary by Duncan F. Moore, MD

Image Credit: Patrick McAleer, CC BY-SA 2.0 UK, via Wikimedia Commons

Week 46 – COURAGE

“Optimal Medical Therapy with or without PCI for Stable Coronary Disease”

by the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) Trial Research Group

N Engl J Med. 2007 Apr 12;356(15):1503-16 [free full text]

The optimal medical management of stable coronary artery disease has been well-described. However, prior to the 2007 COURAGE trial, the role of percutaneous coronary intervention (PCI) in the initial management of stable coronary artery disease was unclear. It was known that PCI improved angina symptoms and short-term exercise performance in stable disease, but its mortality benefit and reduction of future myocardial infarction and ACS were unknown.

The trial recruited patients with stable coronary artery disease. (See the paper for full inclusion/exclusion criteria. Disease had to be sufficiently and objectively severe, but not too severe, and symptoms could not be sustained at the highest CCS grade.) Patients were randomized to either optimal medical management (including antiplatelet, anti-anginal, ACEi/ARB, and cholesterol-lowering therapy) plus PCI or to optimal medical management alone. The primary outcome was a composite of all-cause mortality and non-fatal MI.

2287 patients were randomized. Both groups had similar baseline characteristics, with the exception of a higher prevalence of proximal LAD disease in the medical-therapy group. Median duration of follow-up was 4.6 years in both groups. Death or non-fatal MI occurred in 18.4% of the PCI group and in 17.8% of the medical-therapy group (p = 0.62). Death, non-fatal MI, or stroke occurred in 20.0% of the PCI group and 19.5% of the medical-therapy group (p = 0.62). Hospitalization for ACS occurred in 12.4% of the PCI group and 11.8% of the medical-therapy group (p = 0.56). Revascularization during follow-up was performed in 21.1% of the PCI group but in 32.6% of the medical-therapy group (HR 0.60, 95% CI 0.51–0.71, p < 0.001). Finally, 66% of PCI patients were free of angina at 1-year follow-up compared with 58% of medical-therapy patients (p < 0.001); rates were 72% and 67% at 3 years (p = 0.02) and 72% and 74% at 5 years (not significant).

Thus, in the initial management of stable coronary artery disease, PCI in addition to optimal medical management provided no mortality benefit over optimal medical management alone. However, initial management with PCI did provide a time-limited improvement in angina symptoms.

As the authors of COURAGE nicely summarize on page 1512, the atherosclerotic plaques of ACS and stable CAD are different. Vulnerable, ACS-prone plaques have thin caps and spread outward along the wall of the coronary artery, whereas the plaques of stable CAD have thick fibrous caps and are associated with inward-directed remodeling that narrows the artery lumen (and thus causes reproducible angina symptoms and luminal narrowing on coronary angiography).

Notable limitations of this study: 1) the population was largely male and white, and 42% of patients came from VA hospitals, limiting the generalizability of the study; 2) drug-eluting stents were not clinically available until the last 6 months of the study, so most stents placed were bare metal.

Later meta-analyses were weakly suggestive of an association of PCI with improved all-cause mortality. It is thought that there may be a subset of patients with stable CAD who achieve a mortality benefit from PCI.

The 2017 ORBITA trial made headlines and engendered sustained controversy when it demonstrated in a randomized trial that, in the context of optimal medical therapy, PCI did not increase exercise time more than did a sham PCI. Take note of the relatively savage authors’ reply to commentary regarding the trial. See blog discussion here. The ORBITA-2 trial is currently underway.

Last month, the ISCHEMIA trial was published in NEJM. It demonstrated that among patients with stable CAD and moderate to severe ischemia, an initial invasive strategy did not reduce the risk of ischemic cardiovascular events or death from any cause at a median of 3.2 years follow-up.

It is important to note that all of the above discussions assume that the patient does not have specific coronary artery anatomy in which initial CABG would provide a mortality benefit (e.g. left main disease, multi-vessel disease with decreased LVEF). Finally, PCI should be considered in patients whose physical activity is limited by angina symptoms despite optimal medical therapy.

Further Reading:
1. COURAGE @ Wiki Journal Club
2. COURAGE @ 2 Minute Medicine
3. Canadian Cardiovascular Society grading of angina pectoris
4. ORBITA-2 @ ClinicalTrials.gov
5. ISCHEMIA @ ClinicalTrials.gov
6. Discussion re: ISCHEMIA trial changes @ CardioBrief
7. ISCHEMIA full text @ NEJM

Summary by Duncan F. Moore, MD

Image Credit: National Institutes of Health, US Public Domain, via Wikimedia Commons

Week 44 – SYMPLICITY HTN-3

“A Controlled Trial of Renal Denervation for Resistant Hypertension”

N Engl J Med. 2014 Apr 10;370(15):1393-401. [free full text]

Approximately 10% of patients with hypertension have resistant hypertension (SBP > 140 despite adherence to three antihypertensives of different classes at maximally tolerated doses, including a diuretic). Evidence suggests that the sympathetic nervous system plays a large role in such cases, so catheter-based radiofrequency ablation of the renal arteries (renal denervation therapy) was developed as a potential treatment for resistant HTN. The 2010 SYMPLICITY HTN-2 trial was a small (n = 106), non-blinded, randomized trial of renal denervation vs. continued care with oral antihypertensives that demonstrated a remarkable 30-mmHg greater decrease in SBP with renal denervation. Thus the 2014 SYMPLICITY HTN-3 trial was designed to evaluate the efficacy of renal denervation in a single-blinded trial with a sham-procedure control group.

The trial enrolled adults with resistant HTN with SBP ≥ 160 despite adherence to 3+ maximized antihypertensive drug classes, including a diuretic. (Pertinent exclusion criteria included secondary hypertension, renal artery stenosis > 50%, and prior renal artery intervention.) Patients were randomized to either renal denervation with the Symplicity (Medtronic) radioablation catheter or to renal angiography only (sham procedure). The primary outcome was the mean change in office systolic BP from baseline at 6 months. (The examiner was blinded to intervention.) The secondary outcome was the change in mean 24-hour ambulatory SBP at 6 months. The primary safety endpoint was a composite of death, ESRD, embolic event with end-organ damage, renal artery or other vascular complication, hypertensive crisis within 30 days, or new renal artery stenosis of > 70%.

535 patients were randomized. On average, patients were receiving five antihypertensive medications. There was no significant difference in reduction of SBP between the two groups at 6 months. ∆SBP was -14.13 ± 23.93 mmHg in the denervation group vs. -11.74 ± 25.94 mmHg in the sham-procedure group for a between-group difference of -2.39 mmHg (95% CI -6.89 to 2.12, p = 0.26 with a superiority margin of 5 mmHg). The change in 24-hour ambulatory SBP at 6 months was -6.75 ± 15.11 mmHg in the denervation group vs. -4.79 ± 17.25 mmHg in the sham-procedure group for a between-group difference of -1.96 mmHg (95% CI -4.97 to 1.06, p = 0.98 with a superiority margin of 2 mmHg). There was no significant difference in the prevalence of the composite safety endpoint at 6 months with 4.0% of the denervation group and 5.8% of the sham-procedure group reaching the endpoint (percentage-point difference of -1.9, 95% CI -6.0 to 2.2).

In patients with resistant hypertension, renal denervation therapy provided no reduction in SBP at 6-month follow-up relative to a sham procedure.

This trial was an astounding failure for Medtronic and its Symplicity renal denervation radioablation catheter. The magnitude of the difference in results between the non-blinded, no-sham-procedure SYMPLICITY HTN-2 trial and this patient-blinded, sham-procedure-controlled trial is likely a product of 1) a marked placebo effect of procedural intervention, 2) Hawthorne effect in the non-blinded trial, and 3) regression toward the mean (patients were enrolled based on unusually high BP readings that over the course of the trial declined to reflect a lower true baseline).

Currently, there is no role for renal denervation therapy in the treatment of resistant HTN. However, despite the results of SYMPLICITY HTN-3, additional trials have since been conducted that assess the utility of renal denervation in patients with HTN not classified as resistant. SPYRAL HTN-ON MED demonstrated a benefit of renal denervation beyond that of a sham procedure (a 7.4 mmHg greater reduction in SBP on 24-hour ambulatory monitoring) in the continued presence of baseline antihypertensives. RADIANCE-HTN SOLO demonstrated a 6.3 mmHg greater reduction in daytime ambulatory SBP among ablated patients than among sham-treatment patients, notably after a 4-week discontinuation of up to two home antihypertensives. However, despite these two recent trials, the standard of care for the treatment of non-resistant HTN remains our affordable and safe default of multiple pharmacologic agents as well as lifestyle interventions.

Further Reading/References:
1. NephJC, SYMPLICITY HTN-3
2. UpToDate, “Treatment of resistant hypertension,” heading “Renal nerve denervation”

Summary by Duncan F. Moore, MD

Week 41 – HAS-BLED

“A Novel User-Friendly Score (HAS-BLED) To Assess 1-Year Risk of Major Bleeding in Patients with Atrial Fibrillation”

Chest. 2010 Nov;138(5):1093-100. [free full text]

Atrial fibrillation (AF) is a well-known risk factor for ischemic stroke. Stroke risk is further increased by individual comorbidities, such as CHF, HTN, and DM, and can be stratified with scores such as CHADS2 and CHA2DS2-VASc. Oral anticoagulation (OAC) is recommended for patients at intermediate stroke risk or higher. However, stroke risk often tracks closely with bleeding risk, and the stroke-prevention benefits of anticoagulation must be weighed against the added risk of bleeding. At the time of this study, there were no validated and user-friendly bleeding risk-stratification schemes. This study aimed to develop a practical risk score to estimate the 1-year risk of major bleeding (as defined in the study) in a contemporary, real-world cohort of patients with AF.

The study enrolled adults with an EKG- or Holter-proven diagnosis of AF. (Patients with mitral valve stenosis or previous valvular surgery were excluded.) This was a retrospective cohort study; no intervention was performed.

In a derivation cohort, the authors retrospectively performed univariate analyses to identify clinical features associated with major bleeding (p < 0.10). Based on systematic reviews, they added further established risk factors for major bleeding. The result was a comprehensive list of risk factors summarized by the acronym HAS-BLED:

H – Hypertension (> 160 mmHg systolic)

A – Abnormal renal function (hemodialysis, transplant, or Cr > 2.26 mg/dL) or abnormal liver function (cirrhosis, or bilirubin > 2x normal with AST/ALT/ALP > 3x normal) – 1 pt each for abnormal renal or liver function

S – Stroke

B – Bleeding (prior major bleed or predisposition to bleed)

L – Labile INRs (time in therapeutic range < 60%)

E – Elderly (age > 65)

D – Drugs (e.g. ASA, clopidogrel, NSAIDs) or alcohol use (> 8 units per week) concomitantly – 1 pt each for use of either

Each risk factor was equivalent to one point. The HAS-BLED score was then compared to the HEMORR2HAGES scheme, a prior tool for estimating bleeding risk.
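
Because each factor contributes a single point (with renal/liver function and drugs/alcohol each able to contribute two), the maximum score is 9, and the calculation is trivial to automate. Below is a minimal sketch of a calculator based on the component list above; the function and argument names are ours, not from the paper.

    # Minimal HAS-BLED calculator sketched from the component list above.
    # Function and argument names are hypothetical, not from the paper.
    def has_bled_score(
        sbp_over_160: bool,           # H: uncontrolled hypertension (SBP > 160 mmHg)
        abnormal_renal: bool,         # A: hemodialysis, transplant, or Cr > 2.26 mg/dL
        abnormal_liver: bool,         # A: cirrhosis, or bilirubin > 2x normal with AST/ALT/ALP > 3x normal
        prior_stroke: bool,           # S
        bleeding_history: bool,       # B: prior major bleed or predisposition to bleed
        labile_inr: bool,             # L: time in therapeutic range < 60%
        age_over_65: bool,            # E
        antiplatelet_or_nsaid: bool,  # D: concomitant ASA, clopidogrel, or NSAIDs
        alcohol_excess: bool,         # D: > 8 units of alcohol per week
    ) -> int:
        """Return the HAS-BLED score (0-9); each risk factor is worth one point."""
        return sum([
            sbp_over_160, abnormal_renal, abnormal_liver, prior_stroke,
            bleeding_history, labile_inr, age_over_65,
            antiplatelet_or_nsaid, alcohol_excess,
        ])

    # Example: a hypertensive 70-year-old on warfarin (TTR 50%) and aspirin
    print(has_bled_score(True, False, False, False, False, True, True, True, False))  # 4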

Outcomes:

      • incidence of major bleeding within 1 year, overall
      • bleeds per 100 patient-years, by HAS-BLED score
      • c-statistic for the HAS-BLED score in predicting the risk of bleeding

Definitions:

      • major bleeding = bleeding causing hospitalization, a Hgb drop > 2 g/L, or requiring blood transfusion, that was not a hemorrhagic stroke
      • hemorrhagic stroke = focal neurologic deficit of sudden onset, diagnosed by a neurologist, lasting > 24 hours and caused by bleeding

Results:
3,456 patients with AF without mitral valve stenosis or valve surgery who completed their 1-year follow-up were analyzed retrospectively. 64.8% (2242) of these patients were on OAC (12.8% of whom were on concurrent antiplatelet therapy), 24.0% (828) were on antiplatelet therapy alone, and 10.2% (352) received no antithrombotic therapy. 1.5% (53) of patients experienced a major bleed during the first year, with 17% (9) of these patients sustaining intracerebral hemorrhage.

HAS-BLED Score      Bleeds per 100 Patient-Years
0                              1.13
1                              1.02
2                              1.88
3                              3.74
4                              8.70
5                             12.50
6*                             0.0

*n = 2 patients at risk; neither bled

Patients were given a HAS-BLED score and a HEMORR2HAGES score. C-statistics were then used to determine the predictive accuracy of each model overall as well as within patient subgroups (OAC alone, OAC + antiplatelet, antiplatelet alone, no antithrombotic therapy).

C-statistics for HAS-BLED were as follows: for the overall cohort, 0.72 (95% CI 0.65-0.79); for OAC alone, 0.69 (95% CI 0.59-0.80); for OAC + antiplatelet, 0.78 (95% CI 0.65-0.91); for antiplatelet alone, 0.91 (95% CI 0.83-1.00); and for those on no antithrombotic therapy, 0.85 (95% CI 0.00-1.00).

C-statistics for HEMORR2HAGES were as follows: for the overall cohort, 0.66 (95% CI 0.57-0.74); for OAC alone, 0.64 (95% CI 0.53-0.75); for OAC + antiplatelet, 0.83 (95% CI 0.74-0.91); for antiplatelet alone, 0.83 (95% CI 0.68-0.98); and for those on no antithrombotic therapy, 0.81 (95% CI 0.00-1.00).
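
(For orientation: the c-statistic is the probability that a randomly chosen patient who bled was assigned a higher score than a randomly chosen patient who did not bleed; it equals the area under the ROC curve, where 0.5 is chance and 1.0 is perfect discrimination. A minimal sketch of the computation, using invented data rather than the study's:)

    # Sketch: computing a c-statistic (area under the ROC curve) for a risk
    # score against observed bleeding outcomes. These data are invented.
    from sklearn.metrics import roc_auc_score

    scores = [1, 3, 0, 4, 2, 5, 1, 2]   # HAS-BLED scores for 8 hypothetical patients
    bled   = [0, 1, 0, 1, 0, 0, 0, 1]   # 1 = major bleed within 1 year

    print(roc_auc_score(bled, scores))  # ≈ 0.77 for these invented data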

Implication/Discussion:
This study helped to establish a practical and user-friendly assessment of bleeding risk in AF. HAS-BLED is superior to its predecessor HEMORR2HAGES in that it has an easier-to-remember acronym and is quicker and simpler to perform. All of its risk factors are readily available from the clinical history or are routinely tested. Both stratification tools had broadly similar c-statistics for the overall cohort (0.72 for HAS-BLED versus 0.66 for HEMORR2HAGES). However, HAS-BLED was particularly useful in patients on antiplatelet therapy alone or on no antithrombotic therapy at all (c-statistics of 0.91 and 0.85, respectively).

This study is useful because it provides evidence-based, easily calculable, and actionable risk stratification for bleeding in AF. In prior studies, such as ACTIVE-A (ASA + clopidogrel versus ASA alone for patients with AF deemed unsuitable for OAC), almost half of all patients (n ≈ 3500) were classified as “unsuitable for OAC” based solely on physician clinical judgment, without any predefined objective scoring. Now physicians have an objective way to assess bleeding risk, rather than relying on a “gut feeling” or a desire to avoid iatrogenic insult.

The RE-LY trial used the HAS-BLED score to decide which patients with AF should get the standard dabigatran dose (150mg BID) versus a lower dose (110mg BID) for anticoagulation. This risk-stratified dosing resulted in a significant reduction in major bleeding compared with warfarin and maintained a similar reduction in stroke risk.

Furthermore, the HAS-BLED score could allow the physician to be more confident when deciding which patients may be appropriate for referral for a left atrial appendage occlusion device (e.g. Watchman).

Limitations:
The study had a limited number of major bleeds and a short follow-up period, and thus it is possible that other important risk factors for bleeding were not identified. Also, there were large numbers of patients lost to 1-year follow-up. These patients were likely to have had more comorbidities and may have transferred to nursing homes or even have died – which may have led to an underestimate of bleeding rates. Furthermore, the study had a modest number of very elderly patients (i.e. 75-84 and ≥ 85), who are likely to represent the greatest bleeding risk.

Bottom Line:
HAS-BLED provides an easy, practical tool to assess the individual bleeding risk of patients with AF. Oral anticoagulation should be considered for scores of 3 or less. When HAS-BLED scores are ≥ 4, it is reasonable to think about alternatives to oral anticoagulation.

Further Reading/References:
1. HAS-BLED @ 2 Minute Medicine
2. ACTIVE-A trial
3. RE-LY trial
4. RE-LY @ Wiki Journal Club
5. HAS-BLED Calculator
6. HEMORR2HAGES Calculator
7. CHADS2 Calculator
8. CHA2DS2VASC Calculator
9. Watchman (for Healthcare Professionals)
10. “Bleeding Risk Scores in Atrial Fibrillation: Helpful or Harmful?” Journal of the American Heart Association (2018)

Summary by Patrick Miller, MD

Image Credit: CardioNetworks, CC BY-SA 3.0, via Wikimedia Commons

Week 33 – ALLHAT

“Major Outcomes in High-Risk Hypertensive Patients Randomized to Angiotensin-Converting Enzyme Inhibitor or Calcium Channel Blocker vs. Diuretic”

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)

JAMA. 2002 Dec 18;288(23):2981-97. [free full text]

Hypertension is a ubiquitous disease, and the cardiovascular and mortality benefits of BP control have been well described. However, as the number of available antihypertensive classes proliferated over the preceding decades, a head-to-head comparison of different antihypertensive regimens was needed to determine the optimal first-step therapy. The 2002 ALLHAT trial was a landmark trial in this effort.

Population:
33,357 patients aged 55 years or older with hypertension and at least one other coronary heart disease (CHD) risk factor (previous MI or stroke, LVH by ECG or echo, T2DM, current cigarette smoking, HDL < 35 mg/dL, or documentation of other atherosclerotic cardiovascular disease (CVD)). Notable exclusion criteria: history of hospitalization for CHF, history of treated symptomatic CHF, or known LVEF < 35%.

Intervention:
Prior antihypertensives were discontinued upon initiation of the study drug. Patients were randomized to one of three study drugs in a double-blind fashion. Study drugs and additional drugs were added in a step-wise fashion to achieve a goal BP < 140/90 mmHg.

Step 1: titrate assigned study drug

      • chlorthalidone: 12.5 –> 12.5 (sham titration) –> 25 mg/day
      • amlodipine: 2.5 –> 5 –> 10 mg/day
      • lisinopril: 10 –> 20 –> 40 mg/day

Step 2: add open-label agents at treating physician’s discretion (atenolol, clonidine, or reserpine)

      • atenolol: 25 to 100 mg/day
      • reserpine: 0.05 to 0.2 mg/day
      • clonidine: 0.1 to 0.3 mg BID

Step 3: add hydralazine 25 to 100 mg BID
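
For concreteness, the stepped protocol above can be encoded as a simple data structure (an illustrative sketch only; the variable names are ours, not from the trial's materials):

    # Illustrative encoding of the ALLHAT stepped-care protocol above.
    # Doses in mg/day unless noted; variable names are ours.
    step1_titration = {
        "chlorthalidone": [12.5, 12.5, 25],  # second step was a sham titration
        "amlodipine": [2.5, 5, 10],
        "lisinopril": [10, 20, 40],
    }
    step2_open_label = {
        "atenolol": (25, 100),
        "reserpine": (0.05, 0.2),
        "clonidine": (0.1, 0.3),  # dosed BID
    }
    step3 = {"hydralazine": (25, 100)}  # dosed BID

    print(step1_titration["chlorthalidone"])  # [12.5, 12.5, 25]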

Comparison:
Pairwise comparisons with respect to outcomes of chlorthalidone vs. either amlodipine or lisinopril. A doxazosin arm existed initially, but it was terminated early due to an excess of CV events, primarily driven by CHF.

Outcomes:
Primary – combined fatal CHD or nonfatal MI

Secondary

      • all-cause mortality
      • fatal and nonfatal stroke
      • combined CHD (primary outcome, PCI, or hospitalized angina)
      • combined CVD (CHD, stroke, non-hospitalized treated angina, CHF [fatal, hospitalized, or treated non-hospitalized], and PAD)

Results:
Over a mean follow-up period of 4.9 years, there was no difference between the groups in either the primary outcome or all-cause mortality.

When compared with chlorthalidone at 5 years, the amlodipine and lisinopril groups had significantly higher systolic blood pressures (by 0.8 mmHg and 2 mmHg, respectively). The amlodipine group had a lower diastolic blood pressure when compared to the chlorthalidone group (0.8 mmHg).

When comparing amlodipine to chlorthalidone for the pre-specified secondary outcomes, amlodipine was associated with an increased risk of heart failure (RR 1.38; 95% CI 1.25-1.52).

When comparing lisinopril to chlorthalidone for the pre-specified secondary outcomes, lisinopril was associated with an increased risk of stroke (RR 1.15; 95% CI 1.02-1.30), combined CVD (RR 1.10; 95% CI 1.05-1.16), and heart failure (RR 1.20; 95% CI 1.09-1.34). The increased risk of stroke was mostly driven by 3 subgroups: women (RR 1.22; 95% CI 1.01-1.46), blacks (RR 1.40; 95% CI 1.17-1.68), and non-diabetics (RR 1.23; 95% CI 1.05-1.44). The increased risk of CVD was statistically significant in all subgroups except in patients aged less than 65. The increased risk of heart failure was statistically significant in all subgroups.

Discussion:
In patients with hypertension and at least one additional risk factor for CHD, chlorthalidone, lisinopril, and amlodipine performed similarly in reducing the risks of fatal CHD and nonfatal MI.

The study has several strengths: a large and diverse study population, a randomized, double-blind structure, and the rigorous evaluation of three of the most commonly prescribed “newer” classes of antihypertensives. Unfortunately, neither an ARB nor an aldosterone antagonist was included in the study. Additionally, the step-up therapies were not reflective of contemporary practice. (Instead, patients would likely be prescribed one or more of the primary study drugs.)

The ALLHAT study is one of the hallmark studies of hypertension and has played an important role in hypertension guidelines since it was published. Following the publication of ALLHAT, thiazide diuretics became widely used as first-line drugs in the treatment of hypertension. The low cost of thiazides and their limited side-effect profile are particularly attractive class features. While ALLHAT looked specifically at chlorthalidone, in practice the positive findings were attributed to HCTZ, which has been more often prescribed. The authors of ALLHAT argued that the superiority of thiazides was likely a class effect, but according to the analysis at Wiki Journal Club, “there is little direct evidence that HCTZ specifically reduces the incidence of CVD among hypertensive individuals.” Furthermore, a 2006 study noted that HCTZ has worse 24-hour BP control than chlorthalidone due to a shorter half-life. The ALLHAT authors note that “since a large proportion of participants required more than 1 drug to control their BP, it is reasonable to infer that a diuretic be included in all multi-drug regimens, if possible.” The 2017 ACC/AHA High Blood Pressure Guidelines state that, of the four thiazide diuretics on the market, chlorthalidone is preferred because of its prolonged half-life and trial-proven reduction of CVD (via the ALLHAT study).

Further Reading / References:
1. 2017 ACC Hypertension Guidelines
2. ALLHAT @ Wiki Journal Club
3. 2 Minute Medicine
4. Ernst et al, “Comparative antihypertensive effects of hydrochlorothiazide and chlorthalidone on ambulatory and office blood pressure.” (2006)
5. Gillis Pharmaceuticals
6. Concepts in Hypertension, Volume 2 Issue 6

Summary by Ryan Commins, MD

Image Credit: Kimivanil, CC BY-SA 4.0, via Wikimedia Commons

Week 31 – PLCO

“Mortality Results from a Randomized Prostate-Cancer Screening Trial”

by the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial project team

N Engl J Med. 2009 Mar 26;360(13):1310-9. [free full text]

The use of prostate-specific antigen (PSA) testing to screen for prostate cancer has been a contentious subject for decades. Prior to the 2009 PLCO trial, there were no high-quality prospective studies of the potential benefit of PSA testing.

The trial enrolled men ages 55-74 (excluded if history of prostate, lung, or colorectal cancer, current cancer treatment, or > 1 PSA test in the past 3 years). Patients were randomized to annual PSA testing for 6 years with annual digital rectal exam (DRE) for 4 years or to usual care. The primary outcome was the prostate-cancer-attributable death rate, and the secondary outcome was the incidence of prostate cancer.

38,343 patients were randomized to the screening group, and 38,350 were randomized to the usual-care group. Baseline characteristics were similar in both groups. Median follow-up duration was 11.5 years. Patients in the screening group were 85% compliant with PSA testing and 86% compliant with DRE. In the usual-care group, 40% of patients received a PSA test within the first year, and 52% received a PSA test by the sixth year. Cumulative DRE rates in the usual-care group were between 40-50%. By seven years, there was no significant difference in rates of death attributable to prostate cancer: there were 50 deaths in the screening group and 44 in the usual-care group (rate ratio 1.13, 95% CI 0.75–1.70). At ten years, there were 92 and 82 deaths in the respective groups (rate ratio 1.11, 95% CI 0.83–1.50). By seven years, there was a higher rate of prostate cancer detection in the screening group: 2820 patients were diagnosed in the screening group versus only 2322 in the usual-care group (rate ratio 1.22, 95% CI 1.16–1.29). By ten years, there were 3452 and 2974 diagnoses in the respective groups (rate ratio 1.17, 95% CI 1.11–1.22). Treatment-related complications (e.g. infection, incontinence, impotence) were not reported in this study.

In summary, yearly PSA screening increased the prostate cancer diagnosis rate but did not impact prostate-cancer mortality when compared to the standard of care. However, there were relatively high rates of PSA testing in the usual-care group (40-50%). The authors cite this finding as a probable major contributor to the lack of mortality difference. Other factors that may have biased the trial toward a null result were prior PSA testing and advances in treatments for prostate cancer during the trial. Regarding the former, 44% of men in both groups had already had one or more PSA tests prior to study enrollment. Prior PSA testing likely contributed to selection bias.

PSA screening recommendations prior to this 2009 study:

      • American Urological Association and American Cancer Society – recommended annual PSA and DRE, starting at age 50 if normal risk and earlier in high-risk men
      • National Comprehensive Cancer Network: “a risk-based screening algorithm, including family history, race, and age”
      • 2008 USPSTF Guidelines: insufficient evidence to determine balance between risks/benefits of PSA testing in men younger than 75; recommended against screening in age 75+ (Grade I Recommendation)

The authors of this study conclude that their results “support the validity of the recent [2008] recommendations of the USPSTF, especially against screening all men over the age of 75.”

However, the conclusions of the European Randomized Study of Screening for Prostate Cancer (ERSPC), which was published concurrently with PLCO in NEJM, differed. In ERSPC, PSA was screened every 4 years. The authors found an increased rate of detection of prostate cancer, but, more importantly, they found that screening decreased prostate cancer mortality (adjusted rate ratio 0.80, 95% CI 0.65–0.98, p = 0.04; NNT 1410 men receiving 1.7 screening visits over 9 years). Like PLCO, this study did not report treatment harms that may have been associated with overly zealous diagnosis.

The USPSTF reexamined its PSA guidelines in 2012. Given the lack of mortality benefit in PLCO, the pitiful mortality benefit in ERSPC, and the assumed harm from over-diagnosis and excessive intervention in patients who would ultimately not succumb to prostate cancer, the USPSTF concluded that PSA-based screening for prostate cancer should not be offered (Grade D Recommendation).

In the following years, the pendulum has swung back partially toward screening. In May 2018, the USPSTF released new recommendations that encourage men ages 55-69 to have an informed discussion with their physician about potential benefits and harms of PSA-based screening (Grade C Recommendation). The USPSTF continues to recommend against screening in patients over 70 years old (Grade D).

Screening for prostate cancer remains a complex and controversial topic. Guidelines from the American Cancer Society, American Urological Association, and USPSTF vary, but ultimately all recommend shared decision-making. UpToDate has a nice summary of talking points culled from several sources.

Further Reading/References:
1. 2 Minute Medicine
2. ERSPC @ Wiki Journal Club
3. UpToDate, Screening for Prostate Cancer

Summary by Duncan F. Moore, MD

Image Credit: Otis Brawley, Public Domain, NIH National Cancer Institute Visuals Online

Week 28 – FACT

“Febuxostat Compared with Allopurinol in Patients with Hyperuricemia and Gout”

aka the Febuxostat versus Allopurinol Controlled Trial (FACT)

N Engl J Med. 2005 Dec 8;353(23):2450-61. [free full text]

Gout is thought to affect approximately 3% of the US population, and its prevalence appears to be rising. Gout occurs due to precipitation of monosodium urate crystals from supersaturated body fluids. Generally, the limit of solubility is 6.8 mg/dL, but local factors such as temperature, pH, and other solutes can lower this threshold. A critical element in the treatment of gout is the lowering of the serum urate concentration below the limit of solubility, and generally, the accepted target is 6.0 mg/dL. The xanthine oxidase inhibitor allopurinol is the most commonly used urate-lowering pharmacologic therapy. Allopurinol rarely can have severe or life-threatening side effects, particularly among patients with renal impairment. Thus drug companies have sought to bring to market other xanthine oxidase inhibitors such as febuxostat (trade name Uloric). In this chronic and increasingly burdensome disease, a more efficacious drug with fewer exclusion criteria and fewer side effects would be a blockbuster.

The study enrolled adults with gout and a serum urate concentration of ≥ 8.0 mg/dL. Exclusion criteria included serum Cr ≥ 1.5 mg/dL or eGFR < 50 ml/min (a relative contraindication to allopurinol use) as well as the presence of various conditions or use of various drugs that would affect urate metabolism and/or clearance of the trial drugs. (Patients already on urate-lowering therapy were given a two-week washout period prior to randomization.) Patients were randomized to treatment for 52 weeks with either febuxostat 80mg PO daily, febuxostat 120mg PO daily, or allopurinol 300mg PO daily. Because the initiation of urate-lowering therapy places patients at increased risk of gout flares, patients were placed on prophylaxis with either naproxen 250mg PO BID or colchicine 0.6mg PO daily for the first 8 weeks of the study. The primary endpoint was a serum urate level of < 6.0 mg/dL at weeks 44, 48, and 52. Selected secondary endpoints included percentage reduction in serum urate from baseline at each visit, percentage reduction in the area of a selected tophus, and prevalence of acute gout flares requiring treatment.
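
In other words, a patient met the primary endpoint only if the serum urate was below target at all three of the final monthly visits. A minimal sketch of that criterion (the function name and data layout are ours, for illustration):

    # Sketch of the FACT primary endpoint: urate < 6.0 mg/dL at ALL of
    # weeks 44, 48, and 52. Function name and data layout are illustrative.
    def met_primary_endpoint(urate_by_week):
        return all(urate_by_week[week] < 6.0 for week in (44, 48, 52))

    print(met_primary_endpoint({44: 5.8, 48: 5.5, 52: 6.1}))  # False (week 52 above target)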

762 patients were randomized. Baseline characteristics were statistically similar among all three groups. A majority of the patients were white males age 50+ who drank alcohol. Average serum urate was slightly less than 10 mg/dL. The primary endpoint (urate < 6.0 at the last three monthly measurements) was achieved in 53% of patients taking febuxostat 80mg, 62% of patients taking febuxostat 120mg, and 21% of patients taking allopurinol 300mg (p < 0.001 for each febuxostat group versus allopurinol). Regarding selected secondary endpoints:

1) The percent reduction in serum urate from baseline at the final visit was 44.73 ± 19.10 in the febuxostat 80mg group, 52.52 ± 19.91 in the febuxostat 120mg group, and 32.99 ± 15.33 in the allopurinol 300mg group (p < 0.001 for each febuxostat group versus allopurinol, and p < 0.001 for febuxostat 80mg versus 120mg).

2) The percentage reduction in area of a single selected tophus was assessed in 156 patients who had tophi at baseline. At week 52, the median percentage reduction in tophus area was 83% in febuxostat 80mg patients, 66% in febuxostat 120mg patients, and 50% in allopurinol patients (no statistical difference per authors, p values not reported). Additionally, there was no significant reduction in tophus count in any of the groups.

3) During weeks 1-8 (in which acute gout flare prophylaxis was scheduled), 36% of patients in the febuxostat 120mg group sustained a flare, whereas only 22% of the febuxostat 80mg group and 21% of the allopurinol group sustained a flare (p < 0.001 for both pairwise comparisons versus febuxostat 120mg). During weeks 9-52 (in which acute gout flare prophylaxis was no longer scheduled), a similar proportion of patients in each treatment group sustained an acute flare of gout (64% in the febuxostat 80mg group, 70% in the febuxostat 120mg group, and 64% in the allopurinol group).

Finally, the incidence of treatment-related adverse events was similar among all three groups (see Table 3). Treatment was most frequently discontinued in the febuxostat 120mg group (98 patients, versus 88 patients in the febuxostat 80mg group and 66 patients in the allopurinol group; p = 0.003 for comparison between febuxostat 120mg and allopurinol).

In summary, this large RCT of urate-lowering therapy among gout patients found that febuxostat, dosed at either 80mg or 120mg PO daily, was more efficacious than allopurinol 300mg in reducing serum urate to below 6.0 mg/dL. Febuxostat was not superior to allopurinol with respect to the tested clinical outcomes of tophus size reduction, tophus count, and acute gout flares. Safety profiles were similar among the three regimens.

The authors note that the incidence of gout flares during and after the prophylaxis phase of the study “calls attention to a well-described paradox with important implications for successful management of gout: the risk of acute gout flares is increased early in the course of urate-lowering treatment” and the authors suggest that there is “a role for more sustained prophylaxis during the initiation of urate-lowering therapy than was provided here” (2458).

A limitation of this study is that its comparator group, allopurinol 300mg PO daily, may not have represented optimal use of the drug. Allopurinol should be uptitrated q2-4 weeks to the minimum dose required to maintain the goal serum urate of < 6.0 mg/dL (< 5.0 if tophi are present). According to UpToDate, “a majority of gout patients require doses of allopurinol exceeding 300 mg/day in order to maintain serum urate < 6.0 mg/dL.” In the United States allopurinol has been approved for doses of up to 800 mg daily. The authors state that “titration of allopurinol would have compromised the blinding of the study” (2459), but this is not true – blinded, protocolized titration of study or comparator drugs has been performed in numerous other RCTs and could have been achieved simply at greater cost and effort to the study sponsor (which happens to be the drug company TAP Pharmaceuticals). The likelihood that such titration would have shifted the results toward a null effect does not go unnoted. Another limitation is the relatively short duration of the trial – follow-up may have been insufficient to establish superiority in clinical outcomes, given the chronic nature of the disease.

In the UK, the National Institute for Health and Care Excellence (NICE), the agency tasked with assessing cost-effectiveness of various medical therapies, recommended as of 2008 that febuxostat be used for the treatment of hyperuricemia in gout “only for people who are intolerant of allopurinol or for whom allopurinol is contraindicated.”

Of note, a recent study funded by Takeda Pharmaceuticals demonstrated the non-inferiority of febuxostat relative to allopurinol with respect to rates of adverse cardiovascular events in patients with gout and major pre-existing cardiovascular conditions.

Allopurinol started at 100mg PO daily and titrated gradually to goal serum urate is the current general practice in the US. However, patients of Chinese, Thai, Korean, or “another ethnicity with similarly increased frequency of HLA-B*5801” should be tested for HLA-B*5801 prior to initiation of allopurinol therapy, as those patients are at increased risk of a severe cutaneous adverse reaction to allopurinol.

Further Reading/References:
1. FACT @ ClinicalTrials.gov
2. UpToDate “Pharmacologic urate-lowering therapy and treatment of tophi in patients with gout”
3. NICE: “Febuxostat for the management of hyperuricemia in people with gout”
4. “Cardiovascular Safety of Febuxostat or Allopurinol in Patients with Gout.” N Engl J Med. 2018 Mar 29;378(13):1200-1210.

Summary by Duncan F. Moore, MD

Image Credit: James Gillray, US Public Domain, via Wikimedia Commons

Week 19 – RALES

“The effect of spironolactone on morbidity and mortality in patients with severe heart failure”

by the Randomized Aldactone Evaluation Study Investigators

N Engl J Med. 1999 Sep 2;341(10):709-17. [free full text]

Inhibition of the renin-angiotensin-aldosterone system (RAAS) is a tenet of the treatment of heart failure with reduced ejection fraction (see post from Week 12 – SOLVD). However, physiologic evidence suggests that ACEis only partially inhibit aldosterone production. It had been hypothesized that aldosterone receptor blockade (e.g. with spironolactone) in conjunction with ACE inhibition could synergistically improve RAAS blockade; however, there was substantial clinician concern about the risk of hyperkalemia. In 1996, the RALES investigators demonstrated that the addition of spironolactone 12.5 or 25mg daily in combination with an ACEi resulted in laboratory evidence of increased RAAS inhibition at 12 weeks with an acceptable increased risk of hyperkalemia. The 1999 RALES study was thus designed to evaluate prospectively the mortality benefit and safety of adding relatively low-dose aldosterone receptor blockade to the standard HFrEF treatment regimen.

The study enrolled patients with severe HFrEF (LVEF ≤ 35% and NYHA class IV symptoms within the past 6 months and class III or IV symptoms at enrollment) currently being treated with an ACEi (if tolerated) and a loop diuretic. Patients were randomized to the addition of spironolactone 25mg PO daily or placebo. (The dose could be increased at 8 weeks to 50mg PO daily if the patient showed signs or symptoms of progression of CHF without evidence of hyperkalemia.) The primary outcome was all-cause mortality. Secondary outcomes included death from cardiac causes, hospitalization for cardiac causes, change in NYHA functional class, and incidence of hyperkalemia.

1663 patients were randomized. The trial was stopped early (mean follow-up of 24 months) due to the marked improvement in mortality among the spironolactone group. Among the placebo group, 386 (46%) patients died, whereas only 284 (35%) patients in the spironolactone group died (RR 0.70, 95% CI 0.60 to 0.82, p < 0.001; NNT = 8.8). See the dramatic Kaplan-Meier curve in Figure 1. Relative to placebo, spironolactone treatment reduced deaths secondary to cardiac causes by 31% and hospitalizations for cardiac causes by 30% (p < 0.001 for both). Among placebo patients, NYHA class improved in 33% of cases, was unchanged in 18%, and worsened in 48%; among spironolactone patients, NYHA class improved in 41%, was unchanged in 21%, and worsened in 38% (p < 0.001 for the group difference by Wilcoxon test). “Serious hyperkalemia” occurred in 10 (1%) of placebo patients and 14 (2%) of spironolactone patients (p = 0.42). Treatment discontinuation rates were similar among the two groups.
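
The reported NNT can be verified by hand from the event counts (the group sizes of 841 placebo and 822 spironolactone patients are taken from the published paper, not from the summary above):

    # Back-of-the-envelope check of the NNT quoted above. Group sizes
    # (841 placebo, 822 spironolactone) are from the published paper.
    placebo_deaths, placebo_n = 386, 841
    spiro_deaths, spiro_n = 284, 822

    arr = placebo_deaths / placebo_n - spiro_deaths / spiro_n  # absolute risk reduction
    nnt = 1 / arr
    print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")  # ARR = 11.3%, NNT = 8.8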

Among patients with severe HFrEF, the addition of spironolactone improved mortality, reduced hospitalizations for cardiac causes, and improved symptoms without conferring an increased risk of serious hyperkalemia. The authors hypothesized that spironolactone “can prevent progressive heart failure by averting sodium retention and myocardial fibrosis” and can “prevent sudden death from cardiac causes by averting potassium loss and by increasing the myocardial uptake of norepinephrine.” Myocardial fibrosis is thought to be reduced via blocking the role aldosterone plays in collagen formation. Overall, this was a well-designed double-blind RCT that built upon the safety data of the dose-finding 1996 RALES trial and ushered in the era of routine use of aldosterone receptor blockade in severe HFrEF. In 2003, the EPHESUS trial demonstrated a mortality benefit of aldosterone antagonism (with eplerenone) among patients with LV dysfunction following acute MI, and in 2011, the EMPHASIS-HF trial demonstrated a reduction in CV death or HF hospitalization with eplerenone use among patients with EF ≤ 35% and NYHA class II symptoms (and notably among patients with a much higher prevalence of beta-blocker use than those of the mid-1990s RALES cohort). The 2014 TOPCAT trial demonstrated that, among patients with HFpEF, spironolactone does not reduce a composite endpoint of CV mortality, aborted cardiac arrest, or HF hospitalizations.

The 2013 ACCF/AHA Guideline for the Management of Heart Failure recommends the use of aldosterone receptor antagonists in patients with NYHA class II-IV symptoms with LVEF ≤ 35% and following an acute MI in patients with LVEF ≤ 40% with symptomatic HF or with a history of diabetes mellitus. Contraindications include Cr ≥ 2.5 or K ≥ 5.0.

Further Reading/References:
1. “Effectiveness of spironolactone added to an angiotensin-converting enzyme inhibitor and a loop diuretic for severe chronic congestive heart failure (the Randomized Aldactone Evaluation Study [RALES]).” American Journal of Cardiology, 1996.
2. RALES @ Wiki Journal Club
3. RALES @ 2 Minute Medicine
4. EPHESUS @ Wiki Journal Club
5. EMPHASIS-HF @ Wiki Journal Club
6. TOPCAT @ Wiki Journal Club
7. 2013 ACCF/AHA Guideline for the Management of Heart Failure

Summary by Duncan F. Moore, MD

Image Credit: Spirono, CC0 1.0, via Wikimedia Commons

Week 16 – National Lung Screening Trial (NLST)

“Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”

by the National Lung Screening Trial (NLST) Research Team

N Engl J Med. 2011 Aug 4;365(5):395-409 [free full text]

Despite a reduction in smoking rates in the United States, lung cancer remains the number one cause of cancer death both in the United States and worldwide. Earlier studies of plain chest radiography for lung cancer screening demonstrated no benefit, and in 2002 the National Lung Screening Trial (NLST) was undertaken to determine whether then-recent advances in CT technology could yield an effective lung cancer screening method.

The study enrolled adults age 55-74 with 30+ pack-years of smoking (former smokers must have quit within the past 15 years). Patients were randomized either to three annual screenings for lung cancer with low-dose CT (intervention) or to three annual screenings with PA chest radiography (control). The primary outcome was mortality from lung cancer. Notable secondary outcomes were all-cause mortality and the incidence of lung cancer.

53,454 patients were randomized, and both groups had similar baseline characteristics. The low-dose CT group sustained 247 deaths from lung cancer per 100,000 person-years, whereas the radiography group sustained 309 deaths per 100,000 person-years, a 20.0% relative reduction in the rate of death from lung cancer in the CT group (95% CI 6.8–26.7%, p = 0.004). The number needed to screen with CT to prevent one lung cancer death was 320. There were 1877 deaths from any cause in the CT group and 2000 deaths in the radiography group, a 6.7% relative reduction in the risk of death from any cause with CT screening (95% CI 1.2–13.6%, p = 0.02). The incidence of lung cancer was 645 cases per 100,000 person-years in the CT group and 572 cases per 100,000 person-years in the radiography group (RR 1.13, 95% CI 1.03–1.23).
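
The headline relative reduction follows directly from the two reported death rates; a quick check with the rounded rates (per 100,000 person-years):

    # Reproducing the relative reduction in lung-cancer mortality from the
    # reported death rates (per 100,000 person-years; rounded).
    ct_rate, cxr_rate = 247, 309
    relative_reduction = 1 - ct_rate / cxr_rate
    print(f"{relative_reduction:.1%}")  # ≈ 20.1% with rounded rates; the paper reports 20.0%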

Lung cancer screening with low-dose CT scan in high-risk patients provides a significant mortality benefit. This trial was stopped early because the mortality benefit was so high. The benefit was driven by the reduction in deaths attributed to lung cancer, and when deaths from lung cancer were excluded from the overall mortality analysis, there was no significant difference between the two arms. Largely on the basis of this study, the 2013 USPSTF guidelines for lung cancer screening recommend annual low-dose CT scan in patients who meet NLST inclusion criteria. However, it must be noted that, even in the “ideal” circumstances of this trial performed at experienced centers, 96% of abnormal CT screening results were actually false positives. Of all positive results, 11% led to invasive studies.

Per UpToDate, since NLST, several European low-dose CT screening trials have been published. However, all but one (NELSON) appear to be underpowered to demonstrate a possible mortality reduction. Meta-analysis of all such RCTs could allow for further refinement of risk stratification, screening frequency, and the management of positive screening findings.

No randomized trial has ever demonstrated a mortality benefit of plain chest radiography for lung cancer screening. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial tested this modality against “community care,” and because the PLCO trial was ongoing at the time the NLST was designed, the NLST authors decided to compare their intervention (CT) against plain chest radiography in case the PLCO results for plain chest radiography turned out to be positive. Ultimately, they were not.

Further Reading:
1. USPSTF Guidelines for Lung Cancer Screening (2013)
2. NLST @ ClinicalTrials.gov
3. NLST @ Wiki Journal Club
4. NLST @ 2 Minute Medicine
5. UpToDate, “Screening for lung cancer”

Summary by Duncan F. Moore, MD

Image Credit: Yale Rosen, CC BY-SA 2.0, via Wikimedia Commons

Week 14 – IDNT

“Renoprotective Effect of the Angiotensin-Receptor Antagonist Irbesartan in Patients with Nephropathy Due to Type 2 Diabetes”

aka the Irbesartan Diabetic Nephropathy Trial (IDNT)

N Engl J Med. 2001 Sep 20;345(12):851-60. [free full text]

Diabetes mellitus is the most common cause of ESRD in the US. In 1993, a landmark study in NEJM demonstrated that captopril (vs. placebo) slowed the deterioration in renal function in patients with T1DM. However, prior to this 2001 study, no study had definitively addressed whether a similar improvement in renal outcomes could be achieved with RAAS blockade in patients with T2DM. Irbesartan (Avapro) is an angiotensin II receptor blocker that was first approved in 1997 for the treatment of hypertension. Its marketer, Bristol-Myers Squibb, sponsored this trial in hopes of broadening the market for its relatively new drug.

This trial randomized patients with T2DM, hypertension, and nephropathy (per proteinuria and elevated Cr) to treatment with either irbesartan, amlodipine, or placebo. The drug in each arm was titrated to achieve a target SBP ≤ 135, and all patients were allowed non-ACEi/non-ARB/non-CCB drugs as needed. The primary outcome was a composite of the doubling of serum Cr, onset of ESRD, or all-cause mortality. Secondary outcomes included individual components of the primary outcome and a composite cardiovascular outcome.

1715 patients were randomized. The mean blood pressure after the baseline visit was 140/77 in the irbesartan group, 141/77 in the amlodipine group, and 144/80 in the placebo group (p = 0.001 for pairwise comparisons of MAP between irbesartan or amlodipine and placebo). Regarding the primary composite renal endpoint, the unadjusted relative risk was 0.80 (95% CI 0.66-0.97, p = 0.02) for irbesartan vs. placebo, 1.04 (95% CI 0.86-1.25, p = 0.69) for amlodipine vs. placebo, and 0.77 (0.63-0.93, p = 0.006) for irbesartan vs. amlodipine. The groups also differed with respect to individual components of the primary outcome. The unadjusted relative risk of creatinine doubling was 33% lower among irbesartan patients than among placebo patients (p = 0.003) and was 37% lower than among amlodipine patients (p < 0.001). The relative risks of ESRD and all-cause mortality did not differ significantly among the groups. There were no significant group differences with respect to the composite cardiovascular outcome. Importantly, a sensitivity analysis was performed which demonstrated that the conclusions of the primary analysis were not impacted significantly by adjustment for mean arterial pressure achieved during follow-up.

In summary, irbesartan treatment in T2DM resulted in superior renal outcomes when compared to both placebo and amlodipine. This beneficial effect was independent of blood pressure lowering. This was a well-designed, double-blind, randomized, controlled trial. However, it was industry-sponsored, and in retrospect, its choice of study drug seems quaint. The direct conclusion of this trial is that irbesartan is renoprotective in T2DM. In the discussion of IDNT, the authors hypothesize that “the mechanism of renoprotection by agents that block the action of angiotensin II may be complex, involving hemodynamic factors that lower the intraglomerular pressure, the beneficial effects of diminished proteinuria, and decreased collagen formation that may be related to decreased stimulation of transforming growth factor beta by angiotensin II.”

In September 2002, on the basis of this trial, the FDA broadened the official indication of irbesartan to include the treatment of type 2 diabetic nephropathy. This trial was published concurrently in NEJM with the RENAAL trial [https://www.wikijournalclub.org/wiki/RENAAL], a similar trial of losartan vs. placebo in T2DM that demonstrated a similar reduction in the doubling of serum creatinine as well as a 28% reduction in progression to ESRD. In conjunction with the original 1993 ACEi-in-T1DM study, these two 2001 ARB-in-T2DM studies led to the overall notion of a renoprotective class effect of ACEis/ARBs in diabetes. Enalapril’s and lisinopril’s patents expired in 2000 and 2002, respectively. Shortly afterward, generic, once-daily ACE inhibitors entered the US market. Ultimately, such drugs ended up commandeering much of the diabetic-nephropathy-in-T2DM market share for which irbesartan’s owners had hoped.

Further Reading/References:
1. “The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group.” NEJM 1993.
2. CSG Captopril Trial @ Wiki Journal Club
3. IDNT @ Wiki Journal Club
4. IDNT @ 2 Minute Medicine
5. US Food and Drug Administration, New Drug Application #020757
6. RENAAL @ Wiki Journal Club
7. RENAAL @ 2 Minute Medicine

Summary by Duncan F. Moore, MD

Image Credit: Skirtick, CC BY-SA 4.0, via Wikimedia Commons