Week 14 – CURB-65

“Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study”

Thorax. 2003 May;58(5):377-82. [free full text]

Community-acquired pneumonia (CAP) is frequently encountered by the admitting medicine team. Ideally, the severity of the patient’s illness at presentation and the risk of further decompensation should determine the appropriate setting for further care, whether as an outpatient, on an inpatient ward, or in the ICU. At the time of this 2003 study, the predominant decision aid was the 20-variable Pneumonia Severity Index. The authors of this study sought to develop a simpler decision aid for determining the appropriate level of care at presentation.

The study examined the 30-day mortality rates of adults admitted for CAP via the ED at three non-US academic medical centers (data from three previous CAP cohort studies). 80% of the dataset was analyzed as a derivation cohort – meaning it was used to identify statistically significant, clinically relevant prognostic factors that allowed for mortality risk stratification. The resulting model was applied to the remaining 20% of the dataset (the validation cohort) in order to assess the accuracy of its predictive ability.

The following variables were integrated into the final model (CURB-65):

  1. Confusion
  2. Urea > 7 mmol/L (BUN > 19 mg/dL)
  3. Respiratory rate ≥ 30 breaths/min
  4. low Blood pressure (systolic BP < 90 mmHg or diastolic BP ≤ 60 mmHg)
  5. age ≥ 65 years

1068 patients were analyzed. 821 (77%) were in the derivation cohort. 86% of patients received IV antibiotics, 5% were admitted to the ICU, and 4% were intubated. 30-day mortality was 9%. 9 of 11 clinical features examined in univariate analysis were statistically significant (see Table 2).

Ultimately, using the above-described CURB-65 model, in which 1 point is assigned for each clinical characteristic, patients with a CURB-65 score of 0 or 1 had 1.5% mortality, patients with a score of 2 had 9.2% mortality, and patients with a score of 3 or more had 22% mortality. Similar values were demonstrated in the validation cohort. Table 5 summarizes the sensitivity, specificity, PPVs, and NPVs of each CURB-65 score for 30-day mortality in both cohorts. As we would expect from a good predictive model, the sensitivity starts out very high and decreases with increasing score, while the specificity starts out very low and increases with increasing score. For the clinical application of their model, the authors selected the cut points of 1, 2, and 3 (see Figure 2).

In conclusion, CURB-65 is a simple 5-variable decision aid that is helpful in the initial stratification of mortality risk in patients with CAP.

The wide range of sensitivities and specificities across CURB-65 scores makes it a flexible tool for risk stratification. The authors felt that patients with a score of 0-1 were “likely suitable for home treatment,” patients with a score of 2 should have “hospital-supervised treatment,” and patients with a score of ≥ 3 had “severe pneumonia” and should be admitted (with consideration of ICU admission if the score is 4 or 5).
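
For readers who like to see the rule written out, below is a minimal sketch of the score and the authors’ suggested cut points in Python. The function and variable names are our own, and this is an illustration of the published rule, not a validated clinical tool.

```python
def curb65_score(confusion: bool, urea_mmol_per_l: float, rr_per_min: float,
                 sbp_mmhg: float, dbp_mmhg: float, age_years: int) -> int:
    """One point per CURB-65 criterion (see the list above)."""
    return sum([
        confusion,                          # Confusion
        urea_mmol_per_l > 7,                # Urea > 7 mmol/L (BUN > 19 mg/dL)
        rr_per_min >= 30,                   # Respiratory rate >= 30/min
        sbp_mmhg < 90 or dbp_mmhg <= 60,    # low Blood pressure
        age_years >= 65,                    # age >= 65
    ])

def suggested_disposition(score: int) -> str:
    """The authors' suggested cut points (see Figure 2 of the paper)."""
    if score <= 1:
        return "likely suitable for home treatment"
    if score == 2:
        return "hospital-supervised treatment"
    return "severe pneumonia: admit, consider ICU if score is 4 or 5"
```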

Following the publication of the CURB-65 Score, the author of the Pneumonia Severity Index (PSI) published a prospective cohort study of CAP that examined the discriminatory power (area under the receiver operating characteristic curve) of the PSI vs. CURB-65. His study found that the PSI “has a higher discriminatory power for short-term mortality, defines a greater proportion of patients at low risk, and is slightly more accurate in identifying patients at low risk” than the CURB-65 score.

Expert opinion at UpToDate prefers the PSI over the CURB-65 score based on its more robust base of confirmatory evidence. Of note, the author of the PSI is one of the authors of the relevant UpToDate article. In an important contrast to the CURB-65 authors, these experts suggest that patients with a CURB-65 score of 0 be managed as outpatients, while patients with a score of 1 and above “should generally be admitted.”

Further Reading/References:
1. Original publication of the PSI, NEJM (1997)
2. PSI vs. CURB-65 (2005)
3. Wiki Journal Club
4. 2 Minute Medicine
5. UpToDate, “CAP in adults: assessing severity and determining the appropriate level of care”

Summary by Duncan F. Moore, MD

Image Credit: by Christaras A, CC BY-SA 3.0

Week 13 – Sepsis-3

“The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)”

JAMA. 2016 Feb 23;315(8):801-10. [free full text]

In practice, we recognize sepsis as a potentially life-threatening condition that arises secondary to infection. Because the SIRS criteria were of limited sensitivity and specificity in identifying sepsis and because our understanding of the pathophysiology of sepsis had purportedly advanced significantly during the interval since the last sepsis definition, an international task force of 19 experts was convened to define and prognosticate sepsis more effectively. The resulting 2016 Sepsis-3 definition was the subject of immediate and sustained controversy.

In the words of Sepsis-3, sepsis simply “is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection.” The paper further defines organ dysfunction as an acute increase in the SOFA score of 2 or more points. However, the authors state that “the SOFA score is not intended to be used as a tool for patient management but as a means to clinically characterize a septic patient.” The authors note that qSOFA, an easier tool introduced in this paper, can promptly identify at the bedside those patients “with suspected infection who are likely to have a prolonged ICU stay or die in the hospital.” A positive qSOFA screen requires 2 or more of the following: altered mental status, SBP ≤ 100 mmHg, or respiratory rate ≥ 22 breaths/min. At the time of this endorsement of qSOFA, the tool had not been validated prospectively. Finally, septic shock was defined as sepsis with persistent hypotension requiring vasopressors to maintain MAP ≥ 65 mmHg and with a serum lactate > 2 mmol/L despite adequate volume resuscitation.
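
As an illustration, the qSOFA screen described above reduces to a simple three-item count. A minimal sketch (names are our own):

```python
def qsofa_positive(altered_mentation: bool, sbp_mmhg: float,
                   rr_per_min: float) -> bool:
    """Positive screen = 2 or more of the three qSOFA criteria."""
    criteria = [
        altered_mentation,   # acutely altered mental status
        sbp_mmhg <= 100,     # systolic BP <= 100 mmHg
        rr_per_min >= 22,    # respiratory rate >= 22 breaths/min
    ]
    return sum(criteria) >= 2
```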

As noted contemporaneously in the excellent PulmCrit blog post “Top ten problems with the new sepsis definition,” Sepsis-3 was not endorsed by the American College of Chest Physicians, the IDSA, any emergency medicine society, or any hospital medicine society. On behalf of the American College of Chest Physicians, Dr. Simpson published a scathing rejection of Sepsis-3 in Chest in May 2016. He noted “there is still no known precise pathophysiological feature that defines sepsis.” He went on to state “it is not clear to us that readjusting the sepsis criteria to be more specific for mortality is an exercise that benefits patients,” and said “to abandon one system of recognizing sepsis [SIRS] because it is imperfect and not yet in universal use for another system that is used even less seems unwise without prospective validation of that new system’s utility.”

In fact, the later validation of qSOFA demonstrated that the SIRS criteria had superior sensitivity for predicting in-hospital mortality while qSOFA had higher specificity. See the following posts at PulmCrit for further discussion: [https://emcrit.org/isepsis/isepsis-sepsis-3-0-much-nothing/] [https://emcrit.org/isepsis/isepsis-sepsis-3-0-flogging-dead-horse/].

At UpToDate, authors note that “data of the value of qSOFA is conflicting,” and because of this, “we believe that further studies that demonstrate improved clinically meaningful outcomes due to the use of qSOFA compared to clinical judgement are warranted before it can be routinely used to predict those at risk of death from sepsis.”

Additional Reading:
1. PulmCCM, “Simple qSOFA score predicts sepsis as well as anything else”
2. 2 Minute Medicine

Summary by Duncan F. Moore, MD

Image Credit: By Mark Oniffrey – Own work, CC BY-SA 4.0

Week 12 – Rivers Trial

“Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock”

N Engl J Med. 2001 Nov 8;345(19):1368-77. [free full text]

Sepsis is common and, in its more severe manifestations, confers a high mortality risk. Fundamentally, sepsis is a global mismatch between oxygen demand and delivery. Around the time of this seminal study by Rivers et al., there was increasing recognition of the concept of the “golden hour” in sepsis management – “where definitive recognition and treatment provide maximal benefit in terms of outcome” (1368). Rivers and his team created a “bundle” of early sepsis interventions that targeted preload, afterload, and contractility, dubbed early goal-directed therapy (EGDT). They evaluated this bundle’s effect on mortality and end-organ dysfunction.

The “Rivers trial” randomized adults presenting to a single US academic center ED with ≥ 2 SIRS criteria and either SBP ≤ 90 mmHg after a crystalloid challenge of 20-30 ml/kg over 30 min or lactate > 4 mmol/L to either treatment with the EGDT bundle or to the standard of care.

Intervention: early goal-directed therapy (EGDT)

  • Received a central venous catheter with continuous central venous O2 saturation (ScvO2) measurement
  • Treated according to EGDT protocol (see Figure 2, or below; a code sketch follows this list) in ED for at least six hours
    • 500 ml bolus of crystalloid q30min to achieve CVP 8-12 mmHg
    • Vasopressors to achieve MAP ≥ 65
    • Vasodilators to achieve MAP ≤ 90
    • If ScvO2 < 70%, transfuse RBCs to achieve Hct ≥ 30
    • If, after CVP, MAP, and Hct were optimized as above and ScvO2 remained < 70%, dobutamine was added and uptitrated to achieve ScvO2 ≥ 70 or until max dose 20 μg/kg/min
      • dobutamine was de-escalated if MAP < 65 or HR > 120
    • Patients in whom hemodynamics could not be optimized were intubated and sedated, in order to decrease oxygen consumption
  • Patients were transferred to an inpatient ICU bed as soon as possible, and upon transfer ScvO2 measurement was discontinued
  • Inpatient team was blinded to treatment group assignment
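
To make the cascade concrete, here is a minimal sketch of the protocol’s decision logic. It is a simplification (the actual protocol, per Figure 2, ran continuously for at least six hours rather than as a single pass), and all names and the strict ordering are our own:

```python
def next_egdt_step(cvp_mmhg: float, map_mmhg: float,
                   scvo2_pct: float, hct_pct: float) -> str:
    """Return the next intervention in the (simplified) EGDT cascade."""
    if cvp_mmhg < 8:
        return "500 ml crystalloid bolus q30min (target CVP 8-12 mmHg)"
    if map_mmhg < 65:
        return "vasopressors (target MAP >= 65 mmHg)"
    if map_mmhg > 90:
        return "vasodilators (target MAP <= 90 mmHg)"
    if scvo2_pct < 70 and hct_pct < 30:
        return "transfuse RBCs (target Hct >= 30%)"
    if scvo2_pct < 70:
        return "dobutamine, uptitrated until ScvO2 >= 70% or 20 ug/kg/min"
    return "all targets met: continue monitoring"
```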

The primary outcome was in-hospital mortality. Secondary endpoints included: resuscitation end points, organ-dysfunction scores, coagulation-related variables, administered treatments, and consumption of healthcare resources.

130 patients were randomized to EGDT, and 133 to standard therapy. There were no differences in baseline characteristics. There was no group difference in the prevalence of antibiotics given within the first 6 hours. Standard-therapy patients spent 6.3 ± 3.2 hours in the ED, whereas EGDT patients spent 8.0 ± 2.1 (p < 0.001).

In-hospital mortality was 46.5% in the standard-therapy group, and 30.5% in the EGDT group (p = 0.009, NNT 6.25). 28-day and 60-day mortalities were also improved in the EGDT group. See Table 3.
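
The reported NNT follows directly from the absolute risk reduction between these two mortality rates:

```python
standard_mortality = 0.465                   # in-hospital mortality, standard therapy
egdt_mortality = 0.305                       # in-hospital mortality, EGDT
arr = standard_mortality - egdt_mortality    # absolute risk reduction = 0.160
nnt = 1 / arr                                # number needed to treat = 6.25
print(f"ARR = {arr:.3f}, NNT = {nnt:.2f}")   # ARR = 0.160, NNT = 6.25
```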

During the initial six hours of resuscitation, there was no significant group difference in mean heart rate or CVP. MAP was higher in the EGDT group (p < 0.001), but all patients in both groups reached a MAP ≥ 65. ScvO2 ≥ 70% was met by 60.2% of standard-therapy patients and 94.9% of EGDT patients (p < 0.001). A combination endpoint of achievement of CVP, MAP, and UOP (≥ 0.5 cc/kg/hr) goals was met by 86.1% of standard-therapy patients and 99.2% of EGDT patients (p < 0.001). Standard-therapy patients had lower ScvO2 and greater base deficit, while lactate and pH values were similar in both groups.

During the period of 7 to 72 hours, the organ-dysfunction scores of APACHE II, SAPS II, and MODS were higher in the standard-therapy group (see Table 2). The prothrombin time, fibrin-split products concentration, and d-dimer concentrations were higher in the standard-therapy group, while PTT, fibrinogen concentration, and platelet counts were similar.

During the initial six hours, EGDT patients received significantly more fluids, pRBCs, and inotropic support than standard-therapy patients. Rates of vasopressor use and mechanical ventilation were similar. During the period of 7 to 72 hours, standard-therapy patients received more fluids, pRBCs, and vasopressors than the EGDT group, and they were more likely to be intubated and to have pulmonary-artery catheterization. Rates of inotrope use were similar. Overall, during the first 72 hrs, standard-therapy patients were more likely to receive vasopressors, be intubated, and undergo pulmonary-artery catheterization. EGDT patients were more likely to receive pRBC transfusion. There was no group difference in total volume of fluid administration or inotrope use. Regarding utilization, there were no group differences in mean duration of vasopressor therapy, mechanical ventilation, or length of stay. Among patients who survived to discharge, standard-therapy patients spent longer in the hospital than EGDT patients (18.4 ± 15.0 vs. 14.6 ± 14.5 days, respectively, p = 0.04).

In conclusion, early goal-directed therapy reduced in-hospital mortality in patients presenting to the ED with severe sepsis or septic shock when compared with usual care. In their discussion, the authors note that “when early therapy is not comprehensive, the progression to severe disease may be well under way at the time of admission to the intensive care unit” (1376).

The Rivers trial has been cited over 10,500 times. It has been widely discussed and dissected for decades. Most importantly, it helped catalyze a then-ongoing paradigm shift in what constitutes “usual care” in sepsis. As noted by our own Drs. Sonti and Vinayak in their Georgetown Critical Care Top 40: “Though we do not use the ‘Rivers protocol’ as written, concepts (timely resuscitation) have certainly infiltrated our ‘standard of care’ approach.” The Rivers trial evaluated the effect of a bundle (multiple interventions). It was a relatively complex protocol, and it has been recognized that the transfusion of blood to Hgb > 10 g/dL may have caused significant harm. In aggregate, the most critical elements of the modern initial resuscitation in sepsis are early administration of antibiotics (notably not protocolized by Rivers), ideally within the first hour, and the aggressive administration of IV fluids (now usually 30 cc/kg of crystalloid within the first 3 hours of presentation).

More recently, there have been three large RCTs of EGDT versus usual care and/or protocols that used some of the EGDT targets: ProCESS (2014, USA), ARISE (2014, Australia), and ProMISe (2015, UK). In general terms, EGDT provided no mortality benefit compared to usual care. Prospectively, the authors of these three trials planned a meta-analysis – the 2017 PRISM study – which concluded that “EGDT did not result in better outcomes than usual care and was associated with higher hospitalization costs across a broad range of patient and hospital characteristics.” Although patients in the Rivers trial were sicker than those of ProCESS/ARISE/ProMISe, the subgroup analysis of PRISM did not find EGDT to be more beneficial in sicker patients. Overall, the PRISM authors noted that “it remains possible that general advances in the provision of care for sepsis and septic shock, to the benefit of all patients, explain part or all of the difference in findings between the trial by Rivers et al. and the more recent trials.”

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Life in The Fast Lane
4. Georgetown Critical Care Top 40
5. “A randomized trial of protocol-based care for early septic shock” (ProCESS). NEJM 2014.
6. “Goal-directed resuscitation for patients with early septic shock” (ARISE). NEJM 2014.
7. “Trial of early, goal-directed resuscitation for septic shock” (ProMISe). NEJM 2015.
8. “Early, Goal-Directed Therapy for Septic Shock – A Patient-level Meta-Analysis” PRISM. NEJM 2017.
9. Surviving Sepsis Campaign
10. UpToDate, “Evaluation and management of suspected sepsis and septic shock in adults”

Summary by Duncan F. Moore, MD

Image Credit: By Clinical_Cases, [CC BY-SA 2.5] via Wikimedia Commons

Week 11 – AFFIRM

“A Comparison of Rate Control and Rhythm Control in Patients with Atrial Fibrillation”

by the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) Investigators

N Engl J Med. 2002 Dec 5;347(23):1825-33. [free full text]

It seems like the majority of patients with atrial fibrillation that we encounter today in the inpatient setting are being treated with a rate-control strategy, as opposed to a rhythm-control strategy. There was a time when both approaches were considered acceptable, and perhaps rhythm control was even the preferred initial strategy. The AFFIRM trial was the landmark study to address this debate.

The trial randomized patients with atrial fibrillation (judged “likely to be recurrent”) aged 65 or older “or who had other risk factors for stroke or death” to either 1) a rhythm-control strategy with one or more drugs from a pre-specified list and/or cardioversion to achieve sinus rhythm or 2) a rate-control strategy with beta-blockers, CCBs, and/or digoxin to a target resting HR ≤ 80 and a six-minute walk test HR ≤ 110. The primary endpoint was death during follow-up. The major secondary endpoint was a composite of death, disabling stroke, disabling anoxic encephalopathy, major bleeding, and cardiac arrest.

4060 patients were randomized. Death occurred in 26.7% of rhythm-control patients versus 25.9% of rate-control patients (HR 1.15, 95% CI 0.99 – 1.34, p = 0.08). The composite secondary endpoint occurred in 32.0% of rhythm-control patients versus 32.7% of rate-control patients (p = 0.33). The rhythm-control strategy was associated with a higher risk of death among patients older than 65 and patients with CAD (see Figure 2). Additionally, rhythm-control patients were more likely to be hospitalized during follow-up (80.1% vs. 73.0%, p < 0.001) and to develop torsades de pointes (0.8% vs. 0.2%, p = 0.007).

This trial demonstrated that a rhythm-control strategy in atrial fibrillation offers no mortality benefit over a rate-control strategy. At the time of publication, the authors wrote that rate control was an “accepted, though often secondary alternative” to rhythm control. Their study clearly demonstrated that there was no significant mortality benefit to either strategy and that hospitalizations were greater in the rhythm-control group. In subgroup analysis, rhythm control led to higher mortality among the elderly and those with CAD. Notably, 37.5% of rhythm-control patients had crossed over to a rate-control strategy by 5 years of follow-up, whereas only 14.9% of rate-control patients had switched to rhythm control.

But what does this study mean for our practice today? Generally speaking, rate control is preferred in most patients, particularly the elderly and patients with CHF, whereas rhythm control may be pursued in patients with persistent symptoms despite rate control, patients unable to achieve rate control on AV nodal agents alone, and patients younger than 65. Both the AHA/ACC (2014) and the European Society of Cardiology (2016) guidelines have extensive recommendations that detail specific patient scenarios.

Further Reading / References:
1. Cardiologytrials.org
2. Wiki Journal Club
3. 2 Minute Medicine
4. Visual abstract @ Visualmed

Summary by Duncan F. Moore, MD

Image Credit: Drj via Wikimedia Commons

Week 10 – CLOT

“Low-Molecular-Weight Heparin versus a Coumarin for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer”

by the Randomized Comparison of Low-Molecular-Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators

N Engl J Med. 2003 Jul 10;349(2):146-53. [free full text]

Malignancy is a pro-thrombotic state, and patients with cancer are at significant and sustained risk of venous thromboembolism (VTE) even when treated with warfarin. Warfarin is a suboptimal drug that requires careful monitoring, and its effective administration is challenging in the setting of cancer-associated difficulties with oral intake, end-organ dysfunction, and drug interactions. The 2003 CLOT trial was designed to evaluate whether treatment with low-molecular-weight heparin (LMWH) was superior to treatment with a vitamin K antagonist (VKA) in the prevention of recurrent VTE.

The study randomized adults with active cancer and newly diagnosed symptomatic DVT or PE to treatment with either dalteparin subQ daily (200 IU/kg daily x1 month, then 150 IU/kg daily x5 months) or a vitamin K antagonist x6 months (target INR 2.5, with 5-7 day LMWH bridge). The primary outcome was the recurrence of symptomatic DVT or PE within 6 months of follow-up. Secondary outcomes included major bleed, any bleeding, and all-cause mortality.
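
As a small illustration of the dalteparin schedule described above (the function name is ours, this ignores any dose caps or adjustments in the full protocol, and it is not dosing guidance):

```python
def clot_dalteparin_daily_dose_iu(weight_kg: float, month: int) -> float:
    """Daily dalteparin dose per the CLOT schedule: 200 IU/kg for month 1,
    then 150 IU/kg for months 2-6. Illustrative only."""
    if not 1 <= month <= 6:
        raise ValueError("CLOT treatment lasted 6 months")
    return (200 if month == 1 else 150) * weight_kg
```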

338 patients were randomized to the LMWH group, and 338 were randomized to the VKA group. Baseline characteristics were similar between the two groups. 90% of patients had solid malignancies, and 67% of patients had metastatic disease. Within the VKA group, INR was estimated to be therapeutic 46% of the time, subtherapeutic 30% of the time, and supratherapeutic 24% of the time. Within the six-month follow-up period, symptomatic VTE occurred in 8.0% of the dalteparin group and 15.8% of the VKA group (HR 0.48, 95% CI 0.30-0.77, p = 0.002; NNT = 12.9). The Kaplan-Meier estimate of recurrent VTE at 6 months was 9% in the dalteparin group and 17% in the VKA group. 6% of the dalteparin group developed major bleeding versus 4% of the VKA group (p = 0.27). 14% of the dalteparin group sustained any type of bleeding event versus 19% of the VKA group (p = 0.09). Mortality at 6 months was 39% in the dalteparin group versus 41% in the VKA group (p = 0.53).

In summary, treatment of VTE in cancer patients with low-molecular-weight heparin reduced the incidence of recurrent VTE relative to treatment with a vitamin K antagonist. Notably, this reduction in VTE recurrence was achieved without an increase in bleeding risk, but it did not translate into a mortality benefit. This trial initiated a paradigm shift in the treatment of VTE in cancer. LMWH became the standard of care, although cost and convenience may have limited access and adherence to this treatment.

Until recently, no trial had directly compared a DOAC to LMWH in the prevention of recurrent VTE in malignancy. In an open-label, noninferiority trial, the Hokusai VTE Cancer Investigators demonstrated that the oral Xa inhibitor edoxaban (Savaysa) was noninferior to dalteparin with respect to a composite outcome of recurrent VTE or major bleeding. The 2018 SELECT-D trial compared rivaroxaban (Xarelto) to dalteparin and demonstrated a reduced rate of recurrence among patients treated with rivaroxaban (cumulative 6-month event rate of 4% versus 11%, HR 0.43, 95% CI 0.19–0.99) with no difference in rates of major bleeding but increased “clinically relevant nonmajor bleeding” within the rivaroxaban group.

Further Reading/References:
1. CLOT @ Wiki Journal Club
2. 2 Minute Medicine
3. UpToDate, “Treatment of venous thromboembolism in patients with malignancy”
4. Hokusai VTE Cancer Trial @ Wiki Journal Club
5. “Edoxaban for the Treatment of Cancer-Associated Venous Thromboembolism,” NEJM 2017
6. “Comparison of an Oral Factor Xa Inhibitor With Low Molecular Weight Heparin in Patients With Cancer With Venous Thromboembolism: Results of a Randomized Trial (SELECT-D).” J Clin Oncol 2018.

Summary by Duncan F. Moore, MD

Image Credit: By Westgate EJ, FitzGerald GA, CC BY 2.5, via Wikimedia Commons

Week 9 – NICE-SUGAR

“Intensive versus Conventional Glucose Control in Critically Ill Patients”

by the Normoglycemia in Intensive Care Evaluation–Survival Using Glucose Algorithm Regulation (NICE-SUGAR) investigators

N Engl J Med 2009;360:1283-97. [free full text]

On the wards we often hear 180 mg/dL used as the upper limit of acceptable blood glucose, with the understanding that tighter glucose control in inpatients can lead to more harm than benefit. The relevant evidence base comes from ICU populations, with scant direct data in non-ICU patients. The 2009 NICE-SUGAR study is the largest and best trial in this evidence base.

The study randomized ICU patients (expected to require 3 or more days of ICU-level care) to either “intensive” glucose control (target glucose 81 to 108 mg/dL) or conventional glucose control (target of less than 180 mg/dL). The primary outcome was 90-day all-cause mortality.

6104 patients were randomized to the two arms, and both groups had similar baseline characteristics. 27.5% of patients in the intensive-control group died versus 24.9% in the conventional-control group (OR 1.14, 95% CI 1.02-1.28, p = 0.02). Severe hypoglycemia (< 40 mg/dL) occurred in 6.8% of intensive-control patients but only 0.5% of conventional-control patients.

In conclusion, intensive glucose control increases mortality in ICU patients. The fact that only 20% of these patients had diabetes mellitus suggests that much of the hyperglycemia treated in this study (97% of the intensive group received insulin, versus 69% of the conventional group) was from stress, critical illness, and corticosteroid use. For ICU patients, intensive insulin therapy is clearly harmful, but the ideal target glucose range remains controversial and per expert opinion appears to be 140-180 mg/dL. For non-ICU inpatients with or without diabetes mellitus, the ideal glucose target is also unclear – the ADA recommends 140-180 mg/dL, and the Endocrine Society recommends a pre-meal target of < 140 mg/dL and random levels < 180 mg/dL.

References / Further Reading:
1. ADA Standards of Medical Care in Diabetes 2016 (skip to page S99)
2. Wiki Journal Club
3. Visual Abstract @ VisualMed

Summary by Duncan F. Moore, MD

Image Credit: Dietmar Rabich / Wikimedia Commons / “Würfelzucker — 2018 — 3564” / CC BY-SA 4.0

Week 8 – FUO

“Fever of Unexplained Origin: Report on 100 Cases”

Medicine (Baltimore). 1961 Feb;40:1-30. [free full text]

In our modern usage, fever of unknown origin (FUO) refers to a persistent unexplained fever despite an adequate medical workup. The most commonly used criteria for this diagnosis stem from the 1961 series by Petersdorf and Beeson.

This study analyzed a prospective cohort of patients evaluated at Yale’s hospital for FUO between 1952 and 1957. Their FUO criteria: 1) illness of more than three weeks’ duration, 2) fever higher than 101° F on several occasions, and 3) diagnosis uncertain after one week of study in hospital. After 126 cases had been noted, retrospective investigation was undertaken to determine the ultimate etiologies of the fevers. The authors winnowed this group to 100 cases based on availability of follow-up data and the exclusion of cases that “represented combinations of such common entities as urinary tract infection and thrombophlebitis.”

In 93 cases, “a reasonably certain diagnosis was eventually possible.” 6 of the 7 undiagnosed patients ultimately made a full recovery. Underlying etiologies (see table 1 on page 3) included: infectious 36% (with TB in 11%), neoplastic diseases 19%, collagen disease (e.g. SLE) 13%, pulmonary embolism 3%, benign non-specific pericarditis 2%, sarcoidosis 2%, hypersensitivity reaction 4%, cranial arteritis 2%, periodic disease 5%, miscellaneous disease 4%, factitious fever 3%, no diagnosis 7%.

Clearly, diagnostic modalities have improved markedly since this 1961 study. However, the core etiologies of infection, malignancy, and connective tissue disease/non-infectious inflammatory disease remain most prominent, while the percentage of patients with no ultimate diagnosis has been increasing (for example, see PMIDs 9413425, 12742800, and 17220753). Modifications to the 1961 criteria have been proposed (for example: 1 week duration of hospital stay not required if certain diagnostic measures have been performed) and implemented in recent FUO trials. One modern definition of FUO: fever ≥ 38.3° C, lasting at least 2-3 weeks, with no identified cause after three days of hospital evaluation or three outpatient visits. Per UpToDate, the following minimum diagnostic workup is recommended in suspected FUO: blood cultures, ESR or CRP, LDH, HIV, RF, heterophile antibody test, CK, ANA, TB testing, SPEP, and CT of abdomen and chest.
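
For illustration, that modern working definition maps onto a simple check (field names are our own; the duration criterion is taken at its lower bound):

```python
def meets_modern_fuo_definition(tmax_c: float, duration_weeks: float,
                                inpatient_days: int, outpatient_visits: int,
                                cause_identified: bool) -> bool:
    """Fever >= 38.3 C lasting at least 2-3 weeks (here: >= 2), with no
    cause identified after 3 days of hospital evaluation or 3 outpatient
    visits."""
    adequate_workup = inpatient_days >= 3 or outpatient_visits >= 3
    return (tmax_c >= 38.3 and duration_weeks >= 2
            and adequate_workup and not cause_identified)
```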

Further Reading/References:
1. “Fever of unknown origin (FUO). I A. prospective multicenter study of 167 patients with FUO, using fixed epidemiologic entry criteria. The Netherlands FUO Study Group.” Medicine (Baltimore). 1997 Nov;76(6):392-400.
2. “From prolonged febrile illness to fever of unknown origin: the challenge continues.” Arch Intern Med. 2003 May 12;163(9):1033-41.
3. “A prospective multicenter study on fever of unknown origin: the yield of a structured diagnostic protocol.” Medicine (Baltimore). 2007 Jan;86(1):26-38.
4. UpToDate, “Approach to the Adult with Fever of Unknown Origin”
5. “Robert Petersdorf, 80, Major Force in U.S. Medicine, Dies” The New York Times, 2006.

Summary by Duncan F. Moore, MD

Image Credit: by Menchi @ Wikimedia Commons, CC BY-SA 3.0

Week 7 – ARDSNet aka ARMA

“Ventilation with Lower Tidal Volumes as Compared with Traditional Tidal Volumes for Acute Lung Injury and the Acute Respiratory Distress Syndrome”

by the Acute Respiratory Distress Syndrome Network (ARDSNet)

N Engl J Med. 2000 May 4;342(18):1301-8. [free full text]


Acute respiratory distress syndrome (ARDS) is an inflammatory and highly morbid lung injury found in many critically ill patients. In the 1990s, it was hypothesized that overdistention of aerated lung volumes and elevated airway pressures might contribute to the severity of ARDS, and indeed some work in animal models supported this theory. Prior to the ARDSNet study, four randomized trials had been conducted to investigate the possible protective effect of ventilation with lower tidal volumes, but their results were conflicting.

The ARDSNet study randomized patients with ARDS (diagnosed within 36 hours) to either a lower initial tidal volume of 6 ml/kg of predicted body weight, downtitrated as necessary to maintain a plateau pressure ≤ 30 cm H2O, or to the “traditional” therapy of an initial tidal volume of 12 ml/kg of predicted body weight, downtitrated as necessary to maintain a plateau pressure ≤ 50 cm H2O. The primary outcomes were in-hospital mortality and ventilator-free days within the first 28 days. Secondary outcomes included number of days without organ failure, occurrence of barotrauma, and reduction in IL-6 concentration from day 0 to day 3.

861 patients were randomized before the trial was stopped early due to the increased mortality in the control arm noted during interim analysis. In-hospital mortality was 31.0% in the lower tidal volume group and 39.8% in the traditional tidal volume group (p = 0.007, NNT = 11.4). Ventilator-free days were 12±11 in the lower tidal volume group vs. 10±11 in the traditional group (p = 0.007). The lower tidal volume group had more days without organ failure (15±11 vs. 12±11, p = 0.006). There was no difference in rates of barotrauma between the two groups. The decrease in IL-6 concentration between days 0 and 3 was greater in the low tidal volume group (p < 0.001), and IL-6 concentration at day 3 was lower in the low tidal volume group (p = 0.002).

In summary, low tidal volume ventilation decreases mortality in ARDS relative to “traditional” tidal volumes. The authors felt that this study confirmed the results of prior animal models and conclusively answered the question of whether or not low tidal volume ventilation provided a mortality benefit. In fact, in the years following, low tidal volume ventilation became the standard of care, and a robust body of literature followed this study to further delineate a “lung-protective strategy.” Critics of the study noted that, at the time of the study, the “traditional” (standard of care) tidal volume in ARDS was less than the 12 ml/kg used in the comparison arm. (Non-enrolled patients at the participating centers were receiving a mean tidal volume of 10.3 ml/kg.) Thus not only was the trial making a comparison to a faulty control, but it was also potentially harming patients in the control arm. An excellent summary of the ethical issues and debate regarding this specific issue and regarding control arms of RCTs in general can be found here.

Corresponding practice point from Dr. Sonti and Dr. Vinayak and their Georgetown Critical Care Top 40: “Low tidal volume ventilation is the standard of care in patients with ARDS (P/F < 300). Use ≤ 6 ml/kg predicted body weight, follow plateau pressures, and be cautious of mixed modes in which you set a tidal volume but the ventilator can adjust and choose a larger one.”
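
Because the 6 ml/kg target is indexed to predicted body weight rather than actual weight, the calculation is worth spelling out. A minimal sketch using the standard ARDSNet PBW formula (names are our own):

```python
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    """ARDSNet predicted body weight: 50 kg (men) or 45.5 kg (women),
    plus 0.91 kg per cm of height above 152.4 cm (5 feet)."""
    return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

def target_tidal_volume_ml(height_cm: float, male: bool,
                           ml_per_kg: float = 6.0) -> float:
    """Lung-protective tidal volume target at <= 6 ml/kg of PBW."""
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# Example: a 175 cm man has PBW ~ 70.6 kg, so an initial VT of ~ 424 ml.
```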

PulmCCM is an excellent blog, and they have a nice page reviewing this topic and summarizing some of the research and guidelines that have followed.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. PulmCCM “Mechanical Ventilation in ARDS: Research Update”
4. Georgetown Critical Care Top 40, page 6
5. PulmCCM “In ARDS, substandard ventilator care is the norm, not the exception.” 2017.

Summary by Duncan F. Moore, MD

Photo Credit: Hanno H. Endres at de.wikipedia, CC BY-SA 3.0

Week 6 – SOLVD

“Effect of Enalapril on Survival in Patients with Reduced Left Ventricular Ejection Fractions and Congestive Heart Failure”

by the Studies of Left Ventricular Dysfunction (SOLVD) Investigators

N Engl J Med. 1991 Aug 1;325(5):293-302. [free full text]

Heart failure with reduced ejection fraction (HFrEF) is a very common and highly morbid condition. We now know that blockade of the renin-angiotensin-aldosterone system (RAAS) with an ACEi or ARB is a cornerstone of modern HFrEF treatment. The 1991 SOLVD trial played an integral part in demonstrating the benefit of and broadening the indication for RAAS blockade in HFrEF.

The trial enrolled patients with HFrEF and LVEF ≤ 35% who were already on treatment (but not on an ACEi) and had Cr ≤ 2.0 mg/dL and randomized them to treatment with enalapril BID (starting at 2.5 mg and uptitrated as tolerated to 20 mg BID) or placebo BID (again, starting at 2.5 mg and uptitrated as tolerated to 20 mg BID). Of note, there was a single-blind run-in period with enalapril in all patients, followed by a single-blind placebo run-in period. Finally, patients were randomized to their actual study drug in a double-blind fashion. The primary outcomes were all-cause mortality and death from or hospitalization for CHF. Secondary outcomes included hospitalization for CHF, all-cause hospitalization, cardiovascular mortality, and CHF-related mortality.

2569 patients were randomized. Follow-up duration ranged from 22 to 55 months. 510 (39.7%) placebo patients died during follow-up compared to 452 (35.2%) enalapril patients (relative risk reduction of 16% per log-rank test, 95% CI 5-26%, p = 0.0036). See Figure 1 for the relevant Kaplan-Meier curves. 736 (57.3%) placebo patients died or were hospitalized for CHF during follow-up compared to 613 (47.7%) enalapril patients (relative risk reduction 26%, 95% CI 18-34, p < 0.0001). Hospitalizations for heart failure, all-cause hospitalizations, cardiovascular deaths, and deaths due to heart failure were all significantly reduced in the enalapril group. 320 placebo patients discontinued the study drug versus only 182 patients in the enalapril group. Enalapril patients were significantly more likely to report dizziness, fainting, and cough. There was no difference in the prevalence of angioedema.

Treatment of HFrEF with enalapril significantly reduced mortality and hospitalizations for heart failure. The authors note that for every 1000 study patients treated with enalapril, approximately 50 premature deaths and 350 heart failure hospitalizations were averted. The mortality benefit of enalapril appears to be immediate and increases for approximately 24 months. Per the authors, “reductions in deaths and rates of hospitalization from worsening heart failure may be related to improvements in ejection fraction and exercise capacity, to a decrease in signs and symptoms of congestion, and also to the known mechanism of action of the agent – i.e., a decrease in preload and afterload when the conversion of angiotensin I to angiotensin II is blocked.” Strengths of this study include its double-blind, randomized design, large sample size, and long follow-up. A major limitation is that the run-in period allowed patients who did not immediately tolerate enalapril to be excluded prior to randomization.

Prior to SOLVD, studies of ACEi in HFrEF had focused on patients with severe symptoms. The 1987 CONSENSUS trial was limited to patients with NYHA class IV symptoms. SOLVD broadened the indication for ACEi treatment to a wider range of symptom severity and ejection fractions. Per the current 2013 ACCF/AHA guidelines for the management of heart failure, ACEi/ARB therapy is a Class I recommendation in all patients with HFrEF in order to reduce morbidity and mortality.

Further Reading/References:
1. Wiki Journal Club
2. 2 Minute Medicine
3. Effects of enalapril on mortality in severe congestive heart failure – Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). 1987.
4. 2013 ACCF/AHA guideline for the management of heart failure: executive summary

Summary by Duncan F. Moore, MD

Week 5 – IDNT

“Renoprotective Effect of the Angiotensin-Receptor Antagonist Irbesartan in Patients with Nephropathy Due to Type 2 Diabetes”

aka the Irbesartan Diabetic Nephropathy Trial (IDNT)

N Engl J Med. 2001 Sep 20;345(12):851-60. [free full text]

Diabetes mellitus is the most common cause of ESRD in the US. In 1993, a landmark study in NEJM demonstrated that captopril (vs. placebo) slowed the deterioration in renal function in patients with T1DM. However, prior to this 2001 study, no study had addressed definitively whether a similar improvement in renal outcomes could be achieved with RAAS blockade in patients with T2DM. Irbesartan (Avapro) is an angiotensin II receptor blocker that was first approved in 1997 for the treatment of hypertension. Its marketer, Bristol-Myers Squibb, sponsored this trial in hopes of broadening the market for its relatively new drug.

This trial randomized patients with T2DM, hypertension, and nephropathy (per proteinuria and elevated Cr) to treatment with either irbesartan, amlodipine, or placebo. The drug in each arm was titrated to achieve a target SBP ≤ 135, and all patients were allowed non-ACEi/non-ARB/non-CCB drugs as needed. The primary outcome was a composite of the doubling of serum Cr, onset of ESRD, or all-cause mortality. Secondary outcomes included individual components of the primary outcome and a composite cardiovascular outcome.

1715 patients were randomized. The mean blood pressure after the baseline visit was 140/77 in the irbesartan group, 141/77 in the amlodipine group, and 144/80 in the placebo group (p = 0.001 for pairwise comparisons of MAP between irbesartan or amlodipine and placebo). Regarding the primary composite renal endpoint, the unadjusted relative risk was 0.80 (95% CI 0.66-0.97, p = 0.02) for irbesartan vs. placebo, 1.04 (95% CI 0.86-1.25, p = 0.69) for amlodipine vs. placebo, and 0.77 (95% CI 0.63-0.93, p = 0.006) for irbesartan vs. amlodipine. The groups also differed with respect to individual components of the primary outcome. The unadjusted relative risk of creatinine doubling was 33% lower among irbesartan patients than among placebo patients (p = 0.003) and was 37% lower than among amlodipine patients (p < 0.001). The relative risks of ESRD and all-cause mortality did not differ significantly among the groups. There were no significant group differences with respect to the composite cardiovascular outcome. Importantly, a sensitivity analysis was performed which demonstrated that the conclusions of the primary analysis were not impacted significantly by adjustment for mean arterial pressure achieved during follow-up.

In summary, irbesartan treatment in T2DM resulted in superior renal outcomes when compared to both placebo and amlodipine. This beneficial effect was independent of blood pressure lowering. This was a well-designed, double-blind, randomized, controlled trial. However, it was industry-sponsored, and in retrospect, its choice of study drug seems quaint. The direct conclusion of this trial is that irbesartan is renoprotective in T2DM. In the discussion of IDNT, the authors hypothesize that “the mechanism of renoprotection by agents that block the action of angiotensin II may be complex, involving hemodynamic factors that lower the intraglomerular pressure, the beneficial effects of diminished proteinuria, and decreased collagen formation that may be related to decreased stimulation of transforming growth factor beta by angiotensin II.” In September 2002, on the basis of this trial, the FDA broadened the official indication of irbesartan to include the treatment of type 2 diabetic nephropathy. This trial was published concurrently in NEJM with the RENAAL trial. RENAAL was a similar trial of losartan vs. placebo in T2DM and demonstrated a similar reduction in the doubling of serum creatinine as well as a 28% reduction in progression to ESRD. In conjunction with the original 1993 ACEi in T1DM study, these two 2001 ARB in T2DM studies led to the overall notion of a renoprotective class effect of ACEis/ARBs in diabetes. Enalapril and lisinopril’s patents expired in 2000 and 2002, respectively. Shortly afterward, generic, once-daily ACE inhibitors entered the US market. Ultimately, such drugs ended up commandeering much of the diabetic-nephropathy-in-T2DM market share for which irbesartan’s owners had hoped.

Further Reading/References:
1. “The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group.” NEJM 1993.
2. CSG Captopril Trial @ Wiki Journal Club
3. IDNT @ Wiki Journal Club
4. IDNT @ 2 Minute Medicine
5. US Food and Drug Administration, New Drug Application #020757
6. RENAAL @ Wiki Journal Club
7. RENAAL @ 2 Minute Medicine

Summary by Duncan F. Moore, MD