Variation in methods, results and reporting in electronic health record-based studies evaluating routine care in gout: A systematic review.
2019
Abstract
To perform a systematic review examining the variation in methods, results, reporting and risk of bias in electronic health record (EHR)-based studies evaluating the management of a common musculoskeletal disease, gout.

Two reviewers systematically searched MEDLINE, Scopus, Web of Science, CINAHL, PubMed, EMBASE and Google Scholar for all EHR-based studies published by February 2019 investigating gout pharmacological treatment. Information was extracted on study design, eligibility criteria, definitions, medication usage, effectiveness and safety data, comprehensiveness of reporting (RECORD), and Cochrane risk of bias (registered with PROSPERO, CRD42017065195).

We screened 5,603 titles/abstracts and 613 full texts, and selected 75 studies including 1.9 million gout patients. Gout diagnosis was defined in 26 ways across the studies, most commonly using a single diagnostic code (n = 31, 41.3%). 48.4% did not specify a disease-free period before 'incident' diagnosis. Medication use was suboptimal and varied with disease definition, while results regarding effectiveness and safety were broadly similar across studies despite variability in inclusion criteria. Comprehensiveness of reporting was variable, ranging from 73% (55/75) of studies appropriately discussing the limitations of EHR data use to 5% (4/75) reporting key data cleaning steps. Risk of bias was generally low.

The wide variation in case definitions and medication-related analysis among EHR-based studies has implications for reported medication use. This is amplified by variable reporting comprehensiveness and the limited consideration of EHR-relevant biases (e.g. data adequacy) in study assessment tools. We recommend accounting for these biases and performing a sensitivity analysis on case definitions, and suggest changes to assessment tools to foster this.

Reference Key | crossfield2019variationplos
Authors | Crossfield, Samantha S R; Lai, Lana Yin Hui; Kingsbury, Sarah R; Baxter, Paul; Johnson, Owen; Conaghan, Philip G; Pujades-Rodriguez, Mar
Journal | PLoS ONE
Year | 2019
DOI | 10.1371/journal.pone.0224272