The Law of Attrition
2005
Abstract
In an ongoing effort of this Journal to develop and further the theories, models, and best practices around eHealth research, this paper argues for the need for a “science of attrition”, that is, a need to develop models for discontinuation of eHealth applications and the related phenomenon of participants dropping out of eHealth trials. What I call the “law of attrition” here is the observation that in any eHealth trial a substantial proportion of users drop out before completion or stop using the application. This feature distinguishes eHealth trials from, for example, drug trials. The traditional clinical trial and evidence-based medicine paradigm stipulates that high dropout rates make trials less credible. Consequently, eHealth researchers tend to gloss over high dropout rates, or not to publish their study results at all, because they see their studies as failures. However, for many eHealth trials, particularly those conducted on the Internet and those involving self-help applications, high dropout rates may be a natural and typical feature. Usage metrics and determinants of attrition should be highlighted, measured, analyzed, and discussed. This includes analyzing and reporting the characteristics of the subpopulation for which the application eventually “works”, ie, those who stay in the trial and use it. For the question of what works and what does not, such attrition measures are as important to report as pure efficacy measures from intention-to-treat (ITT) analyses. In cases of high dropout rates, efficacy measures underestimate the impact of an application on the subpopulation that continues to use it. Methods for analyzing attrition curves can be drawn from survival analysis, eg, Kaplan-Meier analysis and proportional hazards regression (the Cox model).
Measures to be reported include the relative risk of dropping out or of stopping the use of an application, as well as a “usage half-life”, and models reporting demographic and other factors predicting usage discontinuation in a population. Differential dropout or usage rates between two interventions could be a standard metric for the “usability efficacy” of a system. A “run-in and withdrawal” trial design is suggested as a methodological innovation for Internet-based trials with a high number of initial dropouts/nonusers and a stable group of hardcore users.
[J Med Internet Res 2005;7(1):e11]
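The survival-analysis measures the abstract proposes (attrition curves and a “usage half-life”) can be sketched with a minimal Kaplan-Meier product-limit estimator. This is an illustrative sketch, not code from the paper: the participant data below are made up, and a participant is treated as an “event” (dropout) when they stop using the application, or as censored if they were still using it at last observation.

```python
# Kaplan-Meier estimate of "usage survival": the fraction of trial
# participants still using an eHealth application after each week.
# Pure-Python sketch with hypothetical data.

def kaplan_meier(durations, events):
    """Return (time, survival) points of the product-limit estimator.

    durations: weeks of observed usage per participant
    events:    True if the participant stopped using the app (dropout),
               False if censored (still using at last observation)
    """
    pairs = sorted(zip(durations, events))
    n_at_risk = len(pairs)
    survival = 1.0
    curve = [(0, 1.0)]
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        dropouts = removed = 0
        # Group all observations tied at time t
        while i < len(pairs) and pairs[i][0] == t:
            if pairs[i][1]:
                dropouts += 1
            removed += 1
            i += 1
        if dropouts:
            survival *= 1 - dropouts / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

def usage_half_life(curve):
    """First time at which estimated usage survival falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # more than half still using at last observation

# Hypothetical trial: 12 participants, weeks of usage observed
weeks   = [1, 2, 2, 3, 4, 5, 6, 6, 8, 10, 12, 12]
stopped = [True, True, False, True, True, True,
           False, True, True, True, False, False]

curve = kaplan_meier(weeks, stopped)
for t, s in curve:
    print(f"week {t:>2}: {s:.3f} still using")
print("usage half-life:", usage_half_life(curve), "weeks")
```

Comparing such curves between two intervention arms (eg, via a log-rank test or a Cox model, as the abstract suggests) would then yield the differential attrition metric proposed as a measure of “usability efficacy”.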
| Reference Key | eysenbach2005journalthe |
|---|---|
| Authors | Gunther Eysenbach |
| Journal | Journal of Medical Internet Research |
| Year | 2005 |
| DOI | doi:10.2196/jmir.7.1.e11 |
| PMID | pmid:15829473 |
| PMC | pmc1550631 |
| URL | |
| Keywords | eHealth; Internet; JMIR; open access publishing; medical research; medical informatics; humans; patient dropouts; delivery of health care / organization & administration; health services research* / statistics & numerical data; internet / statistics & numerical data*; medical informatics applications* |