Assessing the quality of feedback to general internal medicine residents in a competency-based environment.
Abstract
Competency-Based Medical Education (CBME) is designed to use workplace-based assessment (WBA) tools to provide observed assessment and feedback on resident competence. Moreover, WBAs are expected to provide evidence beyond that of more traditional mid- or end-of-rotation assessments [e.g., In-Training Evaluation Reports (ITERs)]. In this study, we investigated the quality of feedback in General Internal Medicine (GIM) by comparing WBA and ITER assessment tools.

WBAs are hypothesized to improve written and numerical feedback to support the development and documentation of competence. We investigated residents' and preceptors' perceptions of WBA validity, usability, and reliability, and the extent to which WBAs differentiate residents' performance when compared to ITERs.

We used a mixed-methods approach over a three-year period, including perspectives gathered from focus groups and interviews, along with numerical and narrative comparisons between WBAs and ITERs in one GIM program.

Our quantitative analysis of feedback from seven residents' clinical assessments showed that overall rates of actionable feedback, for both ITERs and WBAs, were low (26%), with only 9% of the total providing an improvement strategy. The provision of quality feedback did not differ significantly between tools, although WBAs provided more actionable feedback and ITERs provided more strategies. Residents and preceptors indicated that the narrative component of feedback was more constructive and effective than numerical scores, and both groups perceived the focus on specific workplace-based feedback as more effective than ITERs.

Participants in this study viewed narrative, actionable, and specific feedback as essential, with an overall preference for written feedback over numerical assessments. However, our quantitative analyses showed that specific, actionable feedback was rarely documented, despite both residents and preceptors emphasizing its importance for developing competency. Neither formative WBAs nor summative ITERs clearly provided better feedback, and both may still have a role in overall resident evaluation. Participants' views on roles and responsibilities also differed: residents stated that preceptors should be responsible for initiating assessments, and vice versa. These results reveal an incongruence between resident and preceptor perceptions and practice around giving feedback, and emphasize opportunities for programs adopting and implementing CBME to address how best to support residents and frontline clinical teachers.
| Field | Value |
|---|---|
| Reference Key | marcotte2019assessingcanadian |
| Authors | Marcotte, Laura; Egan, Rylan; Soleas, Eleftherios; Dalgarno, Nancy; Norris, Matt; Smith, Chris |
| Journal | Canadian Medical Education Journal |
| Year | 2019 |
| DOI | DOI not found |
| URL | URL not found |
| Keywords | |