Traditional Poster Round
Background: Performance in anesthesia can be assessed by direct observation of behaviors and/or by knowledge obtained from interviewing the subject after a simulation scenario. When an observed clinical behavior appears contrary to what is expected, understanding the subject’s cognitive processes may improve the ability to assess performance fairly. Some performance domains are difficult to assess by direct observation because they require understanding of cognitive processes that are not observable. A previously reported and validated assessment rubric developed for residents (HARP) has 5 performance domains vital to safe anesthesia practice: 3 clinically oriented (formulating a clear anesthetic plan, implementing the plan based on changing conditions, and communication) and 2 associated with reflective practice.1,2 Following each of 7 simulation scenarios, 3 standardized post-encounter questions were asked and the encounter was recorded (Figure 1). Raters scored the performance domains after viewing both the simulation and the same 3 standardized post-encounter questions. Two of the questions probed ‘knowledge of limitations’ and ‘ways to improve future performance’, which are vital for physicians but not usually directly observable. The American Board of Anesthesiology expects residency programs to demonstrate that residents are competent in these skills.
Research Question: We hypothesized that performance ratings would be affected by post-encounter probing, particularly for the reflective domains that are not easily assessed by direct observation.
Methodology: Twenty HARP assessments of first-year anesthesia residents, each consisting of 7 scenarios, were rated after viewing the scenario alone and then again after viewing the 3 standardized post-encounter questions. Three raters were trained and calibrated on the HARP scoring rubric. Of the 280 possible scenario ratings (7 scenarios x 20 assessments x 2 ratings), 242 were used for analysis; the missing scenarios were not completed due to time constraints or technical difficulties. After viewing a scenario, each rater scored the 5 performance domains. After viewing the video of the standardized, 3-question post-encounter interview, they re-rated the 5 performance domains. Using a paired-samples t-test, we compared each rater’s scores after viewing the scenario alone with the same rater’s scores after viewing both the scenario and the post-encounter questions. Individual domains were analyzed, as were the grouped 3 clinical and 2 reflective domains. For the reflective domains, we compared how often raters elected not to score the scenario with vs. without the post-encounter questions.
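For illustration only, a minimal Python sketch of the two comparisons described above is given here; it is not the authors’ analysis code. The paired-samples t-test is named in the Methodology, but the abstract does not name the test used for the “not rated” frequency comparison, so the chi-square test below is an assumption; all variable names and values are hypothetical placeholders.

import numpy as np
from scipy import stats

# Hypothetical paired ratings for one domain: the same rater scoring the same
# scenario, first without and then with the post-encounter interview.
scores_scenario_only = np.array([3, 4, 2, 5, 3, 4, 4, 3])   # placeholder values
scores_with_interview = np.array([3, 4, 3, 5, 3, 4, 4, 3])  # placeholder values

# Paired-samples t-test, as described in the Methodology.
t_stat, p_value = stats.ttest_rel(scores_scenario_only, scores_with_interview)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Reflective domains: compare how often raters declined to score with vs. without
# the interview. A chi-square test on counts is one plausible approach (assumption).
not_rated_counts = np.array([[14, 86],   # without interview: [not rated, rated] (illustrative)
                             [2, 98]])   # with interview:    [not rated, rated] (illustrative)
chi2, p, dof, expected = stats.chi2_contingency(not_rated_counts)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")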
Results: There was no difference in average scores for the 3 clinically oriented domains across the 7 scenarios whether they were scored with or without viewing the post-encounter interview. The frequency of ratings not made was significantly higher without the post-encounter interview than with it for knowledge of limitations (2% with vs. 14% without, p < 0.0002) and ways to improve performance (9% with vs. 44% without, p < 0.0002) (Figure 2).
Discussion/Conclusions: Results suggest that: 1) the post-scenario interviews may not have added information that affected the scoring of the 3 clinically oriented domains, and 2) the ability to assess “knowledge of limitations” and “ways to improve future performance” is substantively enhanced by a structured interview that addresses these constructs.