Traditional Poster Round
Background: Pediatric patients are at high risk of adverse events (AEs) (1,2). Root cause analysis (RCA) is used in health care to examine AEs. Despite the widespread use of RCA, AE rates have not changed significantly (3). Best practices have not been established for implementing or tracking recommendations arising from RCA.
Research Question: Can testing and modifying interventions through simulation help optimize recommendations made after AEs?
Methodology: In the first phase of the study, we used simulation to analyze two fictionalized AEs involving hospitalized pediatric patients and to generate recommendations for error prevention. Scenario A involved an error of commission (an inpatient drug error), and Scenario B involved detecting an error that had already occurred (a critical care drug infusion error). The recommendations generated addressed product and medication labeling, guarding against harm, and transfer of knowledge. In this study, the recommendations were tested with different health care clinicians in their own working environments. Each scenario was debriefed to understand the efficacy of the intervention and how it could be improved. Interventions were modified through subsequent simulations until data saturation was reached.
Results: Each scenario was repeated eight times. With the initial interventions in place, errors occurred in 2/8 (25%) of simulations for both cases. All interventions were then modified and optimized at least four times each based on participant feedback. The key error prevention strategies (active process changes such as drug kits and mandatory checklists, versus passive interventions such as alert signs and memory aids) were unique to each case. Error prevention resulted from repeated optimizations rather than any single intervention alone.
Discussion/Conclusions: This study used simulation to test interventions generated during a simulation-based event analysis. All interventions required modification, as clinicians often made similar errors. Debriefing highlighted that interventions focused on system changes and on guarding against harm were generally the most successful at preventing errors. Simulation and debriefing allowed a range of bedside clinicians to test the interventions in their real-life clinical environments and to provide immediate feedback for modification. This direct feedback gave a deeper understanding of the practical use of each intervention and allowed for prompt revision and re-testing.
We recommend that all hospitals use simulation to test, modify, and optimize interventions recommended through event analysis. Simulation strengthens recommendations by testing their efficacy in the real-life setting with a variety of clinicians. Ultimately, using simulation to test interventions helps ensure the right intervention is chosen before valuable resources are committed to system-wide implementation.