III-Medium: A Machine Learning Approach to Computational Understanding of Skill Criteria in Surgical Training

A recent study by the Agency for Healthcare Research and Quality (AHRQ) documented over 32,000 mostly surgery-related deaths in 2000, costing nine billion dollars and accounting for 2.4 million extra days in hospital. At the same time, economic pressures persistently push medical schools to reduce the cost of training surgeons. Good training improves surgical skill and reduces surgery-related deaths, but the need to cut training costs risks compromising training quality. Successful simulation-based surgical education and training will not only shorten the time a faculty surgeon must spend in various stages of training (hence reducing cost) but also improve the quality of training.

Reviewing the state-of-the-art research, we face two challenges: (1) automatically rating the proficiency of a resident surgeon in simulation-based training, and (2) associating skill ratings with correction procedures. To address these challenges, we present a machine-learning approach to computational understanding of surgical skills based on temporal inference over visual and motion-capture data from surgical simulation. The approach employs latent space analysis that exploits intrinsic correlations among the multiple data sources of a surgical action. This learning approach is enabled by our simulation and data-acquisition design, which ensures the clinical meaningfulness of the acquired data. The research team consists of PIs from Computer Science & Engineering and Biomedical Informatics (BMI) at Arizona State University, and from Banner Good Samaritan Medical Center in Phoenix.
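To illustrate what temporal inference over motion-capture traces can look like, the sketch below compares a trainee's wrist-angle trajectory to an expert reference using dynamic time warping (DTW). This is only a minimal illustration of temporal sequence comparison, not the project's actual method; the signals (`expert`, `novice`, `same`) are synthetic stand-ins for recorded kinematics.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic wrist-angle traces (hypothetical data, not from the study).
t = np.linspace(0, 2 * np.pi, 100)
expert = np.sin(t)                                        # reference execution
same = np.sin(t + 0.05)                                   # near-identical execution
novice = np.sin(0.9 * t) + 0.2 * np.random.default_rng(1).standard_normal(100)

# A well-executed motion should sit closer to the expert reference.
print(dtw_distance(expert, same) < dtw_distance(expert, novice))  # True
```

A distance of this kind could feed a nearest-neighbor or threshold-based proficiency score, though a deployed system would need multivariate features and validated expert references.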
In our preliminary studies, experiments were carried out using data collected at Banner, where a group of surgical residents from the 1st, 2nd, and 3rd years of the residency program participated in evaluations over a period of six months. Motion and position data (wrist kinematics, palm arch, etc.) were captured by CyberTouch gloves and magnetic trackers, and multiple cameras captured synchronized videos of each action. The proposed work will build on effective evaluation protocols and key algorithmic modules that have been validated in these preliminary studies.
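One standard way to exploit intrinsic correlations between two synchronized data sources (e.g., glove kinematics and video-derived features) is canonical correlation analysis (CCA), which finds a shared latent direction along which the two views agree. The sketch below is a minimal NumPy implementation on synthetic two-view data with one shared latent signal; it is an assumed illustration of latent-space correlation analysis, not the project's specific algorithm.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    # Canonical correlation analysis via whitening + SVD.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    _, s, _ = np.linalg.svd(M)
    return s[:k]  # top-k canonical correlations

# Two synthetic "views" sharing one latent signal (stand-in for a motion trace).
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))
Y = np.column_stack([z, rng.standard_normal(500)]) + 0.1 * rng.standard_normal((500, 2))

corrs = cca(X, Y)
print(corrs)  # first correlation near 1 (shared latent), second near 0
```

In this toy setup, the first canonical correlation is high because both views observe the same latent signal, while the second is near zero because the remaining dimensions are independent noise; the multi-view setting in the project would involve richer features per view.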
Effective start/end date: 7/15/09 → 9/30/13
- National Science Foundation (NSF): $874,484.00