Page 3 - CUA 2020_Technology and Training_v2

Podium 5: Training, Reconstruction





uses crowd-sourcing (30–40 evaluations per video) to score technical skills. Following baseline trials, residents were instructed in BLUS through live demonstration. They then practiced each task three times with feedback. A final performance of each task was once again evaluated by both metrics.

Results: A total of 55 residents participated; 41 (79%) were male and median age was 29 years. There was no correlation between self-perceived technical experience and any outcome metrics. Most residents (71%, 39/55) showed an ‘average’ Pi score, while 27% (15/55) had ‘low’ Pi scores. All residents improved in time score for all tasks, with a median improvement of 25% (interquartile range [IQR] 16–33%). Baseline aggregate CSATS scores improved significantly across all five tasks (12.7 to 13.4; average improvement 0.7 [IQR 0.3–1.1]; p<0.03 for all).
Conclusions: This inaugural AUA course shows that the BLUS curriculum produces measurable and objective improvements in a single teaching session. As robotic surgery continues to overtake laparoscopic surgical volume, BLUS and similar curricula are an important investment for training programs to teach and evaluate resident competency.

POD-5.6. Fig. 1. Target times used to evaluate procedures. All had significant decreases, with an average of 25% improvement in average times for all trainees (p<0.001). This was used to calculate the “time score” aspect of the validated Pi (performance improvement) scoring tool.
                                                CUAJ • June 2020 • Volume 14, Issue 6(Suppl2)                S43