A Systematic Review of Simulation-Based Training Tools for Technical and Non-Technical Skills in Ophthalmology

Authors: Lee R, Raison N, Lau WY, Aydin A, Dasgupta P, Ahmed K, Haldar S.

Geographical coverage: Not reported

Sector: Service Delivery

Sub-sector: Access

Equity focus: Not reported

Study population: Ophthalmologists and medical students

Review type: Effectiveness review

Quantitative synthesis method: Not applicable

Qualitative synthesis method: Grouping according to simulator type / Messick’s modern validity framework

Background:

Traditional apprenticeship-based models of ophthalmology training are increasingly challenged by inconsistent clinical exposure, limited working hours, and ethical concerns over patient safety during early skill acquisition. Relying on procedural case counts to gauge competence has proven insufficient, as quantity does not necessarily reflect true proficiency. Rapid technological advancements in healthcare have paved the way for simulation models that allow safe, standardised, competency-based training outside the clinical environment. These models offer the opportunity to refine both technical and non-technical skills without risking patient safety. In ophthalmology, however, the use of simulation varies significantly across programmes and remains poorly integrated into curricula. Nevertheless, there is growing recognition of simulation’s potential to enhance training quality and ultimately improve patient outcomes.

Objective:

To evaluate the range of simulation models available for training technical and non-technical skills in ophthalmology and to appraise the strength of evidence supporting their validity and effectiveness.

Main findings:

The review included 131 studies, encompassing 93 distinct simulators. Of these, 53 studies focused on virtual reality (VR) tools, 47 on wet-lab models, 26 on dry-lab models, and five on e-learning. Only two studies provided evidence across all five sources of validity in Messick’s framework. Among the evaluated models, the Eyesi Surgical simulator, the Eyesi Direct Ophthalmoscope, and the Eye Surgical Skills Assessment Test demonstrated the strongest validity evidence. Despite the wide variety of models available, the Eyesi remains the only simulator to have undergone comprehensive validation. In terms of training effectiveness, most simulators were rated at Level 2 (“contained effects”) on the scale of simulation-based translational outcomes. However, the Sophocle vitreoretinal surgery simulator reached Level 3 (“downstream effects”), and the Eyesi notably achieved the highest rating, Level 5 (“target effects”), for cataract surgery training.

Methodology:

Searches were conducted in PubMed and Embase up to 1 July 2019 to identify studies involving ophthalmology trainees or medical students that reported on simulation or e-learning for technical or non-technical skill development in ophthalmic training. Reference lists of included studies and relevant review articles were also scanned for additional publications. Two reviewers independently screened the articles and extracted data, with disagreements resolved through discussion or by a third reviewer. The findings were synthesised qualitatively by grouping studies according to simulator type. The validity of each simulation tool was evaluated against Messick’s modern validity framework, and the strength of each source of validity evidence was rated using a validated rating scale. Training effectiveness was assessed using an adaptation of McGaghie’s levels of simulation-based translational outcomes.

Applicability/external validity:

The review noted significant limitations in the quality of the available studies. Most reports were descriptive, offering only minimal validity evidence for the simulation tools. Very few studies examined the impact of simulation training on actual patient outcomes, underscoring a critical gap in the evidence. The authors emphasised the need for more rigorous research with robust study designs and outcome measures before simulation tools can be effectively integrated into ophthalmology training curricula. Until such evidence is available, the extent to which these findings can be generalised to improve real-world training and patient care remains limited.

Geographic focus:

No geographical limits were applied to the search strategy. The review did not report the specific geographical distribution of the included studies, so it is unclear whether the evidence base is drawn from a broad international sample or concentrated in particular regions.

Summary of quality assessment:

Overall, there is medium confidence in the review’s conclusions. The literature search was comprehensive and imposed no language restrictions. Inclusion and exclusion criteria were clearly defined, and two reviewers independently performed study selection, data extraction, and quality assessment using established tools. The characteristics of included studies were well documented, and the findings were synthesised qualitatively. However, the review did not report searching the grey literature, nor did it provide a list of excluded studies. In addition, the findings were not reported according to the risk-of-bias status of the studies. These factors limit the strength of the evidence and suggest that the results should be interpreted with caution.

Publication Source:

Lee R, Raison N, Lau WY, Aydin A, Dasgupta P, Ahmed K, Haldar S. A systematic review of simulation-based training tools for technical and non-technical skills in ophthalmology. Eye (Lond). 2020 Oct;34(10):1737-1759. doi: 10.1038/s41433-020-0832-1. Epub 2020 Mar 13. PMID: 32203241; PMCID: PMC7609318.
