S3: Surgical Scene Segmentation

The goal of S3: Surgical Scene Segmentation is to develop a Graphical User Interface (GUI) and associated backend system that automates and visualizes semantic segmentation of intraoperative imaging in keyhole surgery.

Faculty Mentor:



In computer-assisted surgery, recognizing anatomical structures is essential to understanding the surgical scene and providing assistance during surgery. While machine learning models have the potential to identify such structures and reduce surgical complications, their deployment is impeded by the need for annotated and diverse surgical anatomical datasets. Annotating multiple classes (i.e., organs) in a surgical scene is a time-intensive task requiring medical experts who usually have limited computational experience. The goal of this project is to develop a platform that addresses these problems by providing a Graphical User Interface (GUI) for computational interpretation of surgical video data. The system will also support semi-automated annotation, integrated training of existing segmentation models, and analysis of system performance. If successful, we plan to integrate this tool into a publicly available platform for sharing surgical videos and methods, through which surgeons and data scientists can collaborate and share expertise.
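To make the core task concrete, the sketch below illustrates what semantic segmentation output looks like and how segmentation performance is commonly measured. This is a minimal, hypothetical NumPy example, not the project's actual pipeline or model: a per-pixel class label mask is derived from per-class score maps, and a Dice coefficient (a standard segmentation metric) compares a prediction against a reference annotation.

```python
import numpy as np

def segment(score_map: np.ndarray) -> np.ndarray:
    """Convert per-class score maps of shape (H, W, C) into a
    per-pixel label mask of shape (H, W) via argmax over classes."""
    return score_map.argmax(axis=-1)

def dice_score(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """Dice coefficient for one class: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 when the class is absent from both masks."""
    p = pred == cls
    t = target == cls
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy example: a 2x2 image with 2 classes (0 = background, 1 = organ).
scores = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.4, 0.6]]])
mask = segment(scores)               # per-pixel labels: [[0, 1], [0, 1]]
truth = np.array([[0, 1], [0, 0]])   # hypothetical expert annotation
print(dice_score(mask, truth, 1))    # overlap for the organ class
```

In practice a trained model (e.g., a convolutional network) would produce the score maps, and Dice would be averaged over classes and frames to analyze system performance.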

Relevant Technologies:

  • Computer vision
  • Machine learning
  • Graphical User Interface design

Pre-requisite knowledge/skills:

Experience with Python programming, Git, machine learning/deep learning models for computer vision, and Graphical User Interface design; knowledge of statistics and visualization