The simulation of realistic ultrasound (US) images has many applications in image-guided surgery, such as image registration, data augmentation, and education. We simulated intraoperative US images of the brain after tumor resection surgery. A Generative Adversarial Network (GAN) first generated a US image with a resection from a resection cavity map. The generated cavity texture was then blended into a real pre-resection, patient-specific US image. A validation study showed that two neurosurgeons correctly labelled only 56% and 53% of the simulated images, which indicates that the synthesized images are hardly distinguishable from real post-resection US images.
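The abstract does not detail the blending step; as a rough illustration only, the sketch below alpha-blends a generated cavity texture into a real pre-resection image using a feathered cavity mask. The function name, the Gaussian feathering, and the edge_sigma parameter are assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_cavity(real_us, generated_us, cavity_mask, edge_sigma=3.0):
    """Blend a generated post-resection texture into a real pre-resection
    US image, feathering the cavity border to avoid a visible seam.
    (Illustrative sketch; not the method described in the paper.)"""
    # Soften the binary cavity mask into an alpha map in [0, 1].
    alpha = gaussian_filter(cavity_mask.astype(np.float32), sigma=edge_sigma)
    alpha = np.clip(alpha, 0.0, 1.0)
    # Convex combination: generated texture inside the cavity,
    # original patient image elsewhere.
    return alpha * generated_us + (1.0 - alpha) * real_us
```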
During tumor resection surgery, intraoperative ultrasound images of the brain show anatomical structures, the tumor, and the resection cavity (once resection has started). These elements help with tumor localization and resection, and can be used to register the preoperative MRI to intraoperative images to compensate for the tissue deformation that occurs during surgery. We evaluate a multi-class segmentation model for the sulci, falx cerebri, tumor, resection cavity, and ventricle. We present strategies to overcome the severe class imbalance in the training data. We show that a multi-class model may leverage inter-class spatial relationships and produce more accurate results than single-class models.
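The abstract does not specify which class-imbalance strategies were used; one common option is inverse-frequency class weighting in the loss, sketched below in PyTorch. The voxel counts and the six-class layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical voxel counts per class: background, sulci, falx cerebri,
# tumor, resection cavity, ventricle (values are illustrative only).
voxel_counts = torch.tensor([9.1e8, 2.3e6, 1.1e6, 4.7e6, 3.2e6, 2.8e6])

# Inverse-frequency weights, normalized so they average to one, so that
# rare structures contribute as much to the loss as the background.
weights = voxel_counts.sum() / (len(voxel_counts) * voxel_counts)

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(2, 6, 32, 32, 32)          # (batch, classes, D, H, W)
target = torch.randint(0, 6, (2, 32, 32, 32))   # integer class labels
loss = criterion(logits, target)                # class-weighted loss value
```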
Because the brain deforms during neurosurgery, intraoperative imaging can be used to visualize the actual location of brain structures. These images are used for image-guided navigation as well as for determining whether the resection is complete and localizing remaining tumor tissue. Intraoperative ultrasound (iUS) is a convenient modality with short acquisition times. However, iUS images are difficult to interpret because of noise and artifacts. In particular, tumor tissue is difficult to distinguish from healthy tissue, which makes tumors very hard to delineate in iUS images. In this paper, we propose an automatic method to segment low-grade brain tumors in iUS images using 2-D and 3-D U-Nets. We trained the networks on three folds with twelve training cases and five test cases each. The obtained results are promising, with a median Dice score of 0.72. The volume differences between the estimated and ground-truth segmentations were similar to the intra-rater volume differences. While these results are preliminary, they suggest that deep learning methods can be successfully applied to tumor segmentation in intraoperative images.
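For reference, the Dice score reported above measures the overlap between a predicted and a ground-truth mask. A minimal NumPy version, with a toy random example standing in for the actual test cases, is:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example on random volumes; in the study, one score is computed
# per test case and the median is reported over each fold.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
truth = rng.random((64, 64, 64)) > 0.5
print(f"Dice: {dice_score(pred, truth):.2f}")
```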
To compensate for intraoperative brain tissue deformation, computer-assisted intervention methods have been used to register preoperative magnetic resonance images with intraoperative images. To model the deformation due to tissue resection, the resection cavity must be segmented in intraoperative images. We present an automatic method to segment the resection cavity in intraoperative ultrasound (iUS) images. We trained and evaluated two-dimensional (2-D) and three-dimensional (3-D) U-Net networks on two datasets of 37 and 13 cases containing images acquired with different ultrasound systems. The best-performing method overall was the 3-D network, which achieved a mean Dice score of 0.72 and a median of 0.88 over the whole dataset. The 2-D network also performed well with less computation time, with a median Dice score over 0.8. We also evaluated the sensitivity of network performance to training and testing with images from different ultrasound systems and fields of view. In this application, we found specialized networks to be more accurate for processing similar images than a general network trained on all the data. Overall, promising results were obtained for both datasets using specialized networks. This motivates further studies with additional clinical data, to enable training and validation of a clinically viable deep-learning model for automated delineation of the tumor resection cavity in iUS images.
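The abstract does not describe the network configuration; the sketch below shows the kind of double-convolution block that distinguishes a 3-D U-Net from the faster 2-D variant mentioned above. All layer choices here are assumptions.

```python
import torch.nn as nn

def double_conv_3d(in_ch, out_ch):
    """Two 3-D convolutions with batch normalization and ReLU: the basic
    building block repeated at each U-Net resolution level."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

# A 2-D variant only swaps Conv3d/BatchNorm3d for Conv2d/BatchNorm2d,
# trading volumetric context for lower computation time, consistent
# with the faster 2-D network reported above.
```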
The brain is significantly deformed during neurosurgery, in particular by the removal of tumor tissue. Because of this deformation, intraoperative data is needed for accurate navigation in image-guided surgery. During surgery, it is easier to acquire ultrasound images than magnetic resonance (MR) images; however, ultrasound images are difficult to interpret. Several methods have been developed to register preoperative MR and intraoperative ultrasound images to allow accurate navigation during neurosurgery. Model-based methods need the location of the resection cavity to account for the tissue removal in the model. Manually segmenting this cavity is extremely time-consuming and cannot be performed in the operating room. It is also difficult and error-prone because of the noise and reconstruction artifacts in the ultrasound images. In this work, we present a method to segment the resection cavity automatically. We manually labelled the resection cavity on the ultrasound volumes from a database of 23 patients. We trained a U-Net-based artificial neural network with our manual segmentations and evaluated several variations of the method. Our best method achieves a mean Dice score of 0.82 over the 10 test cases. The Dice scores range from 0.67 to 0.96, and eight out of ten are higher than 0.75. For the most difficult test cases, which lack a clear cavity contour, manual segmentation is also difficult, but our method still yields acceptable results. Overall, the segmentations obtained with the automatic method are qualitatively similar to the manual ones.
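As a rough illustration of how such a trained network might be applied to one reconstructed US volume, consider the sketch below; the normalization scheme and the 0.5 threshold are assumptions, not details from the paper.

```python
import torch

@torch.no_grad()
def segment_cavity(model, volume):
    """Run a trained cavity-segmentation network on one US volume
    (a NumPy array) and return a binary cavity mask. Preprocessing
    and threshold here are illustrative assumptions."""
    model.eval()
    x = torch.from_numpy(volume).float()[None, None]  # (1, 1, D, H, W)
    x = (x - x.mean()) / (x.std() + 1e-8)             # per-volume normalization
    prob = torch.sigmoid(model(x))                    # cavity probability map
    return (prob[0, 0] > 0.5).cpu().numpy()           # binary cavity mask
```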