Accurate segmentation of organs at risk (OARs) is a key step in image-guided radiation therapy. In recent years, deep-learning-based methods have been widely used in medical image segmentation; among them, U-Net and V-Net are the most popular. In this paper, we evaluate a customized V-Net on 16 OARs throughout the body using a large CT dataset. Specifically, two customizations are used to reduce the GPU memory cost of V-Net: 1) multi-resolution V-Nets, where the coarse-resolution V-Net localizes the OAR in the entire image space, while the fine-resolution V-Net refines the detailed boundaries of the OAR; 2) a modified V-Net architecture, specifically designed for segmenting large organs, e.g., the liver. Validated on 3483 CT scans covering various imaging and disease conditions, we show that, compared with traditional methods, the customized V-Net wins in speed (0.7 seconds vs. 20 seconds per organ), accuracy (average Dice score 96.6% vs. 84.3%), and robustness (98.6% vs. 83.3% success rate). Moreover, the customized V-Net is very robust against various image artifacts, diseases, and slice thicknesses, and performs much better than traditional methods even on organs with large shape variations (e.g., the bladder).
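The multi-resolution strategy above can be sketched as a simple coarse-to-fine cascade: segment a downsampled volume to find a bounding box, then run the fine model only on the cropped region of interest. This is a minimal illustrative sketch, not the paper's implementation; the two V-Nets are stood in by caller-supplied callables (`coarse_net`, `fine_net` are hypothetical names), and the downsampling factor and margin are assumed values.

```python
import numpy as np

def downsample(vol, factor=4):
    # Nearest-neighbour downsampling: cuts memory for the coarse stage.
    return vol[::factor, ::factor, ::factor]

def coarse_localize(mask_coarse, factor=4, margin=8):
    # Convert the coarse-resolution mask into a padded bounding box
    # expressed in original-image coordinates.
    idx = np.argwhere(mask_coarse > 0)
    lo = np.maximum(idx.min(axis=0) * factor - margin, 0)
    hi = (idx.max(axis=0) + 1) * factor + margin
    return lo, hi

def coarse_to_fine(volume, coarse_net, fine_net, factor=4):
    # Stage 1: coarse model localizes the OAR on the downsampled volume.
    mask_c = coarse_net(downsample(volume, factor))
    lo, hi = coarse_localize(mask_c, factor)
    hi = np.minimum(hi, volume.shape)
    # Stage 2: fine model refines boundaries inside the cropped ROI only,
    # so full resolution is never processed whole.
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    mask_roi = fine_net(roi)
    # Paste the ROI result back into a full-size mask.
    full = np.zeros(volume.shape, dtype=mask_roi.dtype)
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = mask_roi
    return full
```

In a real pipeline the two callables would be trained networks; the memory saving comes from the coarse stage seeing a small volume and the fine stage seeing only a small crop.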