Computer-aided detection aims to improve breast cancer screening programs by helping radiologists evaluate digital mammography (DM) exams. DM exams are generated by devices from different vendors, with diverse characteristics between and even within vendors. The physical properties of these devices and the postprocessing of the images can greatly influence the resulting mammogram, so a deep learning model trained on data from one vendor cannot readily be applied to data from another. This paper investigates tailored transfer learning methods based on adversarial learning to tackle this problem. We consider a database of DM exams (mostly bilateral, two-view exams) generated by Hologic and Siemens devices. We analyze two transfer learning settings: 1) unsupervised transfer, where labelled Hologic data (soft pixel-level lesion annotations) and unlabelled Siemens data are used to annotate images in the latter set; 2) weakly supervised transfer, where exam-level labels for the Siemens images are additionally available. We propose tailored variants of recent state-of-the-art transfer learning methods that take the class imbalance into account and incorporate the knowledge provided by the exam-level annotations. Experimental results indicate a beneficial effect of transfer learning in both settings. Notably, at 0.02 false positives per image, we achieve a sensitivity of 0.37, compared to 0.30 for a baseline without transfer. The results also indicate that using exam-level annotations gives an additional increase in sensitivity.
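For concreteness, the sketch below shows gradient-reversal-based adversarial domain adaptation (DANN-style), the general family of methods referred to above. The backbone, layer sizes, loss weighting, and the pos_weight imbalance handling are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
# Minimal sketch of adversarial domain adaptation with a gradient reversal
# layer (DANN-style); illustrative only, not the authors' exact method.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on backward."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class DomainAdversarialModel(nn.Module):
    def __init__(self, feature_dim=128, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        # Shared feature extractor (stand-in for a mammography CNN backbone).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # Lesion classifier trained on labelled source (e.g. Hologic) data.
        self.lesion_head = nn.Linear(feature_dim, 1)
        # Domain classifier trained to tell source from target (e.g. Siemens);
        # the reversed gradient pushes features toward vendor invariance.
        self.domain_head = nn.Linear(feature_dim, 1)

    def forward(self, x):
        f = self.features(x)
        lesion_logit = self.lesion_head(f)
        domain_logit = self.domain_head(GradientReversal.apply(f, self.lambda_))
        return lesion_logit, domain_logit

# Toy usage: lesion loss on labelled source patches, domain loss on both
# domains; class imbalance handled here via a pos_weight in the BCE loss.
model = DomainAdversarialModel()
x = torch.randn(4, 1, 64, 64)                              # toy image patches
lesion_logit, domain_logit = model(x)
lesion_bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(10.0))
loss = lesion_bce(lesion_logit, torch.zeros(4, 1)) + \
       nn.BCEWithLogitsLoss()(domain_logit, torch.ones(4, 1))
loss.backward()
```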
The number and location of cerebral microbleeds (CMBs) in patients with traumatic brain injury (TBI) are important for determining the severity of trauma and may hold prognostic value for patient outcome. However, manual assessment is subjective and time-consuming due to the resemblance of CMBs to blood vessels, the possible presence of imaging artifacts, and the typical heterogeneity of trauma imaging data. In this work, we present a computer-aided detection system based on 3D convolutional neural networks for detecting CMBs in 3D susceptibility weighted images. Network architectures of varying depth were evaluated. Data augmentation techniques were employed to improve the networks' generalization ability, and selective sampling was implemented to handle class imbalance. The predictions of the models were clustered using a connected component analysis. The system was trained on ten annotated scans and evaluated on an independent test set of eight scans. Despite this limited data set, the system reached a sensitivity of 0.87 at 16.75 false positives per scan (2.5 false positives per CMB), outperforming related work on CMB detection in TBI patients.
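As an illustration of the post-processing step mentioned above (clustering voxel-wise predictions with a connected component analysis), here is a minimal sketch; the probability threshold, connectivity structure, and array shapes are assumptions for illustration, not values from the paper.

```python
# Sketch: turn a 3D CNN probability map into candidate CMB detections by
# thresholding and grouping voxels with a connected component analysis.
import numpy as np
from scipy import ndimage

def cluster_predictions(prob_map, threshold=0.5):
    """Return a list of (centroid_zyx, peak_probability) candidate detections."""
    mask = prob_map > threshold
    # 26-connectivity in 3D: voxels touching by face, edge, or corner.
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n_components = ndimage.label(mask, structure=structure)
    candidates = []
    for comp_id in range(1, n_components + 1):
        component = labels == comp_id
        centroid = ndimage.center_of_mass(component)
        peak_prob = float(prob_map[component].max())
        candidates.append((centroid, peak_prob))
    # Sorting by peak probability allows sensitivity / false-positive trade-offs
    # (e.g. an operating point in false positives per scan) via a score cut-off.
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Toy usage with a random probability volume standing in for CNN output.
prob_map = np.random.rand(32, 64, 64)
detections = cluster_predictions(prob_map, threshold=0.95)
print(f"{len(detections)} candidate CMBs")
```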