Dynamic contrast-enhanced (DCE) imaging is the essential sequence in breast magnetic resonance imaging (MRI) and is widely used in clinical settings. Diffusion-weighted imaging (DWI) also plays an important role in many diagnostic applications and in developing novel imaging biomarkers. Compared to DCE MRI, the technical advantages of DWI include a shorter acquisition time, no need for contrast agent administration, and availability on most commercial scanners. Segmenting the whole-breast region is an essential pre-processing step in many quantitative and radiomics breast MRI studies. However, it is a challenging task for computerized methods because of the low intensity contrast along the breast-chest wall boundary. While several studies have reported computational methods for automated whole-breast segmentation in DCE MRI, automated segmentation in DWI remains underdeveloped. In this paper, we propose deep learning and transfer learning methods to segment the whole breast in DWI MRI by leveraging pretraining on a DCE MRI dataset. Experiments on multiple breast MRI datasets, including an external evaluation dataset, demonstrate encouraging results.
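The pretrain-on-DCE, fine-tune-on-DWI idea can be sketched as a two-stage training schedule in PyTorch. The network below is a minimal encoder-decoder stand-in (not the paper's architecture), and the checkpoint filename is hypothetical; the point is only that the same weights carry over between the two stages.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for whole-breast mask prediction (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # single-channel breast-mask logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinySegNet()
# Stage 1: pretrain on the (larger, annotated) DCE MRI dataset, then save:
#   torch.save(model.state_dict(), "dce_pretrained.pt")   # hypothetical file
# Stage 2: reload those weights and fine-tune on the DWI dataset,
# typically with a smaller learning rate:
#   model.load_state_dict(torch.load("dce_pretrained.pt"))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# A DWI slice passes through unchanged in spatial size:
x = torch.randn(2, 1, 64, 64)
assert model(x).shape == (2, 1, 64, 64)
```

The fine-tuning optimizer's lower learning rate is a common choice when adapting pretrained weights to a smaller target dataset, so the DCE-derived features are refined rather than overwritten.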
Identification of malignancy and false recalls (women who are recalled in screening for additional workup but later proven benign) in screening mammography has significant clinical value for accurate diagnosis of breast cancer. Deep learning methods have recently shown success in medical imaging classification. However, many different training strategies can significantly impact overall model performance for a specific classification task. In this study, we investigated the impact of training strategy on the classification of digital mammograms by performing a robustness analysis of deep learning models that distinguish malignancy and false recalls from normal (benign) findings. Specifically, we employed several pre-training strategies, including transfer learning with medical and non-medical datasets, layer freezing, and variations in network structure, on both binary and three-class classification tasks for digital mammography images. We found that, overall, deep learning models appear robust to the modifications of network structure and pre-training strategy that we tested for mammogram-specific classification tasks. However, for specific classification tasks, some training strategies offer performance gains. The most notable performance gains in our experiments involved residual network models.
Breast cancer risk prediction refers to the task of predicting whether a healthy patient is likely to develop breast cancer in the future. Breast density and parenchymal texture features are well-known imaging-based breast cancer risk markers that can be qualitatively/visually assessed by radiologists or quantitatively measured by computerized software. Recently, deep learning has emerged as a promising strategy for a variety of classification and prediction tasks, including in breast imaging. Building on this premise, we propose a deep learning-based modeling method for breast cancer risk prediction in a case-control setting, using only prior normal screening mammogram images. In addition, given that clinical statistics show the upper outer quadrant to be the most common site of origin for breast cancer, we designed a simple experiment on 226 patients (a total of 1,632 images) to explore the concept of localized breast cancer risk prediction. We built two deep learning models with the same settings but fed one with the top halves of the mammogram images (corresponding to the outer portion of a breast) and the other with the bottom halves (corresponding to the inner portion of a breast). Our preliminary results showed that the top halves yield higher prediction performance (AUC=0.89) than the bottom halves (AUC=0.69) in predicting the case/control outcome. This indicates a relation between localized imaging features extracted from a sub-region of the full mammogram and the underlying risk of developing breast cancer in that specific sub-region.
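The half-image setup described above amounts to a simple preprocessing split before training the two otherwise-identical models. A minimal NumPy sketch, assuming row 0 is the top of the image (the orientation convention is an assumption here, not stated in the abstract):

```python
import numpy as np

def split_halves(mammogram: np.ndarray):
    """Split a mammogram array into top and bottom halves along the row axis.

    Under the study's setup, the top half corresponds to the outer portion
    of the breast and the bottom half to the inner portion.
    """
    h = mammogram.shape[0] // 2
    return mammogram[:h], mammogram[h:]

img = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a mammogram
top, bottom = split_halves(img)
# Two models with identical settings would then be trained,
# one on the top halves and one on the bottom halves.
```

Keeping every other setting identical between the two models is what lets the AUC difference be attributed to the sub-region rather than to the training configuration.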
The Viola–Jones face detection algorithm was (and still is) a popular face detector. Despite the numerous face detection techniques presented recently, many research works still build on the Viola–Jones algorithm because of its simplicity. We study the influence of a set of blind preprocessing methods on the face detection rate of the Viola–Jones algorithm, focusing on two problem areas: badly illuminated faces and blurred faces. Several illumination-invariance and deblurring methods are applied to improve detection accuracy. Because a blind preprocessing method may also obstruct the face detector, we perform two sets of experiments: the first screens out any blind preprocessing method that hurts the detector, and the second studies the effect of the selected methods on images captured under hard conditions. We present two ways of applying a preprocessing method to an image before it is passed to the Viola–Jones face detector. Five different datasets are used to draw a coherent conclusion about the potential improvement from using enhanced input images. The results demonstrate that some preprocessing methods may hurt the accuracy of the Viola–Jones face detection algorithm, while others have an evident positive impact. Overall, we recommend three simple and fast blind photometric normalization methods as a preprocessing step to improve the accuracy of the pretrained Viola–Jones face detector.
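A representative blind photometric normalization step is global histogram equalization; the sketch below implements it in plain NumPy (the abstract does not name its three recommended methods, so this is one plausible candidate, not the paper's exact choice). The normalized image would then be handed to a Viola–Jones detector, e.g. OpenCV's `cv2.CascadeClassifier`.

```python
import numpy as np

def hist_equalize(gray: np.ndarray) -> np.ndarray:
    """Blind photometric normalization via global histogram equalization.

    Expects an 8-bit grayscale image; stretches the cumulative intensity
    distribution so badly illuminated faces use the full [0, 255] range.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                 # intensity lookup table
    return lut[gray]

dark = np.full((8, 8), 10, dtype=np.uint8)
dark[4:] = 20                                          # low-contrast test image
eq = hist_equalize(dark)
# The equalized image would then be passed to the detector, e.g.:
#   faces = cv2.CascadeClassifier("haarcascade_frontalface_default.xml").detectMultiScale(eq)
```

Because equalization is "blind" (it uses no knowledge of face location), it matches the paper's setting: the same transform is applied to every image before detection.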
Digital mammography screening is an important exam for the early detection of breast cancer and reduction in mortality. False positives leading to high recall rates, however, result in unnecessary negative consequences for patients and health care systems. To better aid radiologists, computer-aided tools can improve the distinction between image classes and thus potentially reduce false recalls. The emergence of deep learning has shown promising results in biomedical imaging data analysis. This study aimed to investigate deep learning and transfer learning methods that can improve digital mammography classification performance. In particular, we evaluated the effect of pre-training deep learning models on other imaging datasets to boost classification performance on a digital mammography dataset. Two types of datasets were used for pre-training: (1) a digitized film mammography dataset, and (2) a very large non-medical imaging dataset. By pre-training the network with either of these datasets and then fine-tuning it on the digital mammography dataset, we found an increase in overall classification performance compared to a model without pre-training, with the very large non-medical dataset yielding the best improvement in classification accuracy.
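The pretrain-then-fine-tune schedule compared above can be sketched as one training function that runs two stages over the same model. The tiny CNN and the data loaders are placeholders (the actual architecture and datasets are not reproduced here); only the two-stage structure mirrors the study's setup.

```python
import torch
import torch.nn as nn

def make_classifier():
    # Tiny CNN standing in for the full model; two-class head (e.g. recall vs. normal).
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 2),
    )

def pretrain_then_finetune(pretrain_loader, finetune_loader, epochs=1):
    """Two-stage schedule: first train on the source dataset (film mammography
    or a large non-medical set), then continue on digital mammograms."""
    model = make_classifier()
    loss_fn = nn.CrossEntropyLoss()
    for loader, lr in [(pretrain_loader, 1e-3),   # stage 1: pre-training
                       (finetune_loader, 1e-4)]:  # stage 2: fine-tuning, lower LR
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model

# Usage with fake in-memory batches (a real loader would yield mammogram tensors):
fake_loader = [(torch.randn(2, 1, 8, 8), torch.tensor([0, 1]))]
model = pretrain_then_finetune(fake_loader, fake_loader)
```

Swapping which dataset feeds stage 1 (film mammography vs. a large non-medical set) is exactly the comparison the study runs; the no-pretraining baseline corresponds to skipping stage 1 entirely.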