Detecting anomalies such as malignant tumors or nodules in medical images, including mammograms, CT, or PET images, is an ongoing research problem drawing considerable attention for applications in medical diagnosis. A conventional approach is to learn a discriminative model from training datasets of negative and positive samples; the learned model then classifies a test sample as positive or negative. However, in medical applications, the severe imbalance between negative and positive samples poses a difficulty for learning algorithms, which become biased toward the majority class, i.e., the negative one. To address this imbalanced-data issue and to leverage the large amount of negative samples, i.e., normal medical images, we propose to learn an unsupervised model that characterizes the negative class. To make the learned model more flexible and extendable to medical images of different scales, we design a deep-neural-network-based autoencoder to characterize the negative patches decomposed from large medical images. A test image is decomposed into patches, which are fed into the learned autoencoder to reconstruct themselves. The reconstruction error of each patch is used to classify it as positive or negative, yielding a one-class classifier. The positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method was tested on the INbreast dataset and achieves an AUC of 0.84.
The main contributions of our work can be summarized as follows. 1) The proposed one-class learning requires data from only one class, i.e., the negative class; 2) patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large-scale problem for medical images; 3) training of the proposed deep convolutional neural network (DCNN) based autoencoder is fast and stable.
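The reconstruction-error decision rule described above can be sketched as follows. This is a minimal illustration only: a linear (PCA) autoencoder and synthetic patches stand in for the paper's DCNN autoencoder and mammographic patches, and taking the anomaly threshold as a high percentile of the negative training errors is an assumed choice, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "negative" patches: low-rank structure plus small noise.
basis = rng.normal(size=(4, 64))
train = rng.normal(size=(500, 4)) @ basis + 0.01 * rng.normal(size=(500, 64))

# Linear autoencoder stand-in: encode/decode with the top principal components.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:4]  # shared encoder/decoder weights

def reconstruction_error(patches):
    centered = patches - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

# Threshold learned from negative data only, e.g. the 99th-percentile error.
threshold = np.percentile(reconstruction_error(train), 99)

# Patches off the learned manifold reconstruct poorly and are flagged positive.
anomalies = 3.0 * rng.normal(size=(10, 64))
flags = reconstruction_error(anomalies) > threshold
```

The key point carried over from the abstract is that only negative (normal) data is needed: the model never sees a positive sample during training, and "positive" is defined purely as "poorly reconstructed".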
Purpose: To determine whether domain transfer learning can improve the performance of deep features extracted from digital mammograms using a pre-trained deep convolutional neural network (CNN) in the prediction of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy.
Method: In this study, we collected digital mammography magnification views for 140 patients with DCIS at biopsy, 35 of whom were subsequently upstaged to invasive cancer. We utilized a deep CNN model pre-trained on two natural image data sets (ImageNet and DTD) and one mammographic data set (INbreast) as the feature extractor, hypothesizing that these data sets are increasingly similar to our target task and will lead to better deep-feature representations of DCIS lesions. Through a statistical pooling strategy, three sets of deep features were extracted from the lesion areas using the CNNs at different levels of convolutional layers. A logistic regression classifier was then trained to predict which tumors contain occult invasive disease. The generalization performance was assessed and compared using repeated random sub-sampling validation and receiver operating characteristic (ROC) curve analysis.
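The pipeline described in the Method (statistical pooling of convolutional feature maps, then logistic regression) can be sketched roughly as below. The CNN activations and labels are synthetic stand-ins, and the particular pooling statistics (per-channel mean/std/max) are an assumption for illustration, not necessarily the paper's exact pooling strategy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def pool_features(feature_maps):
    """Statistical pooling: summarize each channel's map by mean, std, max."""
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    return np.concatenate([flat.mean(1), flat.std(1), flat.max(1)])

# Stand-in for pre-trained CNN activations over each lesion ROI: (channels, H, W).
n_cases = 140
labels = rng.integers(0, 2, size=n_cases)
features = np.stack([
    pool_features(rng.normal(loc=labels[i], size=(8, 6, 6)))
    for i in range(n_cases)
])

# Shallow classifier on top of the fixed deep features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
```

Pooling converts variable-size feature maps into a fixed-length vector per lesion, which is what lets a simple linear classifier be trained on a small clinical cohort.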
Result: The best performance of deep features was from the CNN model pre-trained on INbreast, and the proposed classifier using this set of deep features achieved a median classification performance of ROC-AUC equal to 0.75, which is significantly better (p ≤ 0.05) than the performance of deep features extracted using the ImageNet data set (ROC-AUC = 0.68).
Conclusion: Transfer learning is helpful for learning a better representation of deep features, and improves the prediction of occult invasive disease in DCIS.
Predicting whether ductal carcinoma in situ (DCIS) identified at core biopsy contains occult invasive disease is an important task, since these “upstaged” cases will affect further treatment planning. Therefore, a prediction model that better separates pure DCIS from upstaged DCIS can help avoid overtreatment and overdiagnosis. In this work, we propose to improve this classification performance with the aid of two other related classes: atypical ductal hyperplasia (ADH) and invasive ductal carcinoma (IDC). Our data set contains mammograms for 230 cases. Specifically, 66 are ADH cases; 99 are biopsy-proven DCIS cases, of whom 25 were found to contain invasive disease at the time of definitive surgery; and the remaining 65 cases were diagnosed with IDC at core biopsy. Our hypothesis is that knowledge can be transferred from training on the easier and more readily available cases of benign but suspicious ADH versus IDC that is already apparent at initial biopsy. Thus, adding both ADH and IDC cases to the classifier's training set should improve the performance of distinguishing upstaged DCIS from pure DCIS. We extracted 113 mammographic features based on a radiologist’s annotation of clusters. Our method then added both ADH and IDC cases during training, where ADH cases were “force labeled”, i.e., treated by the classifier as pure DCIS (negative) cases, and IDC cases were labeled as upstaged DCIS (positive) cases. A logistic regression classifier was built on this training dataset to predict whether biopsy-proven DCIS cases contain invasive cancer. Performance was assessed by repeated 5-fold cross-validation and receiver operating characteristic (ROC) curve analysis. Prediction performance when training only on the DCIS dataset had an average AUC of 0.607 (95% CI, 0.479-0.721); by adding both ADH and IDC cases for training, we improved the performance to 0.691 (95% CI, 0.581-0.801).
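The force-labeling scheme can be sketched as follows. The feature values and between-class shifts are synthetic stand-ins; only the case counts match those quoted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cases(n, shift):
    # Stand-in for the 113 mammographic features per case; the mean shift
    # is a synthetic proxy for increasing invasiveness.
    return rng.normal(loc=shift, size=(n, 113))

adh = make_cases(66, shift=-1.0)            # benign but suspicious
pure_dcis = make_cases(74, shift=-0.3)
upstaged_dcis = make_cases(25, shift=0.3)
idc = make_cases(65, shift=1.0)             # invasive already at biopsy

# Force-label: ADH joins the negative (pure DCIS) class,
# IDC joins the positive (upstaged DCIS) class.
X_train = np.vstack([adh, pure_dcis, upstaged_dcis, idc])
y_train = np.concatenate([
    np.zeros(len(adh) + len(pure_dcis)),
    np.ones(len(upstaged_dcis) + len(idc)),
])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# At test time only DCIS cases are scored for occult invasive disease.
probs = clf.predict_proba(pure_dcis)[:, 1]
```

The design choice here is that the auxiliary classes effectively enlarge the tiny DCIS training set: the classifier sees 230 labeled cases instead of 99, at the cost of a deliberate label approximation.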
Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy-proven DCIS. We proposed a computer-vision-algorithm-based approach to extract mammographic features from magnification views of full-field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach segments individual microcalcifications (MCs), detects the boundary of the MC cluster (MCC), and extracts 113 mammographic features from the MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classification between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum Redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using leave-one-out cross-validation and receiver operating characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 ± 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.
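A minimal sketch of this feature-selection-plus-LDA pipeline is below. Since scikit-learn has no built-in mRMR, a univariate F-test filter stands in for the mRMR step, and the features are synthetic; the point is the structure, in particular that selection happens inside each leave-one-out fold.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in for the 113 computer-extracted features of the 99 DCIS cases.
X = rng.normal(size=(99, 113))
y = np.concatenate([np.zeros(74), np.ones(25)])  # 74 pure, 25 upstaged
X[y == 1, :5] += 1.0  # make a few features weakly informative

# Fitting the filter inside each fold avoids leaking test labels
# into the feature selection step.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      LinearDiscriminantAnalysis())

scores = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

auc = roc_auc_score(y, scores)
```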
Reducing the overdiagnosis and overtreatment associated with ductal carcinoma in situ (DCIS) requires accurate prediction of the invasive potential at cancer screening. In this work, we investigated the utility of pre-operative histologic and mammographic features to predict upstaging of DCIS. The goal was to provide an intentionally conservative baseline performance using readily available data from radiologists and pathologists and only linear models. We conducted a retrospective analysis of 99 patients with DCIS; of those, 25 were upstaged to invasive cancer at the time of definitive surgery. Pre-operative factors, including both histologic features extracted from stereotactic core needle biopsy (SCNB) reports and mammographic features annotated by an expert breast radiologist, were investigated with statistical analysis. Furthermore, we built classification models based on those features in an attempt to predict the presence of an occult invasive component in DCIS, with generalization performance assessed by receiver operating characteristic (ROC) curve analysis. Histologic features, including nuclear grade and DCIS subtype, did not show statistically significant differences between cases with pure DCIS and cases with DCIS plus invasive disease. However, three mammographic features, i.e., the major axis length of the DCIS lesion, the BI-RADS level of suspicion, and the radiologist's assessment, did achieve statistical significance. Using those three statistically significant features as input, a linear discriminant model was able to distinguish patients with DCIS plus invasive disease from those with pure DCIS, with ROC-AUC equal to 0.62. Overall, mammograms used for breast screening contain useful information that can be perceived by radiologists and help predict occult invasive components in DCIS.
In this work, we revisit the total variation (TV) filter and propose an improved version tailored to diagnostic CT. We revise the TV cost function, which results in a symmetric gradient function and leads to a more natural noise texture. We apply a multi-scale approach to resolve the noise-grain issue in CT images. We examine noise texture, granularity, and loss of low contrast in the test images. We also discuss potential acceleration by Nesterov and conjugate gradient methods.
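As a rough illustration of TV-regularized denoising by gradient descent, the sketch below minimizes a smoothed TV cost on a synthetic piecewise-constant image. The epsilon-smoothed gradient magnitude is one common way to obtain a symmetric, differentiable TV gradient; it is an illustrative assumption, not the paper's exact revised cost function, and the multi-scale treatment is omitted.

```python
import numpy as np

def tv_denoise(image, lam=0.1, step=0.1, iters=200, eps=1e-3):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    u = image.copy()
    for _ in range(iters):
        # Forward differences with replicated (Neumann) boundary.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # Discrete divergence of the normalized gradient field.
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        # Cost gradient: data-fidelity term minus lam times div(p).
        u -= step * ((u - image) - lam * div)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                      # a flat square "organ"
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = tv_denoise(noisy)
```

The parameter `lam` trades noise suppression against loss of low contrast, which is exactly the balance the abstract says is examined in the test images.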
A new motion estimation and compensation method for cardiac computed tomography (CT) was developed. By combining two motion estimation (ME) approaches, the proposed method estimates the local and global cardiac motion and then performs motion-compensated reconstruction. The combined motion estimation method has two parts: the local motion estimation, which estimates the coronary artery motion using coronary artery tree tracking and registration; and the global motion estimation, which estimates the entire cardiac motion by image registration. The final cardiac motion is a linear combination of the coronary artery motion and the entire cardiac motion. We use the backproject-then-warp method proposed by Pack et al. to perform motion-compensated reconstruction (MCR). The proposed method was evaluated with data from 5 patients, and improvements in the sharpness of both coronary arteries and heart chamber boundaries were obtained.
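A minimal sketch of the linear combination of local and global motion fields might look like the following. The motion fields and the vessel-proximity weight map (a Gaussian bump standing in for distance-to-vessel weights) are synthetic assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16, 16)  # toy voxel grid

# Stand-in displacement fields (mm) on the voxel grid:
global_motion = rng.normal(scale=1.0, size=shape + (3,))  # from image registration
local_motion = rng.normal(scale=2.0, size=shape + (3,))   # from vessel tracking

# Weight map: 1 near the tracked coronary tree, decaying to 0 elsewhere.
# A hypothetical Gaussian bump around voxel (8, 8, 8) plays that role here.
zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
dist2 = (zz - 8) ** 2 + (yy - 8) ** 2 + (xx - 8) ** 2
w = np.exp(-dist2 / 20.0)[..., None]

# Final field: linear combination, local motion dominating near the arteries.
combined = w * local_motion + (1 - w) * global_motion
```

The resulting field would then drive the backproject-then-warp reconstruction, so that vessel sharpness benefits from the tracked local motion while chamber boundaries follow the globally registered motion.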
Motion estimation is an important method for improving image quality by compensating for cardiac motion at the phase chosen for reconstruction. We tackle the cardiac motion estimation problem using an image registration approach and compare the performance of three gradient-based registration methods on clinical data. In addition to simple gradient descent, we test the Nesterov accelerated gradient and conjugate gradient algorithms. The results show that accelerated gradient methods provide a significant speedup over conventional gradient descent with no loss of image quality.
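As an illustration of why acceleration helps on such optimization problems, the sketch below compares plain gradient descent with Nesterov's accelerated method on a toy ill-conditioned quadratic standing in for the registration cost; the registration objective itself is not reproduced here.

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    x = x0.copy()
    for _ in range(iters):
        x -= step * grad(x)
    return x

def nesterov(grad, x0, step, iters):
    """Nesterov accelerated gradient with the standard t-sequence momentum."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - step * grad(y)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy quadratic with condition number 1000 (registration costs are
# typically ill-conditioned across coarse and fine deformation modes).
rng = np.random.default_rng(0)
d = np.logspace(0, 3, 50)
target = rng.normal(size=50)
grad = lambda x: d * (x - target)
step = 1.0 / d.max()

err_gd = np.linalg.norm(gradient_descent(grad, np.zeros(50), step, 200) - target)
err_nag = np.linalg.norm(nesterov(grad, np.zeros(50), step, 200) - target)
```

For the same iteration budget, the accelerated method drives the slow (small-curvature) modes down much faster, which is the speedup mechanism claimed in the abstract.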
In addition to seeking geometric correspondence between the inputs, a legitimate image registration algorithm should also keep the estimated transformation meaningful, or regular. In this paper, we present a mathematically sound formulation that explicitly controls the deformation to keep each grid cell in a meaningful shape over the entire geometric matching procedure. The deformation regularity conditions are enforced by maintaining all moving neighbors as non-twisted grid cells. In contrast to similar works, our model differentiates and formulates the convex and concave update cases under an efficient and straightforward point-line/surface orientation framework, and uses equality constraints to guarantee grid regularity and prevent folding. Experiments on MR images are presented to show the improvements made by our model over the popular Demons and DCT-based registration algorithms.
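The point-line orientation test at the heart of such regularity conditions can be sketched in 2D as follows. The positivity convention and the four-triangle check are illustrative assumptions for a single quadrilateral cell, not the paper's exact constraint formulation.

```python
import numpy as np

def orientation(a, b, c):
    """Signed area test: positive if a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def quad_is_regular(p00, p10, p11, p01):
    """A deformed grid cell (corners in counter-clockwise order) stays
    non-twisted iff every corner triangle keeps positive orientation."""
    return (orientation(p00, p10, p11) > 0 and
            orientation(p10, p11, p01) > 0 and
            orientation(p11, p01, p00) > 0 and
            orientation(p01, p00, p10) > 0)

# Unit cell, gently deformed: still regular.
ok = quad_is_regular((0, 0), (1, 0.1), (1.1, 1), (0, 1))
# One corner pushed across the opposite edge: the cell folds.
folded = quad_is_regular((0, 0), (1, 0), (-0.5, -0.5), (0, 1))
```

Enforcing this sign condition for every cell after each update is what prevents the deformation grid from folding over itself during registration.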