Breast cancer is the most common cancer in women and one of the leading causes of death among them. Early diagnosis of breast cancer is essential to ensure a high chance of survival for the affected women. Computer-aided detection (CAD) systems based on convolutional neural networks (CNNs) can assist in the classification of abnormalities such as masses and calcifications. In this paper, several convolutional network models for the automatic classification of pathology in mammograms are analyzed. In addition, different preprocessing and tuning techniques, such as data augmentation, hyperparameter tuning, and fine-tuning, are used to train the models. Finally, these models are validated on several publicly available benchmark datasets.
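The data augmentation step mentioned above can be sketched as follows. This is a minimal illustration with numpy only; the patch size and the particular set of transforms (flips, rotations, a brightness shift) are illustrative assumptions, not the exact pipeline used in the paper.

```python
import numpy as np

def augment(image, rng):
    """Generate simple augmented variants of a grayscale image patch."""
    variants = [image]
    variants.append(np.fliplr(image))           # horizontal flip
    variants.append(np.flipud(image))           # vertical flip
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))     # 90-degree rotations
    shift = rng.uniform(-0.05, 0.05)            # small random brightness shift
    variants.append(np.clip(image + shift, 0.0, 1.0))
    return variants

rng = np.random.default_rng(0)
patch = rng.random((64, 64))                    # stand-in for a mammogram patch
batch = augment(patch, rng)
print(len(batch))                               # 7 variants per input patch
```

In practice such transforms are applied on the fly during training, so each epoch sees a different randomized view of every patch.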
Breast cancer in women is a worldwide health problem with a high mortality rate. One strategy to reduce breast cancer mortality is to implement preventive programs such as mammography screening for early diagnosis. In this presentation, a method for the automatic detection of breast pathologies using a deep convolutional neural network and a class activation map is proposed. The neural network is pretrained on the regions of interest, and its output layers are modified to produce two output classes. The proposed method is compared with different CNN models and applied to classify the public dataset Curated Breast Imaging Subset of DDSM (CBIS-DDSM).
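A class activation map, as used in the method above, is the weighted sum of the final convolutional feature maps, with the weights taken from the fully connected layer of the target class. A minimal numpy sketch (the feature-map and class dimensions are illustrative assumptions):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a CAM as the class-weighted sum of final conv feature maps.

    features:   (C, H, W) activations of the last convolutional layer
    fc_weights: (num_classes, C) weights of the final fully connected layer
    class_idx:  index of the class to visualize (0 or 1 for a two-class head)
    """
    w = fc_weights[class_idx]                          # (C,)
    cam = np.tensordot(w, features, axes=([0], [0]))   # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

rng = np.random.default_rng(1)
feats = rng.random((8, 14, 14))       # stand-in for conv activations
weights = rng.random((2, 8))          # stand-in for two-class FC weights
cam = class_activation_map(feats, weights, class_idx=1)
print(cam.shape)
```

The normalized map is typically upsampled to the input resolution and overlaid on the mammogram to localize the region driving the pathology prediction.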
It is well known that the accuracy and resolution of depth data decrease as the distance from an RGB-D sensor to a 3D object of interest increases, affecting the performance of 3D scene reconstruction systems based on an ICP algorithm. In this paper, to improve the accuracy of the 3D map obtained by aligning multiple point clouds, we propose: first, to split the depth data into sub-clouds of similar resolution; then, to select in each sub-cloud a minimum number of keypoints and align the sub-clouds separately with an ICP algorithm; finally, to merge all sub-clouds into a dense 3D map. Computer simulation results show the performance of the proposed 3D scene reconstruction algorithm on real indoor environment data.
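The core of each ICP iteration is a least-squares rigid alignment between matched points. A minimal numpy sketch of that alignment step (the Kabsch/SVD solution, here with known correspondences and a synthetic cloud as illustrative assumptions, rather than the full iterative matching loop):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # correct an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# synthetic check: rotate and translate a cloud, then recover the motion
rng = np.random.default_rng(2)
cloud = rng.random((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
moved = cloud @ R_true.T + t_true
R, t = best_fit_transform(cloud, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

A full ICP wraps this step in a loop that re-estimates nearest-neighbor correspondences after every transform update until convergence.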
With the development of RGB-D sensors, a new alternative for generating 3D maps has appeared. First, features extracted from color and depth images are used to localize frames in a 3D scene. Next, the Iterative Closest Point (ICP) algorithm is used to align RGB-D frames. As a result, a new frame is added to the dense 3D model. However, the spatial distribution and resolution of depth data affect the performance of 3D scene reconstruction systems based on ICP. In this paper, we propose to divide the depth data into sub-clouds of similar resolution, align them separately, and merge them into a complete point cloud. The presented computer simulation results show improved accuracy of 3D scene reconstruction on real indoor environment data.
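The proposed partitioning into sub-clouds of similar resolution can be sketched with numpy by binning points along the depth axis; the number of bins and the uniform bin edges are illustrative assumptions, since depth resolution degrades with range in a sensor-specific way.

```python
import numpy as np

def split_by_depth(points, n_bins=3):
    """Partition an (N, 3) point cloud into sub-clouds of similar depth range,
    so that each sub-cloud has roughly uniform depth resolution."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_bins + 1)
    idx = np.clip(np.digitize(z, edges) - 1, 0, n_bins - 1)
    return [points[idx == b] for b in range(n_bins)]

rng = np.random.default_rng(3)
pts = rng.uniform(0.5, 4.0, size=(1000, 3))   # stand-in for an RGB-D point cloud
subs = split_by_depth(pts, n_bins=3)
print([len(s) for s in subs])
```

Each sub-cloud would then be aligned separately with ICP, and the aligned sub-clouds concatenated into the complete point cloud.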