We propose an unsupervised method to detect lung lesions on FDG-PET/CT images based on deep image anomaly detection using 2.5-dimensional (2.5D) image processing. The 2.5D processing is applied to preprocessed FDG-PET/CT images from which image patterns other than the lung fields have been removed. It enhances lung lesions through parallel analysis of axial, coronal, and sagittal FDG-PET/CT slice images using multiple 2D U-Nets. All U-Nets are pretrained on 95 cases of normal FDG-PET/CT images with no lung lesions and are used to transform CT slice images into normal FDG-PET slice images free of lesion-like SUV patterns. A lesion-enhanced image is obtained by merging the subtractions of the three transformed normal FDG-PET images from the input FDG-PET image. Lesion detection is then performed by simple binarization of the lesion-enhanced image. The threshold varies from case to case and is set to the 30th-percentile voxel value of the target lesion-enhanced image. For each extracted region, the average of the intra-regional voxel values of the enhanced image is computed and assigned as a lesion-like score. We evaluated the proposed method on FDG-PET/CT images of 27 patients with 41 lung lesions. The proposed method achieved a lesion detection sensitivity of 82.9% with five false positives per case. This result was significantly superior to the detection performance of FDG-PET image thresholding and indicates that the proposed method may be helpful for effective lung lesion detection. Future work includes extending the detectable range of lesions beyond the lungs, for example to the mediastinum and axillae.
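The post-processing described above (merging the three per-orientation differences, case-adaptive thresholding, and region-wise scoring) can be sketched as follows. This is a hedged illustration, not the authors' implementation: the merge rule (voxel-wise mean), the thresholding direction, and the array names are assumptions, and the three pseudo-normal PET volumes are taken as already predicted by the orientation-specific U-Nets and resampled to the input grid.

```python
# Hypothetical sketch of the lesion-enhancement and detection steps; the
# voxel-wise mean merge and the ">" thresholding direction are assumptions.
import numpy as np
from scipy import ndimage


def enhance_and_detect(pet, pred_axial, pred_coronal, pred_sagittal):
    """pet: input FDG-PET volume (SUV); pred_*: pseudo-normal FDG-PET volumes
    predicted by the three orientation-specific 2D U-Nets."""
    # Subtract each predicted normal PET from the input PET and merge the
    # three difference volumes (here: voxel-wise mean).
    diffs = [pet - p for p in (pred_axial, pred_coronal, pred_sagittal)]
    enhanced = np.mean(diffs, axis=0)

    # Case-adaptive threshold: the 30th-percentile voxel value of the
    # lesion-enhanced volume (in practice restricted to the lung fields).
    threshold = np.percentile(enhanced, 30)
    binary = enhanced > threshold

    # Label connected components; each region's lesion-like score is the
    # mean intra-regional value of the enhanced volume.
    labels, n_regions = ndimage.label(binary)
    scores = ndimage.mean(enhanced, labels=labels,
                          index=np.arange(1, n_regions + 1))
    return labels, scores
```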
Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, and deep learning has only recently been applied to this task; the robustness of deep learning models across different image processing types has therefore not yet been reported. We investigated the segmentation accuracy of the U-net for mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground truths of the mammary gland region prepared by mammary experts were used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare breast density in the segmented mammary gland regions with that of the ground truths. The shape agreement between the ground truth and the mammary gland region segmented by the U-net for Types 1–4 was assessed using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. Bland-Altman analysis confirmed the equivalence of breast density between the ground truth and the U-net for Types 1 and 2, and its compatibility for Types 3 and 4. We conclude that the robustness of the U-net for segmenting the mammary gland region was confirmed across different image processing types.
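For concreteness, the two evaluation quantities mentioned above can be computed as in the minimal sketch below. The array names and the density definition (dense-tissue fraction within the segmented gland region) are assumptions; the paper's exact breast density formula may differ.

```python
# Minimal sketch of the Dice coefficient and a breast density measure.
import numpy as np


def dice_coefficient(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Shape agreement between two binary masks (ground truth vs. U-net)."""
    intersection = np.logical_and(gt_mask, pred_mask).sum()
    return 2.0 * intersection / (gt_mask.sum() + pred_mask.sum())


def breast_density(gland_mask: np.ndarray, dense_mask: np.ndarray) -> float:
    """Breast density (%) as the dense-tissue fraction of the segmented
    mammary gland region (one common definition, assumed here)."""
    return 100.0 * np.logical_and(dense_mask, gland_mask).sum() / gland_mask.sum()
```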
In individualized screening mammography, breast density is important for predicting the potential risk of breast cancer incidence and of missed lesions in mammographic diagnosis. Segmentation of the mammary gland region is required when focusing on missed lesions. A deep-learning method was recently developed to segment the mammary gland region. A large amount of ground truth (prepared by mammary experts) is required for highly accurate deep learning; however, this work is time- and labor-intensive. To streamline the preparation of ground truth for deep learning, we investigated differences in the segmented mammary gland regions among multiple radiological technologists with various levels of experience and reading skill who shared the same segmentation criteria. If skill level in image reading can be disregarded, the number of training images can be increased. Three certified radiological technologists segmented the mammary gland region in 195 mammograms. The degree of coincidence among them was assessed with respect to seven factors characterizing the segmented regions, including breast density and mean glandular dose, using Student's t-test and Bland-Altman analysis. The assessments made by the three radiological technologists were consistent for all factors except the mean pixel value. We therefore conclude that ground truths prepared by multiple practitioners with different levels of experience are acceptable for segmentation of the mammary gland region and are applicable as training images, provided the practitioners stringently share the segmentation criteria.
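The inter-reader agreement analysis described above can be sketched as follows. This is an illustrative outline only; the function and variable names are hypothetical, and it shows a pairwise comparison of one factor (e.g., breast density) between two readers rather than the full seven-factor, three-reader protocol.

```python
# Hedged sketch: paired t-test plus Bland-Altman bias and 95% limits of
# agreement for per-case measurements from two readers.
import numpy as np
from scipy import stats


def compare_readers(values_a: np.ndarray, values_b: np.ndarray):
    """values_a, values_b: the same factor measured per case by two readers."""
    # Paired (Student's) t-test for a systematic difference between readers.
    t_stat, p_value = stats.ttest_rel(values_a, values_b)

    # Bland-Altman analysis: mean difference (bias) and 95% limits of
    # agreement, bias +/- 1.96 * SD of the per-case differences.
    diff = values_a - values_b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return p_value, bias, (bias - half_width, bias + half_width)
```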
We propose automatic feature generation by a deep convolutional autoencoder (deep CAE) that requires no lesion data. The main idea of the proposed method is based on anomaly detection: the deep CAE is trained using only normal volume patches. The trained deep CAE computes low-dimensional features and a reproduction error from a 2.5-dimensional (2.5D) volume patch. The proposed method was evaluated experimentally on 150 chest CT cases. Using both the previous features and the deep CAE-based features, improved classification performance was obtained: AUC = 0.989 and ANODE = 0.339.
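The idea of extracting a low-dimensional feature together with a reproduction error from a 2.5D patch can be illustrated with the small PyTorch sketch below. The patch size (3 adjacent slices of 32 x 32 pixels, stacked as channels), the layer widths, and the latent dimension are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch, assuming a 3 x 32 x 32 patch: a convolutional autoencoder
# trained on normal patches that returns a bottleneck feature and a per-patch
# reconstruction (reproduction) error usable as anomaly evidence.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    def __init__(self, in_slices: int = 3, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_slices, 16, 3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),          # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),                  # low-dim feature
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),         # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(16, in_slices, 4, stride=2, padding=1),  # 16 -> 32
        )

    def forward(self, patch: torch.Tensor):
        feature = self.encoder(patch)
        reconstruction = self.decoder(feature)
        # Mean squared reconstruction error per patch.
        error = ((reconstruction - patch) ** 2).mean(dim=(1, 2, 3))
        return feature, error
```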
Dentin was visualized using a new fluorescence technique and confocal laser scanning microscopy. Thirty extracted human teeth showing no clinical signs of caries were investigated. All teeth were horizontally sectioned to approximately 200 μm thickness, and the sections were subjected to the following pretreatment conditions: vacuum only; ultrasonication only; sodium hypochlorite only; sodium hypochlorite and vacuum; sodium hypochlorite and ultrasonication; and a combination of sodium hypochlorite, vacuum, and ultrasonication. Some samples were left untreated to serve as controls. Following pretreatment, rhodamine 123 fluorescent dye was used for staining at concentrations ranging from 10⁻³ to 10⁻⁷ M for 1 to 24 h at pH 6.0, 6.5, or 7.4. Optimal staining occurred at pH 7.4 and concentrations > 10⁻⁵ M over 3 h or longer. Surface images obtained using confocal laser scanning microscopy were similar to those observed by scanning electron microscopy, without the need for sample-altering conventional scanning electron microscope preparation techniques. Subsurface imaging to a depth of approximately 60 μm was achieved using confocal laser microscope techniques. This fluorescence technique offers a useful new alternative for the visualization and quantification of dentin.
In this study, a fluorescence technique was developed for visualization of dentin using confocal laser scanning microscopy (CLSM). Eighteen extracted human teeth were used: 13 showing no clinical signs of caries and 5 with visually apparent decay. Preliminary study: All teeth were horizontally sectioned to approximately 200 μm thickness and pretreated as follows: no pretreatment; vacuum only; ultrasonication only; sodium hypochlorite (NaOCl) only; vacuum and NaOCl; ultrasonication and NaOCl; or vacuum, ultrasonication, and NaOCl. Samples were stained with rhodamine 123 fluorescent dye at a concentration of 10⁻⁵ M in phosphate-buffered saline for 1 to 24 hours. Caries study: Dentin surfaces, some with pre-existing caries, were visualized using CLSM. Most dentin tubules in sound dentin appeared open under CLSM, whereas most dentin tubules in carious dentin appeared closed or narrowed. Surface images obtained using CLSM were similar to those seen by SEM, but additional subsurface imaging was possible with CLSM at depth intervals of 1 μm to a depth of 30–50 μm. This technique shows good potential for non-invasive surface and subsurface imaging of dentin structures.