Knowledge of the positions of ultrasound transducer elements in a photoacoustic computed tomography (PACT) system is essential for reconstructing high-quality images. Errors in these positions, typically due to manufacturing defects, can perceivably degrade the quality of the reconstructed images. To overcome this, we present a calibration method for the ultrasound transducer array geometry based on the times-of-arrival of point-source signals at the array elements. We pose the problem in terms of the speed of sound, the transducer positions, and the point-source positions. We reformulate it as a linear problem in the transducer coordinates by obtaining the other unknowns through surrogate methods. Finally, we estimate the transducer coordinates using the pseudoinverse solution and characterize the estimation error in the coordinates. We use our method to calibrate an experimental PACT system, which improves the contrast-to-noise ratio and resolution of point-source reconstructions. Additionally, we reconstruct images of a healthy human breast and show that the calibrated image reveals vasculature that was previously not visible.
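The linear reformulation described above can be illustrated with a standard multilateration sketch (a minimal illustration under assumed inputs, not necessarily the authors' exact formulation): once a speed of sound c is fixed and the point-source positions are known, each time-of-arrival gives a source-to-element distance, and differencing the squared-distance equations yields a linear system in the element coordinates that the pseudoinverse solves in a least-squares sense.

```python
import numpy as np

def estimate_element_position(sources, toa, c):
    """Illustrative multilateration sketch (assumed setup, not the paper's
    exact method). Each ||x - s_j||^2 = (c t_j)^2 linearizes, after
    subtracting the j = 0 equation, to
        2 (s_j - s_0)^T x = ||s_j||^2 - ||s_0||^2 - d_j^2 + d_0^2.
    sources: (M, 3) known point-source positions; toa: (M,) arrival times."""
    d = c * np.asarray(toa)                       # source-to-element distances
    A = 2.0 * (sources[1:] - sources[0])          # (M-1, 3) system matrix
    b = (np.sum(sources[1:]**2, axis=1) - np.sum(sources[0]**2)
         - d[1:]**2 + d[0]**2)                    # (M-1,) right-hand side
    return np.linalg.pinv(A) @ b                  # pseudoinverse solution

# Synthetic check: recover a known element position from noiseless TOAs.
rng = np.random.default_rng(0)
x_true = np.array([0.01, -0.02, 0.05])            # element position (m)
sources = rng.uniform(-0.05, 0.05, size=(8, 3))   # 8 point-source positions (m)
c = 1500.0                                        # assumed speed of sound (m/s)
toa = np.linalg.norm(sources - x_true, axis=1) / c
print(estimate_element_position(sources, toa, c)) # ~ [0.01, -0.02, 0.05]
```

In practice this solve is repeated per element, and noise in the measured arrival times propagates into the coordinate estimates, which is why characterizing the estimation error matters.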
This work presents hybrid photoacoustic and ultrasound tomography, which enables three-dimensional panoramic imaging of the human body's morphological and angiographic information, providing dual-contrast images of representative parts of the human body (i.e., the head, breast, and hand) with a single system. Through in vivo human application, we present our hybrid tomography system as a powerful tool for high-speed, three-dimensional, dual-contrast imaging of the human body with potential for rapid clinical translation.
To extend the depth of field (DOF) in optical-resolution photoacoustic microscopy (OR-PAM), we propose needle-shaped beam photoacoustic microscopy (NB-PAM), which uses customized diffractive optical elements to produce a beam featuring a well-maintained diameter, a uniform axial intensity distribution, and negligible sidelobes over an extended DOF. The advantage of NB-PAM's improved DOF has been demonstrated both by histology-like imaging of fresh, slide-free organs using a 266 nm laser and by in vivo mouse brain vasculature imaging using a 532 nm laser. Our approach provides new perspectives for slide-free intraoperative pathological imaging and various in vivo organ-level imaging applications.
We report a virtual image refocusing framework for fluorescence microscopy that extends the imaging depth-of-field by ~20-fold and provides improved lateral resolution. This method utilizes point-spread function (PSF) engineering and a cascaded convolutional neural network model, which we term W-Net. We tested this W-Net architecture by imaging 50 nm fluorescent nanobeads at various defocus distances using a double-helix PSF, demonstrating ~20-fold improvement in image depth-of-field over conventional wide-field microscopy. The W-Net architecture can be used to develop deep-learning-based image reconstruction and computational microscopy techniques that utilize engineered PSFs, and can significantly improve the spatial resolution and throughput of fluorescence microscopy.
KEYWORDS: Optical coherence tomography, Image restoration, Neural networks, 3D image reconstruction, Image quality, 3D image processing, Stereoscopy, Spectral resolution, Signal to noise ratio, Imaging systems
We report neural network-based rapid reconstruction of swept-source OCT (SS-OCT) images using undersampled spectral data. We trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using >3-fold undersampled spectral data per A-line, the trained neural network can blindly remove the spatial aliasing artifacts caused by spectral undersampling, presenting a very good match to the images reconstructed using the full spectral data. This method can be integrated with various swept-source or spectral-domain OCT systems to potentially improve the 3D imaging speed without sacrificing the resolution or signal-to-noise ratio of the reconstructed images.
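The aliasing artifact referenced here follows directly from Fourier sampling: an SS-OCT A-line is (up to standard preprocessing) the Fourier transform of the spectral interferogram, so undersampling the spectrum folds deep reflectors into wrong, shallower depths. A minimal numpy sketch of our own (not the paper's pipeline) makes this concrete:

```python
import numpy as np

# Minimal illustration (ours, not the paper's pipeline): undersampling the
# spectral interferogram aliases the depth-resolved A-line.
N = 2048                                   # spectral samples per A-line
k = np.arange(N)                           # normalized wavenumber index
depth_bin = 700                            # reflector depth (FFT bin)
fringe = np.cos(2 * np.pi * depth_bin * k / N)   # ideal interferogram

full = np.abs(np.fft.rfft(fringe))         # fully sampled A-line
sub = np.abs(np.fft.rfft(fringe[::3]))     # 3-fold undersampled A-line

print(np.argmax(full))   # 700: the true depth bin
print(np.argmax(sub))    # ~17: the reflector folds to a wrong, shallow bin
```

The network's task in the paper is to map such aliased reconstructions back to their fully sampled counterparts.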
We report a deep learning-based virtual image refocusing method that utilizes double-helix point-spread-function (DH-PSF) engineering and a cascaded neural network model, termed W-Net. This method can virtually refocus a defocused fluorescence image onto an arbitrary axial plane within the sample volume, enhancing the imaging depth-of-field and lateral resolution at the same time. We demonstrated the efficacy of our method by imaging fluorescent nano-beads at various defocus distances, and also quantified the nano-particle localization performance achieved with our virtually-refocused images, demonstrating ~20-fold improvement in image depth-of-field over wide-field microscopy, enabled by the combination of DH-PSF and W-Net inference.
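For readers unfamiliar with cascaded models, the two-stage idea behind a "W"-shaped network can be sketched as two encoder-decoder stages in series (a generic PyTorch stand-in with hypothetical layer sizes; the actual W-Net architecture in these papers is deeper and differs in detail):

```python
import torch
import torch.nn as nn

class TinyStage(nn.Module):
    """One schematic encoder-decoder ('U') stage; purely illustrative."""
    def __init__(self, ch_in=1, ch_out=1, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(ch_in, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())  # downsample
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 2, stride=2), nn.ReLU(),    # upsample
            nn.Conv2d(ch, ch_out, 3, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

class CascadedNet(nn.Module):
    """Two 'U' stages back to back (hence a 'W'): the first maps the
    defocused input toward an in-focus estimate, the second refines it."""
    def __init__(self):
        super().__init__()
        self.stage1, self.stage2 = TinyStage(), TinyStage()
    def forward(self, x):
        return self.stage2(self.stage1(x))

out = CascadedNet()(torch.randn(1, 1, 64, 64))  # -> torch.Size([1, 1, 64, 64])
```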
We report a recurrent neural network (RNN)-based cross-modality image inference framework, termed Recurrent-MZ+, that explicitly incorporates two or three 2D fluorescence images, acquired at different axial planes, to rapidly reconstruct fluorescence images at arbitrary axial positions within the sample volume, matching the 3D image of the same sample acquired with a confocal scanning microscope. We demonstrated the efficacy of Recurrent-MZ+ on transgenic C. elegans samples; using three wide-field fluorescence images as input, the sample volume reconstructed by Recurrent-MZ+ mitigates the deformations caused by the anisotropic point-spread-function of wide-field microscopy and matches the ground-truth confocal image stack of the sample.
We demonstrate a deep learning-based offline autofocusing method, termed Deep-R, to rapidly and blindly autofocus a single-shot microscopy image captured at an arbitrary out-of-focus plane. Deep-R is experimentally validated using various tissue sections that were imaged with fluorescence and brightfield microscopes. Furthermore, snapshot autofocusing under different defocusing scenarios is demonstrated, including uniform axial defocusing, sample tilting, and cylindrical and spherical distortions within the field-of-view. Compared with other online autofocusing algorithms, Deep-R is significantly faster while achieving comparable image quality. The Deep-R framework will enable high-throughput microscopic imaging over large fields-of-view, improving the overall imaging throughput while also reducing the photon dose on the sample.
We report a deep learning-based volumetric imaging framework that uses sparse 2D scans captured by standard wide-field fluorescence microscopy at arbitrary axial positions within the sample. Through the design of a recurrent neural network, the information from different input planes is blended and virtually propagated in space to rapidly reconstruct the sample volume over an extended axial range. We validated this deep-learning-based volumetric imaging framework using C. elegans and nanobead samples, demonstrating a 30-fold reduction in the number of required scans. This versatile and rapid volumetric imaging technique reduces the photon dose on the sample and improves the temporal resolution.
KEYWORDS: Holograms, Holography, Microscopy, 3D modeling, Signal to noise ratio, 3D image reconstruction, Stereoscopy, Microscopes, Speckle, Time metrology
Holographic microscopy encodes the 3D information of a sample into a single hologram. However, holographic images are in general inferior to bright-field microscopy images in terms of contrast and signal-to-noise ratio, due to twin-image artifacts, speckle and out-of-plane interference. The contrast and noise problem of holography can be mitigated using iterative algorithms, but at the cost of additional measurements and time. Here, we present a deep-learning-based cross-modality imaging method to reconstruct a single hologram into volumetric images of a sample with bright-field contrast and SNR, merging the snapshot 3D imaging capability of holography with the image quality of bright-field microscopy.
KEYWORDS: Digital holography, Holography, Microscopy, 3D image reconstruction, Digital imaging, Holograms, Digital recording, Speckle, 3D image processing, Wave propagation interference
We demonstrate a deep learning-based hologram reconstruction method that achieves bright-field microscopy image contrast in digital holographic microscopy (DHM), which we term "bright-field holography". In bright-field holography, a generative adversarial network was trained to transform a complex-valued DHM reconstruction (obtained without phase retrieval) into an equivalent image captured by a high-NA bright-field microscope, corresponding to the same sample plane. As a proof of concept, we demonstrated snapshot imaging of pollen samples distributed in 3D, digitally matching the contrast and shallow depth-of-field advantages of bright-field microscopy; this enabled us to digitally image a sample volume using bright-field holography without any physical axial scanning.
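The complex-valued DHM reconstruction "obtained without phase retrieval" that serves as the network input is conventionally computed by angular-spectrum backpropagation of the recorded hologram; a minimal sketch with illustrative parameter values follows (our summary of the standard method, not code from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Free-space propagation of a complex field by dz (meters) using the
    angular spectrum method; negative dz backpropagates toward the sample.
    field: 2D complex array sampled at pixel pitch dx (meters)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)               # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # propagating waves if positive
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)        # evanescent terms suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative use (assumed values): backpropagate a hologram by 300 um.
hologram = np.random.rand(512, 512).astype(complex)  # stand-in measurement
obj = angular_spectrum_propagate(hologram, dz=-300e-6,
                                 wavelength=532e-9, dx=1.12e-6)
```

Because no phase retrieval is applied, this backpropagated field carries twin-image and interference artifacts, which the trained network then maps to bright-field-equivalent contrast.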
We report a highly sensitive, high-throughput, and cost-effective bacteria identification system that continuously captures and reconstructs holographic images of an agar plate and analyzes the time-lapsed images with deep learning models for early detection of colonies. The performance of our system was confirmed by detection and classification of Escherichia coli, Enterobacter aerogenes, and Klebsiella pneumoniae in water samples. We detected 90% of the bacterial colonies and their growth within 7-10 h (>95% within 12 h) with ~100% precision, correctly identified the corresponding species within 7.6-12 h with 80% accuracy, and achieved time savings of >12 h compared to the gold-standard EPA-approved methods.
We report a digital image refocusing framework in fluorescence microscopy (termed “Deep-Z”), where a deep neural network is trained to virtually-refocus a 2D fluorescence image onto user-defined 3D surfaces. Using Deep-Z, we demonstrated 3D reconstruction of C. elegans neuronal activity from a 2D movie, digitally increasing the depth-of-field by 20-fold. We also demonstrated digital correction of sample drift, tilt and other image aberrations, all performed after the acquisition of a single image. Deep-Z also permits cross-modality virtual refocusing, where a single 2D wide-field image can be digitally refocused to match a confocal microscopy image captured at a different sample plane.
Optical visualization of Alzheimer's disease (AD) pathological changes is crucial for exploring disease mechanisms and treatments. We developed cryo-micro-optical sectioning tomography (cryo-MOST) to acquire brain-wide maps of senile plaques. Using intrinsic fluorescence emission intensified at ultra-low temperature, we achieved senile plaque visualization at micron-level resolution. A whole-brain coronal distribution of senile plaques in a transgenic mouse was successfully acquired without any exogenous dye. We believe cryo-MOST could be a useful tool for understanding neurodegenerative disease mechanisms and evaluating drug efficacy.