As an important three-dimensional (3D) display technology, holographic 3D display has great application prospects in virtual and augmented reality. However, generating 3D holograms rapidly with high reconstruction quality remains challenging. Here, we propose a high-speed 3D hologram generation method based on a convolutional neural network (CNN). The CNN is trained in an unsupervised manner, and the trained network can generate a 3D hologram with 1024×1024 resolution across 100 depth planes within 60 ms. The feasibility and effectiveness of the proposed method have been demonstrated by simulation. This method will further expand the application of holographic 3D display in remote education, medical treatment, entertainment, and other fields.
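Unsupervised training of such a CNN typically relies on a differentiable wave-propagation model: the network's output hologram is numerically propagated to each depth plane and compared against the target amplitude there. A minimal NumPy sketch of the angular spectrum propagation step is shown below; the function name and parameters are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex optical field by distance dz (angular spectrum method).

    field: (H, W) complex field; wavelength, dz, dx in meters (dx = pixel pitch).
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Evanescent components (arg < 0) are suppressed.
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

In an unsupervised setup, a loss such as the mean squared error between the propagated amplitude and the target image at each plane can be backpropagated through a differentiable version of this operator (e.g., implemented with a deep-learning framework's FFT), so no ground-truth holograms are needed.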
Photoacoustic tomography is a hybrid imaging technology that combines the high contrast of optical imaging with the deep penetration of acoustic imaging, enabling deep imaging of biological tissue. It has been widely applied in biomedical research, including brain imaging, cancer diagnosis, and vascular imaging. However, constructing a photoacoustic imaging system is costly and complex. In this study, a simulation platform for photoacoustic tomography was established based on the k-Wave toolbox. Using this virtual platform, the generation of the ultrasonic signal is simulated, the signals are collected by the same number of ultrasonic transducers at different center frequencies, and multiple groups of reconstructed images are obtained using the back projection algorithm. Analysis of the relationship between center frequency and reconstruction quality shows that an appropriately chosen center frequency allows the ultrasonic transducers to receive the photoacoustic signal effectively. When the center frequency is too high or too low, the transducers cannot receive the signal effectively, some detailed features of the original image are lost, and the number of artifacts increases. Therefore, selecting an appropriate transducer center frequency can effectively improve the quality of the reconstructed images.
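The back projection step can be illustrated with a naive delay-and-sum reconstruction, a simplified stand-in for the back projection algorithm used above; the function signature and sensor geometry below are hypothetical.

```python
import numpy as np

def backproject(sensor_data, sensor_pos, grid_xy, c, dt):
    """Naive delay-and-sum back projection for 2D photoacoustic tomography.

    sensor_data : (num_sensors, num_samples) recorded pressure traces
    sensor_pos  : (num_sensors, 2) transducer coordinates in meters
    grid_xy     : (H, W, 2) reconstruction-grid pixel coordinates in meters
    c           : speed of sound in m/s; dt : sampling interval in s
    """
    img = np.zeros(grid_xy.shape[:2])
    num_samples = sensor_data.shape[1]
    for trace, pos in zip(sensor_data, sensor_pos):
        # Time of flight from every pixel to this transducer, as a sample index.
        dist = np.linalg.norm(grid_xy - pos, axis=-1)
        idx = np.clip(np.round(dist / (c * dt)).astype(int), 0, num_samples - 1)
        img += trace[idx]
    return img / len(sensor_data)
```

Each transducer's trace is smeared back along circles of constant time of flight; where the circles from many transducers intersect, the initial pressure distribution is reinforced.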
Artifacts in photoacoustic tomography remain a persistent problem. Here, a deep-learning-based physical model method, termed PD net, was proposed to remove artifacts in limited-data photoacoustic tomography. A virtual photoacoustic tomography platform was constructed based on k-Wave, and the dataset required for deep learning was generated with this platform. A U-Net architecture was used to remove artifacts in sparse-view and limited-view photoacoustic tomography. Under the sparse-view condition with 64 ultrasonic transducers, the network improves the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) by 274% and 66.34%, respectively, relative to its input, verifying that the method can remove artifacts in sparse-view photoacoustic tomography. The proposed method reduces artifacts and enhances anatomical contrast when the number of ultrasonic transducers is limited, thereby effectively reducing the manufacturing cost of photoacoustic tomography systems.
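The reported improvement rates are relative gains of the network output over the network input. A small sketch of this metric computation, assuming images normalized to [0, 1] (PSNR shown; SSIM is computed analogously with a library implementation):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def improvement_rate(before, after):
    """Relative improvement in percent, as used for the reported SSIM/PSNR gains."""
    return (after - before) / before * 100
```

For example, an input PSNR of 30 dB raised to 50 dB by the network corresponds to an improvement rate of about 66.7%.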
In traditional Fourier single-pixel imaging (FSPI), compressed sampling is often used to improve acquisition speed. However, images reconstructed after compressed sampling often have low resolution, and their quality is insufficient for practical applications. To address this issue, we propose an imaging method that combines deep learning with single-pixel imaging and can reconstruct high-resolution images from only a small number of samples. In the training phase, the physical process of FSPI was incorporated into network training: a large number of natural images were used to simulate Fourier single-pixel compressed sampling and reconstruction, and the compressed reconstructions were then used to train the network. In the testing phase, the compressed reconstructions of the test dataset were input into the network for refinement. Experimental results show that, compared with traditional compressed reconstruction methods, this method effectively improves the quality of the reconstructed images.
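The simulated compressed sampling step can be sketched as low-frequency masking of the image spectrum, since FSPI acquires Fourier coefficients (in practice via four-step phase-shifting patterns, which this sketch bypasses); the square low-pass mask and function name are illustrative assumptions.

```python
import numpy as np

def fspi_compressed_reconstruct(img, sampling_ratio):
    """Simulate low-frequency Fourier sampling (as in FSPI) and reconstruct
    by inverse FFT of the retained coefficients."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    # Keep a centered low-frequency square covering roughly `sampling_ratio`
    # of the coefficients (assumption: square sampling path).
    k = int(np.sqrt(sampling_ratio * h * w)) // 2
    mask = np.zeros((h, w), dtype=bool)
    cy, cx = h // 2, w // 2
    mask[cy - k:cy + k, cx - k:cx + k] = True
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
```

Pairs of (compressed reconstruction, original image) produced this way form the training data for the enhancement network.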
KEYWORDS: 3D acquisition, 3D image processing, 3D displays, Holography, Holograms, Diffraction, Optical scanning systems, Frequency modulation, 3D image reconstruction
In recent years, three-dimensional (3D) display technology has developed rapidly and is widely used in education, medicine, the military, and other fields. Holographic 3D display is regarded as the ultimate 3D display solution. However, the lack of 3D content remains one of its challenges. Traditional methods use light-field cameras and RGB-D cameras to acquire 3D information of real scenes, which suffer from high system complexity and long acquisition time. Here, we propose a 3D scene acquisition and reconstruction system based on optical axial scanning. First, an electrically tunable lens (ETL) is used for high-speed focus shifting (as fast as 2.5 ms). A CCD camera synchronized with the ETL acquires a multi-focus image sequence of the real scene. Then, the Tenengrad operator is used to extract the in-focus region of each image, from which the 3D image is obtained. Finally, a computer-generated hologram (CGH) is computed with a layer-based diffraction algorithm and loaded onto a spatial light modulator to reconstruct the 3D holographic image. Experimental results verify the feasibility of the system. This method will expand the application of holographic 3D display in education, advertising, entertainment, and other fields.
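The Tenengrad operator scores sharpness as the squared magnitude of the Sobel gradient; selecting, per pixel, the frame with the largest response yields the depth index used to assemble the 3D image. A minimal sketch under these assumptions (the paper's windowing and thresholding details are omitted):

```python
import numpy as np

def tenengrad(img):
    """Tenengrad focus measure: squared Sobel gradient magnitude."""
    p = np.pad(img, 1, mode='edge')
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    return gx ** 2 + gy ** 2

def depth_from_focus(stack):
    """stack: (num_planes, H, W) multi-focus sequence.
    Returns the per-pixel index of the sharpest plane."""
    scores = np.stack([tenengrad(frame) for frame in stack])
    return scores.argmax(axis=0)
```

The resulting depth index map, together with the in-focus intensities, gives the layered scene representation that the layer-based diffraction algorithm propagates plane by plane into the CGH.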
Defocus blur in images often results from inadequate camera settings or depth-of-field restrictions. In recent years, with the emergence and advancement of deep learning, representation-learning-based methods have achieved remarkable success in image defocus enhancement. In this paper, a rapid axial scanning system is proposed for efficient acquisition of defocus-enhancement datasets. A multi-focus image sequence of the same scene is captured at different focus depths and fused into an all-in-focus image that serves as the ground truth, yielding a defocus-enhancement dataset. Multiple such datasets can be obtained with this approach. Experimental results confirm the feasibility and effectiveness of the proposed system.
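The fusion step that builds the all-in-focus ground truth can be sketched as a per-pixel selection driven by a focus measure; local variance is used here as a simple stand-in, since the paper's exact fusion rule is not specified.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fuse_multifocus(stack, win=7):
    """Fuse a multi-focus stack (N, H, W) into one all-in-focus image by
    selecting, per pixel, the frame with the highest local variance
    (assumption: a simple focus measure over a win x win neighborhood)."""
    pad = win // 2
    scores = np.empty_like(stack, dtype=float)
    for i, frame in enumerate(stack):
        p = np.pad(frame, pad, mode='reflect')
        windows = sliding_window_view(p, (win, win))
        scores[i] = windows.var(axis=(-1, -2))
    idx = scores.argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

Pairing each defocused frame of the sequence with the fused result then yields the (input, ground truth) samples of the dataset.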