Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891301 (2013) https://doi.org/10.1117/12.2044136
This PDF file contains the front matter associated with SPIE Proceedings Volume 8913 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891302 (2013) https://doi.org/10.1117/12.2030802
A long chain of series-coupled resonant cavities is well suited to carrying slow optical waves and can serve as a basic device for optical storage technology. The micro-ring resonator is one such long-chain structure and is considered a basic component of future optical integrated circuits and optical computers. Its discrete energy levels have the potential to distinguish digital optical data, and its optical delay characteristics make it possible to store information for some time. The advantage of such a device is that it can realize optical storage in a small geometrical dimension and can exploit mature semiconductor manufacturing capability to lower design and fabrication costs. Many experimental results have shown that material and geometrical coefficients are critical for such an optical delay device. A new theoretical method is needed to calculate the periodic energy transfer and time-delay characteristics so that they can be compared with experimental results. A Bloch FDTD method, based on optical Bloch energy-band theory, is presented for the analysis of this new optical device, and the energy-band characteristics of a periodic micro-ring optical waveguide device are discussed with this analytical method. This precise calculation method can serve as a useful tool for designing the structure of such resonant cavities to achieve the desired slow-wave transfer performance.
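The slow-wave behavior analyzed above with Bloch FDTD can be previewed with a much simpler model: the transfer function of a single all-pass micro-ring, the unit cell of the long-chain structure. The sketch below (plain NumPy; the coupling, loss, radius, and group-index values are illustrative assumptions, not parameters from the paper) computes the transmission and group delay across one free spectral range.

```python
import numpy as np

# Single all-pass micro-ring resonator: transmission and group delay.
# Illustrative parameters, not values from the paper.
r = 0.95                       # self-coupling coefficient of the coupler
a = 0.99                       # round-trip amplitude transmission (loss)
L = 2 * np.pi * 10e-6          # ring circumference for a 10 um radius
n_g = 4.0                      # waveguide group index
c = 3e8

# Round-trip phase detuning sampled across one free spectral range
phi = np.linspace(-np.pi, np.pi, 4001)
t = (r - a * np.exp(1j * phi)) / (1 - r * a * np.exp(1j * phi))

Phi = np.unwrap(np.angle(t))
# With the exp(-i*omega*t) convention, group delay = +dPhi/d(omega);
# phi = omega * n_g * L / c, so d/d(omega) = (n_g * L / c) * d/d(phi)
tau_g = np.gradient(Phi, phi) * (n_g * L / c)

print("peak group delay (ps):", tau_g.max() * 1e12)
```

At resonance the delay peaks well above the bare round-trip time n_g·L/c, which is the slow-light effect the chained cavities accumulate.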
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891303 (2013) https://doi.org/10.1117/12.2031002
In this paper, a novel holographic 3D imaging system based on compressive sensing is proposed. A Mach-Zehnder interferometer is used for hologram formation, and a compressive sensing (CS) approach is applied to the holography acquisition process, combining digital holography with CS theory to obtain the hologram. The system uses a digital micro-mirror device (DMD) array to optically compress the object to be acquired and a single-photon detector to obtain the CS measurements. Holographic 3D images are reconstructed numerically via CS signal-recovery algorithms and hologram-reconstruction techniques. Computer simulations demonstrate the feasibility and efficiency of the proposed imaging system.
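The acquisition chain above can be sketched numerically with the optics abstracted away: each DMD pattern becomes one row of a 0/1 measurement matrix, each single-photon detector reading becomes one inner product, and recovery uses ISTA (iterative soft thresholding) as a stand-in for whichever CS recovery algorithm the system employs. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 120, 8               # signal length, measurements, sparsity

# Sparse "object" to acquire
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.integers(0, 2, (m, n)).astype(float)     # DMD-style 0/1 patterns
A = (A - 0.5) / np.sqrt(m)                       # zero-mean, scaled
y = A @ x_true                                   # detector measurements

# ISTA: x <- soft(x + t*A^T(y - A x), t*lam)
t_step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.02
x = np.zeros(n)
for _ in range(3000):
    z = x + t_step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - t_step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", err)
```

With roughly half the Nyquist number of measurements, the sparse signal is recovered to within a small relative error, which is the core economy the single-detector architecture exploits.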
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891304 (2013) https://doi.org/10.1117/12.2031661
Image matching is a core research topic of digital photogrammetry and computer vision. SIFT (Scale-Invariant Feature Transform) is a feature-matching algorithm based on local invariant features, proposed by Lowe in 1999. SIFT features are invariant to image rotation and scaling and partially invariant to changes in 3D camera viewpoint and illumination. They are well localized in both the spatial and frequency domains, reducing the probability of disruption by occlusion, clutter, or noise, so the algorithm is widely used in image matching and stereo-based 3D reconstruction. Traditional SIFT implementations and optimizations generally target the CPU. Because of the large number of extracted features (even a few objects can yield many SIFT features), the high dimensionality of the feature vector (usually 128 dimensions), and the complexity of the algorithm, SIFT on the CPU is slow and hard to run in real time. The programmable graphics processing unit (GPU) is now commonly used as a dedicated device for image processing. Recent development experience shows that a high-performance GPU can achieve roughly ten times the single-precision floating-point throughput of a contemporaneous high-performance desktop CPU, with memory bandwidth up to five times that of the desktop platform; for the same computing power, a GPU-based system can cost less and consume less power than a CPU-based one. Given the parallel nature of graphics rendering and image processing, GPU acceleration has become an efficient solution for algorithms with real-time requirements. In this paper we implement the algorithm in the OpenGL shading language and compare the results with a CPU implementation. Experiments demonstrate that the efficiency of the GPU-based SIFT algorithm is significantly improved.
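The first SIFT stage, difference-of-Gaussians (DoG) extrema detection, is exactly the kind of data-parallel per-pixel work that maps well to shaders. The CPU sketch below illustrates that stage in plain NumPy (sigma values and thresholds are illustrative; octaves, contrast refinement, and descriptors are omitted).

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    """Local extrema of the DoG stack over a 3x3x3 neighborhood."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    d = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    keypoints = []
    for s in range(1, d.shape[0] - 1):
        for y in range(1, d.shape[1] - 1):
            for x in range(1, d.shape[2] - 1):
                patch = d[s-1:s+2, y-1:y+2, x-1:x+2]
                v = d[s, y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((x, y, s))
    return keypoints

# A single bright blob should yield a keypoint near its center
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * 2.0**2))
kps = dog_keypoints(img)
print("keypoints:", kps)
```

On the GPU, the triple loop disappears: each (x, y) of each DoG level becomes one fragment-shader invocation, which is where the reported speedup comes from.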
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891305 (2013) https://doi.org/10.1117/12.2031696
Holographic data storage is a promising candidate for the next generation of storage equipment. However, conventional technologies (so-called 2-axis holography) still have essential issues blocking commercialization. In this paper we introduce collinear holography, which can produce a small, practical data storage system more easily than conventional 2-axis holography. In this technology the information and reference beams are displayed co-axially by the same SLM. With this unique configuration the optical pickup can be placed on one side of the recording medium. The special media structure uses a pre-formatted reflective layer for the focus/tracking servo and for reading address information, and a dichroic mirror interlayer for detecting holographically recorded information without interference from the preformatted information. A 2-dimensional digital page data format is used, and shift multiplexing is employed to increase recording density.
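A way to picture the co-axial arrangement is a single SLM frame carrying both beams: a binary data page in the central region and a ring-shaped reference pattern around it. The sketch below builds such a frame; the geometry and sizes are hypothetical illustrations, not the actual media or pickup specification.

```python
import numpy as np

def collinear_slm_frame(size=256, page=128, r_in=100, r_out=120, seed=0):
    """One SLM frame: central data page plus annular reference pattern."""
    rng = np.random.default_rng(seed)
    frame = np.zeros((size, size))
    c = size // 2
    # central 2D digital page (random bits stand in for encoded user data)
    half = page // 2
    frame[c-half:c+half, c-half:c+half] = rng.integers(0, 2, (page, page))
    # annular reference pattern surrounding the page
    yy, xx = np.mgrid[0:size, 0:size]
    rr = np.hypot(yy - c, xx - c)
    ring = (rr >= r_in) & (rr <= r_out)
    frame[ring] = rng.integers(0, 2, int(ring.sum()))
    return frame

frame = collinear_slm_frame()
print("frame on-pixels:", int(frame.sum()))
```

Because both patterns travel the same optical axis, one objective lens and a pickup on a single side of the medium suffice, which is the packaging advantage the abstract describes.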
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891306 (2013) https://doi.org/10.1117/12.2031814
Holographic display is a true three-dimensional (3D) display technology that presents all depth cues without special glasses. With this technology we have developed a city photonics map system. The system involves three key steps: acquiring raw 3D point-cloud data, computing computer-generated holograms (CGH), and digital holographic display of the 3D data. In the first step, structure from motion (SfM) is applied. Then a phase-CGH algorithm with inverse diffraction is proposed and implemented; unlike ordinary CGH, the diffraction fringes of the 3D reconstructed terrain data are obtained from the diffraction results by reverse propagation. Finally, the 3D hologram is displayed on the experimental platform under development and printed on a holographic silver halide plate.
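The inverse-diffraction idea can be sketched with the angular-spectrum method: propagate the target field backward to the hologram plane and keep only the phase. The wavelength, pixel pitch, and distance below are illustrative assumptions, not parameters of the city-photonics-map system.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z (z < 0 = backward)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent cut off
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength = 532e-9     # green laser, assumed
pitch = 8e-6            # SLM pixel pitch, assumed
z = 0.05                # 5 cm propagation distance

# Target "terrain" amplitude in the reconstruction plane
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0

# Backward-propagate the target and keep the phase -> phase-only CGH
back = angular_spectrum(target, wavelength, pitch, -z)
cgh = np.exp(1j * np.angle(back))

# Forward-propagate the phase hologram to check the reconstruction
recon = np.abs(angular_spectrum(cgh, wavelength, pitch, z))
print("mean amplitude inside vs outside target:",
      recon[96:160, 96:160].mean(), recon[target == 0].mean())
```

A single backward pass already concentrates light in the target region; iterative refinement (e.g. Gerchberg-Saxton-style loops) would sharpen the phase-only reconstruction further.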
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891307 (2013) https://doi.org/10.1117/12.2031887
In order to precisely locate and track space targets and construct their kinematic trajectories, a space-target detection algorithm for star images based on motion information is investigated in this paper. First, the target's SNR is increased by spatial energy accumulation, so that faint targets can be easily segmented from the background. Then, exploiting the invariance of the stars' relative positions across two neighboring frames, control points in the star images are extracted, and the global motion parameters are calculated from successive control points. Next, according to the positions of star points relative to the control points, stars and potential targets are classified. Targets are then coarsely detected with a 3-frame nearest-neighbor correlation method, and false targets are filtered out with a multi-frame back-and-forth search. Finally, all targets in the star-image sequence are numbered and their trajectories constructed. Experimental results on four groups of real photographed star images are as follows: all target trajectories were constructed; the lowest average SNR detected was 2.99; the mean motion velocities ranged from 2.47 to 12.72 pixels/frame. The algorithm satisfies space-target detection requirements, including high detection probability, few false alarms, and high trajectory-locating accuracy.
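The 3-frame nearest-neighbor correlation step can be illustrated with a toy example: in registered neighboring frames, stars keep (nearly) the same position, while a target moves consistently, so a point whose displacement to frame 0 matches its displacement to frame 2 is flagged as a candidate. The coordinates, velocity, and thresholds below are illustrative.

```python
import numpy as np

# Static "stars" on a grid, plus one moving target
stars = np.array([(x, y) for x in range(50, 512, 100)
                          for y in range(50, 512, 100)], dtype=float)
target0 = np.array([100.0, 200.0])
v = np.array([3.0, -2.0])                      # target velocity, px/frame
f0, f1, f2 = (np.vstack([stars, target0 + i * v]) for i in range(3))

def detect_moving(f0, f1, f2, v_min=1.0, v_max=15.0, tol=0.5):
    """Points of frame 1 with consistent nonzero motion across frames 0-2."""
    hits = []
    for p in f1:
        nn0 = f0[np.argmin(np.linalg.norm(f0 - p, axis=1))]
        nn2 = f2[np.argmin(np.linalg.norm(f2 - p, axis=1))]
        v1, v2 = p - nn0, nn2 - p               # frame-to-frame displacements
        if v_min <= np.linalg.norm(v1) <= v_max and np.linalg.norm(v1 - v2) < tol:
            hits.append(p)
    return np.array(hits)

hits = detect_moving(f0, f1, f2)
print("candidate targets in frame 1:", hits)
```

Stars fail the v_min test (their nearest neighbor is their own unchanged position), while the target's matched displacements pass; the multi-frame back-and-forth search then prunes any remaining false candidates.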
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891308 (2013) https://doi.org/10.1117/12.2032302
Nowadays, big-data centers still rely on hard drives. However, there is strong evidence that these surface-storage technologies are approaching fundamental limits that may be difficult to overcome, as ever-smaller bits become less thermally stable and harder to access. An intriguing approach for next-generation data storage is to use light to store information throughout the three-dimensional volume of a material. In this paper we review the major achievements in multilayer optical storage with the pre-layered scheme since 2003 and discuss the key technical details. The authors conclude that pre-layered multilayer optical storage techniques could be strong candidates for the big-data center in the years to come.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891309 (2013) https://doi.org/10.1117/12.2032607
An arbitrary view synthesis method from 2D-plus-depth images for real-time auto-stereoscopic display is presented. Traditional methods use depth-image-based rendering (DIBR), a process of synthesizing "virtual" views of a scene from still or moving images and associated per-pixel depth information: all virtual view images are generated first, and the final stereo image is then synthesized. DIBR greatly decreases the number of reference images and is flexible and efficient because depth images are used; however, it causes problems such as holes in the rendered image and depth discontinuities on object surfaces in the virtual image plane. Here, reversed disparity-shift pixel rendering is used to generate the stereo image directly, so the target image contains no holes. To avoid duplicated calculation and to match any specific three-dimensional display, a selection table is designed to pick the appropriate virtual viewpoints for auto-stereoscopic display. Only the sub-pixels of the selected virtual viewpoints are calculated, so the amount of computation is independent of the number of virtual viewpoints. In addition, 3D image warping converts depth information into parallax between virtual viewpoints, and the viewer can adjust the zero-parallax-setting (ZPS) plane and change the parallax conveniently to suit personal preferences. The proposed method is implemented with OpenGL and demonstrated on a laptop with a 2.3 GHz Intel Core i5 CPU and an NVIDIA GeForce GT540M GPU, achieving a frame rate of 30 frames per second with 4096×2340 video. High synthesis efficiency and a good stereoscopic sense are obtained. The presented method meets the requirements of real-time ultra-HD super multi-view auto-stereoscopic display.
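The hole-free property of reversed (backward) disparity-shift rendering can be sketched directly: each pixel of the virtual view fetches from the reference image at a horizontal offset derived from depth, so every target pixel is filled. Using the target pixel's own depth for the offset is a common approximation; the depth-to-disparity gain and ZPS depth below are illustrative assumptions.

```python
import numpy as np

def synthesize_view(image, depth, view_offset, zps=0.5, gain=8.0):
    """Backward-warp one virtual view; depth in [0, 1], 1 = near."""
    h, w = depth.shape
    out = np.empty_like(image)
    xs = np.arange(w)
    for y in range(h):
        # signed disparity relative to the zero-parallax-setting plane
        disp = gain * view_offset * (depth[y] - zps)
        src = np.clip(np.round(xs + disp).astype(int), 0, w - 1)
        out[y] = image[y, src]          # every target pixel gets a value
    return out

# Horizontal-gradient test image over a two-plane depth map
img = np.tile(np.arange(64, dtype=float), (32, 1))
dep = np.full((32, 64), 0.25)
dep[:, 32:] = 0.75
left = synthesize_view(img, dep, view_offset=-1.0)
right = synthesize_view(img, dep, view_offset=+1.0)
print(left[0, :5], right[0, :5])
```

Raising or lowering `zps` shifts which depth sits at zero parallax, which is exactly the viewer-adjustable ZPS control described above.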
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130A (2013) https://doi.org/10.1117/12.2032801
Noise seriously degrades the quality of reconstructed three-dimensional holographic images. A new method is proposed that improves the Burch encoding and combines it with three-step phase shifting to remove the noise; the reconstructed images have high contrast and resolution. The three-dimensional reconstructions are compared with those of median filtering and of a 20-fold intensity-superposition method, and the performance parameters of the three methods are analyzed. Experimental results show that the zero-order spot, the conjugate image, and speckle noise are suppressed effectively, and the quality of the reconstructed image is noticeably improved.
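The three-step phase-shifting arithmetic that removes the zero-order and conjugate terms is a closed-form calculation: three interferograms with reference-phase shifts of 0, 2π/3, and 4π/3 determine the object phase. A minimal numerical check, with a synthetic phase map:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Recover phase from interferograms shifted by 0, 2pi/3, 4pi/3:
    tan(phi) = sqrt(3)(I3 - I2) / (2 I1 - I2 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Ground-truth phase map with |phi| < pi so arctan2 recovers it exactly
x = np.linspace(-np.pi, np.pi, 128)
phi = np.outer(np.sin(x), np.cos(x))
a, b = 2.0, 0.7                      # background and modulation amplitudes
frames = [a + b * np.cos(phi + d) for d in (0.0, 2*np.pi/3, 4*np.pi/3)]

phi_rec = three_step_phase(*frames)
print("max abs phase error:", np.abs(phi_rec - phi).max())
```

Because the background `a` and modulation `b` cancel out of the quotient, the zero-order term is eliminated algebraically, independent of any filtering.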
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130B (2013) https://doi.org/10.1117/12.2032913
The LED display integrates microelectronics, computer technology, and information processing, and has become the most advantageous new-generation display medium thanks to its bright colors, high dynamic range, high brightness, and long operating life. LED displays are widely used in banks, securities trading, highway signs, airports, advertising, and so on. By display color, LED screens are divided into monochrome, dual-color, and full-color displays.

With the diversification of LED display colors and the steady rise in display requirements, LED drive circuits and control technology have developed correspondingly. The earliest monochrome screens displayed only Chinese characters, simple symbols, or digits, so the requirements on the controller were relatively low. With the wide use of dual-color LED displays, controller performance had to increase. In recent years, the full-color LED display, with the three primary colors red, green, and blue and grayscale rendering, has attracted great attention for its rich and colorful display effect. Every true-color pixel comprises red, green, and blue sub-pixels, using spatial color mixing to realize multiple colors.

A dynamic-scanning control system for a full-color LED display is designed around the low-power MSP430 microcontroller. The grayscale control combines pulse-width modulation (PWM) with a dynamic display-scan scheme. While meeting 256-level grayscale display requirements, this method improves the efficiency of the LED devices and enhances the tonal gradation of the image. The drive circuit uses a 1/8-scanning constant-current drive mode and makes full use of the microcontroller's I/O resources for control. The system supports text and picture display on a 256-level-grayscale full-color LED screen.
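The grayscale and scanning scheme can be sketched in miniature: PWM gives each LED an on-time proportional to its 8-bit gray level, and 1/8 dynamic scanning multiplexes the rows so only one row group is driven at a time. Tick counts, row counts, and the grouping layout are illustrative assumptions, not the actual MSP430 firmware.

```python
def pwm_frame(gray, ticks=255):
    """On/off drive states over one refresh period for an 8-bit gray level."""
    return [1 if t < gray else 0 for t in range(ticks)]

def scan_rows(n_rows=32, scan=8):
    """Yield the row group enabled at each scan step (1/8 duty multiplexing)."""
    for step in range(scan):
        yield [r for r in range(n_rows) if r % scan == step]

duty = sum(pwm_frame(128)) / 255
groups = list(scan_rows())
print("duty for gray 128:", duty, "| rows in group 0:", groups[0])
```

Each sub-pixel's perceived brightness is its PWM duty further divided by the 1/8 scan duty, which is why constant-current drivers are sized for the peak, not the average, current.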
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130C (2013) https://doi.org/10.1117/12.2032916
Digital holography is a coherent imaging method and is inevitably affected by many factors during recording. One dominant problem is speckle noise, which is essentially nonlinear multiplicative noise correlated with the signal and is therefore more difficult to remove than additive noise. This noise pollution lowers the resolution of the reconstructed image. A new approach to suppressing speckle noise in digital holograms is presented, combining a Canny filtering step with wavelet-threshold denoising: the Canny filter extracts the edge details, and the wavelet transform performs the denoising. To suppress speckle effectively while retaining as much image detail as possible, the Neyman-Pearson (N-P) criterion is introduced to estimate the wavelet coefficients at every scale, and an improved threshold function with a smoother curve is proposed. The reconstructed image is obtained by merging the denoised image with the edge details. Experimental results and performance parameters of the proposed algorithm are discussed and compared with other methods, showing that the presented approach can effectively eliminate speckle noise while simultaneously retaining useful signal and edge information.
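The wavelet-threshold half of the method can be sketched with a hand-rolled single-level Haar transform; the paper's N-P-based estimate and improved threshold function are replaced here by plain soft thresholding with the universal threshold, as an illustrative stand-in.

```python
import numpy as np

def haar2(img):
    """Single-level 2D Haar transform (average/difference, normalized by 2)."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(2)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + 0.05 * rng.normal(size=clean.shape)

ll, lh, hl, hh = haar2(noisy)
sigma = np.median(np.abs(hh)) / 0.6745          # coefficient noise from HH band
t = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
den = ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rmse_noisy = np.sqrt(((noisy - clean) ** 2).mean())
rmse_den = np.sqrt(((den - clean) ** 2).mean())
print("noisy RMSE:", rmse_noisy, "denoised RMSE:", rmse_den)
```

The paper's contribution sits precisely where this sketch is crudest: choosing the per-scale threshold (via the N-P criterion) and the shape of the shrinkage curve, then restoring the Canny-extracted edges the thresholding would otherwise blunt.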
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130D (2013) https://doi.org/10.1117/12.2032929
Three-dimensional (3D) display offers the viewer realism and is a candidate for next-generation imaging; glasses-free 3D display has become a research focus, and the displays of mobile devices are evolving toward 3D. Mobile devices limit the viewing range because of mass-production constraints and relatively fixed algorithms: a good viewing experience is achieved only within a certain area, which is inconvenient for the viewer. A distance-adaptive three-dimensional display for mobile devices is presented. We analyzed the relationship between viewing distance and the number of pixels per viewpoint. Based on the viewing distance detected by a sensor, the proposed method automatically adjusts the pixels per viewpoint to accommodate different viewing distances on mobile portable devices. The method thus adapts to the distance between viewer and device, improves the experience of viewing 3D images, and expands the viewing area. The crosstalk and normalized brightness of the restructured 3D picture are measured at different distances. Experimental results show that the algorithm, with a matched parallax barrier, achieves a good 3D viewing experience at different distances.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130E (2013) https://doi.org/10.1117/12.2032942
This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and a Field-Programmable Gate Array (FPGA). The system encodes, records, decodes, and replays the Video Graphics Array (VGA) signals displayed on a monitor during aircraft and ship navigation. In this architecture the DSP is the main processor, handling the large amount of complex computation in digital signal processing, while the FPGA is a coprocessor that preprocesses the video signals and implements the system's logic control. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) provides a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM), and the First-In-First-Out (FIFO) buffer; this transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which offers high-speed data access without relying on a computer. The main functions of the FPGA logic are described, and screenshots of the behavioral simulation are provided. In the DSP software, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, freeing the CPU's high computing performance and saving its time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying, and ways of achieving high code performance are briefly presented. The data-processing capability of the system is good, and the smoothness of the replayed video is acceptable. With its design flexibility and reliable operation, the DSP-and-FPGA-based video recording and replaying system has considerable prospects in after-the-event analysis, simulated training, and so forth.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130F (2013) https://doi.org/10.1117/12.2032960
The Vander Lugt correlator [1] plays an important role in optical pattern recognition thanks to its accurate positioning and high signal-to-noise ratio. An ideal Vander Lugt correlator should output a strong, sharp correlation peak for the true target. Among existing spatial light modulators [2], Liquid Crystal on Silicon (LCOS) has been the most competitive candidate for the matched filter owing to its continuous phase modulation. Since distortions of the target to be identified (rotation, scaling, and perspective changes) can severely impact the correlation recognition results, we present a modified Vander Lugt correlator based on LCOS that applies an iterative algorithm to the filter design, so that the correlator is invariant to these distortions while maintaining good performance. Numerical simulation demonstrates that the filter obtains similar recognition results for all the training images, and experiment shows that the modified correlator achieves 180° rotation tolerance, significantly improving the recognition efficiency of the correlator.
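The frequency-plane matched filtering a Vander Lugt correlator performs optically can be sketched with FFTs: the correlation plane is the inverse transform of the scene spectrum multiplied by the conjugate template spectrum, and the true target produces a sharp peak at its location. The iterative distortion-tolerant filter design of the paper is not reproduced here; sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
scene = rng.normal(0, 0.05, (128, 128))        # background clutter
template = np.zeros((16, 16))
template[4:12, 4:12] = 1.0                     # simple square "target"
scene[70:86, 40:56] += template                # embed target at (70, 40)

# Zero-pad the template to scene size (the filter-plane mask)
t_full = np.zeros_like(scene)
t_full[:16, :16] = template

# Matched filter: IFFT( FFT(scene) * conj(FFT(template)) )
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(t_full))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at:", peak)
```

In the optical system this product is formed physically by the LCOS-displayed filter in the Fourier plane; the digital version is useful for designing and evaluating that filter before it is written to the modulator.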
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130G (2013) https://doi.org/10.1117/12.2033041
With the progress of 3D technology, enormous computing capacity is required for real-time autostereoscopic display. Because sub-pixel allocation is complicated, masks providing pre-arranged sub-pixels are fabricated to reduce real-time computation, but the binary mask has inherent drawbacks. To solve these problems, weighted masks are used for display based on partial sub-pixels. The corresponding computation, however, grows tremendously and becomes unbearable for a CPU. To improve calculation speed, a Graphics Processing Unit (GPU) with parallel computing ability is adopted. The principle of partial sub-pixels is presented, and the texture array of Direct3D 10 is used to increase the number of addressable textures. For an HD display with multiple viewpoints, even a low-end GPU permits fluent real-time display, while the performance of a high-end CPU is not acceptable. With texture arrays, Direct3D 10 can be twice, and sometimes three times, as fast as Direct3D 9. The proposed method has several distinguishing features, such as good portability, low overhead, and good stability. The GPU display system could also be used for future Ultra-HD autostereoscopic displays.
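The per-sub-pixel viewpoint assignment that the weighted masks encode can be sketched as follows: for a slanted-lens autostereoscopic panel, each RGB sub-pixel column maps to a fractional view index, and a weighted mask splits the sub-pixel between its two nearest views (the "partial sub-pixel"). The slant and view count are illustrative assumptions, not the panel's actual optics.

```python
import numpy as np

def partial_subpixel_masks(width, height, n_views=8, slant=1/6):
    """Per-view weight masks of shape (n_views, height, 3*width)."""
    sub_x = np.arange(3 * width)                         # sub-pixel columns
    y = np.arange(height)[:, None]
    # fractional view index along the slanted lenticular axis
    view = (sub_x[None, :] - 3 * slant * y) % n_views
    masks = np.zeros((n_views, height, 3 * width))
    for v in range(n_views):
        d = np.abs(view - v)
        d = np.minimum(d, n_views - d)       # circular distance between views
        masks[v] = np.maximum(1.0 - d, 0.0)  # linear split to 2 nearest views
    return masks

masks = partial_subpixel_masks(64, 48)
print("masks shape:", masks.shape)
```

The final screen image is then a per-sub-pixel weighted sum of the view images, a pure gather operation that maps naturally onto one GPU texture fetch per view, which is where the Direct3D 10 texture array pays off.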
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130H (2013) https://doi.org/10.1117/12.2033043
Reconstruction of three-dimensional (3D) scenes is an active research topic in computer vision and 3D display, and modeling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images. The system requires only a sequence of images taken by cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth-map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point-cloud splicing, and surface reconstruction, and the 3D-reconstruction procedure is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, pairwise correspondence is established with the Scale-Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed against its predecessor, the interest points corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired with a non-local cost-aggregation stereo matching method, and a point-cloud sequence is produced from the scene depths; the point-cloud model is assembled from the point-cloud sequence using the external camera parameters. The point-cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer-graphics visualization systems. Finally, texture is mapped onto the wire-frame model, which can also be used for 3D display. Experimental results show that we can reconstruct a 3D point-cloud model more quickly and efficiently than other methods.
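The geometric core of such a pipeline, lifting a matched point pair to 3D once the camera's intrinsic and external parameters are known, is linear (DLT) triangulation. The camera matrices below are made up for the illustration; they are not the paper's calibration results.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # null vector = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical intrinsics and a simple two-camera rig along the x axis
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.3, -0.1, 2.5])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print("recovered point:", X_hat)
```

With noiseless correspondences the recovery is exact up to numerical precision; with real SIFT matches, the SVD solution minimizes algebraic error and is typically refined by reprojection-error minimization.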
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130I (2013) https://doi.org/10.1117/12.2033045
An efficient stereo matching algorithm for computing a stable disparity map sequence from video footage is presented.
The algorithm exploits both the spatial and the temporal consistency of the stereo sequences, and high-quality disparity
maps are achieved. Weber local descriptors (WLD) are extracted for each color channel of the current stereo pair, and the
raw matching costs between the images are initialized from the WLD. The orthogonal integral image (OII) technique,
together with a minimum spanning tree (MST), is used to aggregate similar pixels and adaptively preserve disparity
edges. The MST replaces the voting of support regions in the OII technique and provides a specific support region for
each pixel. The nodes of the MST are the image pixels, and the edge weights are the absolute differences between
nearest-neighboring pixels. It is a global method and can achieve more accurate disparity maps than the traditional OII
technique. Three-frame subtraction is used to enforce the temporal consistency between adjacent frames: the motion
region is extracted and the disparity map of the moving region is updated, and the disparity map of the current frame is
determined from the updated disparities together with those of the previous frame. The proposed approach has been
tested on real stereo sequences, and the results are satisfactory.
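The three-frame subtraction step that drives the temporal update can be sketched as follows. This is a minimal illustration under an assumed intensity threshold; the paper's actual threshold and post-processing are not specified here.

```python
import numpy as np

def three_frame_motion_mask(prev, curr, nxt, thresh=15):
    """Three-frame subtraction: a pixel counts as 'moving' only if it differs
    from BOTH the previous and the next frame, which suppresses the ghost
    regions left behind by simple two-frame differencing."""
    d1 = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > thresh
    d2 = np.abs(nxt.astype(np.int32) - curr.astype(np.int32)) > thresh
    return d1 & d2

# toy sequence: a bright 2x2-high block moves two pixels right each frame
f0 = np.zeros((4, 7), np.uint8); f0[1:3, 0:2] = 200
f1 = np.zeros((4, 7), np.uint8); f1[1:3, 2:4] = 200
f2 = np.zeros((4, 7), np.uint8); f2[1:3, 4:6] = 200
mask = three_frame_motion_mask(f0, f1, f2)
```

Only pixels inside `mask` would have their disparities recomputed; the rest are carried over from the previous frame's map.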
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130J (2013) https://doi.org/10.1117/12.2033046
In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the
development of 3D technology. In this paper we propose several factors affecting human depth perception as new
quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene
movement characteristics. They play important roles in the viewer's visual perception: if many objects move with a
certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to
calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (Mean Square Error) of
different blocks is computed within a frame and between frames of 3D stereoscopic videos. The depth frame is divided
into a number of blocks that overlap and share pixels (by half a block) in the horizontal and vertical directions, so that
edge information of objects in the image is not ignored. The distribution of these block values is then characterized by
their kurtosis, with regard to the regions at which the human eye mainly gazes, and weight values are obtained from the
normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it
is applied between the current and previous frames, the temporal variation and scene movement variation are obtained.
The three factors are combined linearly to give an objective assessment value of the 3D video directly, with the
coefficients of the three factors estimated by linear regression. Finally, the experimental results show that the proposed
method exhibits high correlation with subjective quality assessment results.
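The half-overlapping block MSE and the kurtosis-based weighting can be sketched as below. This is an assumed reading of the pipeline (Pearson kurtosis, random depth frames for illustration), not the authors' exact formulation.

```python
import numpy as np

def block_mse(a, b, bs=4):
    """MSE of half-overlapping bs x bs blocks (stride bs // 2), so block
    boundaries share pixels and object edges are not missed."""
    step = bs // 2
    vals = []
    for i in range(0, a.shape[0] - bs + 1, step):
        for j in range(0, a.shape[1] - bs + 1, step):
            d = a[i:i + bs, j:j + bs] - b[i:i + bs, j:j + bs]
            vals.append(np.mean(d * d))
    return np.asarray(vals)

def kurtosis(x):
    """Pearson kurtosis E[(x - m)^4] / s^4; a peaked distribution (a few
    blocks dominating, e.g. a gazed-at region) gives a large value."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return float(np.mean((x - m) ** 4) / s ** 4) if s > 0 else 0.0

rng = np.random.default_rng(0)
prev_d = rng.uniform(0, 255, (16, 16))          # previous depth frame
curr_d = prev_d + rng.normal(0, 1, (16, 16))    # current depth frame

spatial = block_mse(curr_d, np.roll(curr_d, 1, axis=1))   # intra-frame variation
temporal = block_mse(curr_d, prev_d)                      # inter-frame variation
k = np.array([kurtosis(spatial), kurtosis(temporal)])
weights = k / k.sum()                            # normalized kurtosis weights
```

The final objective score would then be the weighted linear combination of the factor values, with coefficients fitted by regression against subjective scores.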
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130K (2013) https://doi.org/10.1117/12.2033133
Zinc telluride (ZnTe) is one of the attractive II–VI compound semiconductors, with a wide range of applications such as
switching devices, light-emitting diodes, solar cells and photodetectors. In this paper, the microstructure and electrical
properties of zinc telluride thin films deposited by thermal-furnace evaporation were studied, with emphasis on the
effects of argon pressure and deposition temperature. Crystallinity, mobility, carrier concentration and sheet resistance
are shown to depend on the argon pressure and deposition temperature. The grain size increased with increasing
annealing temperature and decreasing argon pressure. The highest carrier concentration of 1.9×10¹⁶ cm⁻³, the lowest
sheet resistance of 3180 Ω/□ and the largest mobility of 5.1×10³ cm²V⁻¹s⁻¹ were obtained at an argon pressure of
100 sccm and a deposition temperature of 580°C.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130L (2013) https://doi.org/10.1117/12.2033136
Existing image fusion algorithms suffer from edge aliasing: the 3-D image looks rough and grainy. This paper presents a
new interpolation algorithm in which new gray values are recomputed according to the sub-pixel subdivision, and 3-D
images are created from the gray values calculated by our algorithm. Experimental results show that the improved
algorithm creates an anti-aliased 3-D image of higher quality and makes the 3-D image look more exquisite and smooth.
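The paper's exact sub-pixel weighting is not given in the abstract; bilinear interpolation is the standard scheme of this kind and serves as a sketch of recomputing a gray value at a sub-pixel position.

```python
import numpy as np

def subpixel_gray(img, x, y):
    """Bilinear interpolation: the new gray value at a sub-pixel position
    (x, y) is the area-weighted mean of the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - fx) * (1 - fy) * p[0, 0] + fx * (1 - fy) * p[0, 1]
            + (1 - fx) * fy * p[1, 0] + fx * fy * p[1, 1])

img = np.array([[0, 100], [100, 200]], np.uint8)
g = subpixel_gray(img, 0.5, 0.5)   # center of the four pixels
```

Weighting the four neighbors smoothly is what removes the staircase (aliased) edges that nearest-neighbor resampling produces.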
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130M (2013) https://doi.org/10.1117/12.2033158
A phase-only spatial light modulator (SLM) based on liquid crystal on silicon (LCOS) realizes phase modulation through
the electrically controlled birefringence effect. Due to its low cost, programmability, high resolution and fast response
time, LCOS has been widely used in multi-channel imaging, adaptive optics, diffractive optical elements (DOEs),
dynamic holography, optical tweezers and other fields. It is necessary to evaluate the modulation characteristics of an
LCOS device before application. Firstly, the phase modulation characteristics of the LCOS (PLUTO HED6010XXX by
Holoeye Company) were measured with a Twyman-Green interferometer, and the curves of phase shift and normalized
intensity as functions of grey level were obtained. Experimental results indicated that the phase modulation of the LCOS
reached 3.99π and the root-mean-square (RMS) value of the normalized intensity was less than 0.01, demonstrating that
the LCOS can be regarded as a phase-only modulation device. This method is also suitable for evaluating the
modulation characteristics of other LCOS devices. Secondly, a phase pattern of a thin lens written onto the LCOS was
demonstrated. Because of the pixel structure of LCOS, the discretization of the lens was studied, and both simulation
and experimental results were obtained. The experimental results proved that the convergence behavior of the lens
written onto the LCOS is similar to that of an optical lens. In the experiment, the measured focal length was in good
agreement with the theoretical deduction, with a relative error (RE) below 1%. Both simulation and experiment showed
that the LCOS can be used as a lens to converge a plane wave and successfully replace an optical lens.
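The discretized thin-lens phase pattern written to such an SLM can be sketched as below, assuming the usual paraxial lens phase φ = -π r² / (λ f) sampled at the pixel pitch and wrapped modulo 2π. The pitch, wavelength and focal length values are illustrative, not taken from the paper.

```python
import numpy as np

def lens_phase(n, pitch, wavelength, f):
    """Phase pattern of a thin lens, sampled on an n x n pixel grid of the
    given pitch and wrapped to [0, 2*pi): phi = -pi * r^2 / (lambda * f)."""
    c = (np.arange(n) - n / 2 + 0.5) * pitch          # pixel-center coordinates
    x, y = np.meshgrid(c, c)
    phi = -np.pi * (x ** 2 + y ** 2) / (wavelength * f)
    return np.mod(phi, 2 * np.pi)

# illustrative parameters: 8 um pitch, 532 nm illumination, f = 0.5 m
phi = lens_phase(64, 8e-6, 532e-9, 0.5)
```

Before display, the wrapped phase would be quantized to the device's grey levels using the measured phase-versus-grey-level curve.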
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130N (2013) https://doi.org/10.1117/12.2033160
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural design
and construction, industrial design, aeronautics, scientific research, entertainment, media advertisement, military areas
and so on. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the
sense of immersion is decreased. To get correct multi-view stereo ground images, the cameras' photosensitive surfaces
should be parallel to the public focus plane, and the cameras' optical axes should be offset toward the center of the
public focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are
ideal pinhole cameras, to display 3D models in a computer system, and we use virtual cameras to simulate the shooting
method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based
high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye
position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective
projection virtual cameras are used; when the observer stands outside the circumcircle, offset perspective projection
virtual cameras and orthogonal projection virtual cameras are adopted. In this paper we mainly discuss the parameter
setting of the virtual cameras: the near clip plane setting is the main point of the first method, while the rotation angle of
the virtual cameras is the main point of the second. To validate the results, we use D3D and OpenGL to render scenes
from different viewpoints and generate stereoscopic images. A realistic visualization system for 3D models, viewed
horizontally, is constructed and demonstrated, providing high-immersion 3D visualization. The displayed 3D scenes are
compared with real objects in the real world.
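An offset perspective projection corresponds to an asymmetric (off-axis) view frustum, which both D3D and OpenGL support. The sketch below builds an OpenGL-style asymmetric projection matrix; the eye offset, screen size and near-plane values are illustrative assumptions.

```python
import numpy as np

def offset_frustum(left, right, bottom, top, near, far):
    """OpenGL-style asymmetric (off-axis) perspective projection matrix.
    Unequal |left| / |right| (or |bottom| / |top|) shear the frustum, so the
    image plane stays aligned with the display while the eye sits off its
    centre."""
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# eye 0.1 m right of the screen centre, screen half-width 0.5 m, 1 m away
eye_x, half_w, screen_dist, near = 0.1, 0.5, 1.0, 0.2
scale = near / screen_dist            # project screen edges onto the near plane
P = offset_frustum((-half_w - eye_x) * scale, (half_w - eye_x) * scale,
                   -0.3 * scale, 0.3 * scale, near, 100.0)
```

With this matrix the physical screen edges map exactly to the edges of normalized device coordinates, which is what keeps the rendered scene registered to the display surface as the tracked eye moves.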
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130O (2013) https://doi.org/10.1117/12.2033166
In this paper, by establishing a liquid crystal model of beam steering, a rapid iterative algorithm, the Rosenbrock
algorithm, is proposed for obtaining the wave-control data of a liquid crystal phased array (LCPA). The method does not
need wavefront reconstruction, so the error is reduced, and the search direction is adjusted automatically toward the
right region of the solution space; thus the algorithm converges rapidly while the beam steering performance is
optimized. Firstly, a liquid crystal model of beam steering is established. Then, the effectiveness and superiority of the
algorithm are verified by comparison with the Pattern Search algorithm. Finally, we evaluate our algorithm by
experiment. The results show that the Rosenbrock algorithm can optimize the efficiency of beam steering rapidly and
significantly.
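Rosenbrock's method is a derivative-free rotating-coordinate search, which is why it needs no wavefront reconstruction. The sketch below is a compact, generic version run on a toy quadratic, not the paper's LCPA cost function; the step, expansion and contraction constants are conventional choices.

```python
import numpy as np

def rosenbrock_search(f, x0, step=0.5, n_stages=30, alpha=3.0, beta=-0.5):
    """Rosenbrock's rotating-coordinate search. In each stage, trial steps
    are taken along an orthonormal direction set; a successful step is
    expanded by alpha, a failed one reversed and shrunk by beta. After each
    stage the directions are re-orthogonalised (Gram-Schmidt) around the net
    displacement, aligning the first direction with the locally successful
    search direction."""
    x = np.asarray(x0, float)
    n = x.size
    D = np.eye(n)                      # current orthonormal direction set
    for _ in range(n_stages):
        s = np.full(n, step)
        lam = np.zeros(n)              # net displacement along each direction
        for _pass in range(10):        # a few passes per stage
            for i in range(n):
                trial = x + s[i] * D[i]
                if f(trial) < f(x):
                    x, lam[i] = trial, lam[i] + s[i]
                    s[i] *= alpha
                else:
                    s[i] *= beta
        # rotate the direction set around this stage's overall motion
        if np.any(lam != 0):
            A = [sum(lam[j] * D[j] for j in range(i, n)) for i in range(n)]
            Q = []
            for a in A:
                for q in Q:
                    a = a - (a @ q) * q
                if np.linalg.norm(a) > 1e-12:
                    Q.append(a / np.linalg.norm(a))
            for i, q in enumerate(Q):
                D[i] = q
    return x

# toy objective standing in for the (negative) steering efficiency
xmin = rosenbrock_search(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2,
                         [0.0, 0.0])
```

For the LCPA, `f` would instead evaluate (the negative of) the measured or modeled diffraction efficiency for a candidate phase profile.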
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130P (2013) https://doi.org/10.1117/12.2033222
A novel method is presented for the design and fabrication of Diffractive Optical Elements (DOEs) with arbitrary
complex phase profiles based on interference. The DOEs are designed to modulate the complex light wave according to
analytical formulas, and an asymmetric holographic DOE with cubic phase modulation is fabricated by a two-step
exposure technique on a flat plane. The method is further employed to modulate the optical intensity on curved surfaces
(CS) and to fabricate DOEs with arbitrary profiles and large areas on CS. Both a binary pattern and a gray pattern are
reconstructed numerically on lens surfaces with large curvatures over large areas, while a binary, non-periodic pattern is
produced experimentally. The simulations together with the experiments demonstrate the validity of the method. It is
believed that micro- or nano-optical elements with smaller feature sizes can be fabricated by the proposed method when
an optical microscopy system is used, providing an efficient and convenient way to design and fabricate DOEs with the
required complex fine structure.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130R (2013) https://doi.org/10.1117/12.2033237
A novel strategy for image fusion is presented based on block compressed sensing (BCS) and the multiwavelet
transform (MWT). Since BCS has low memory requirements and enables fast computation, images with large amounts
of data can be compressively sampled by BCS. Secondly, taking full advantage of multiwavelet properties such as
symmetry, orthogonality, short support and a higher number of vanishing moments, the compressive measurements of
the images can be better represented by the MWT. Moreover, the compressive measurements are fused based on the
coherence of the MWT decomposition coefficients. Finally, the fused image is reconstructed by total variation
minimization, and an overlapped blocking technique is proposed to eliminate block effects. Experimental results show
the validity of the proposed method and also indicate that compressive fusion can produce better results than
conventional fusion techniques such as the principal component analysis method, the Laplacian pyramid-based method
and the wavelet transform method.
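The block-wise sampling stage of BCS can be sketched as below; the total-variation reconstruction and the MWT fusion stages are omitted. The block size and measurement count are illustrative assumptions. The memory advantage is that one small Gaussian matrix is reused for every block.

```python
import numpy as np

def bcs_sample(img, bs, m, rng):
    """Block compressed sensing: split the image into bs x bs blocks and
    sample each with the SAME m x bs^2 Gaussian matrix Phi, so only one
    small matrix is stored regardless of image size."""
    n = bs * bs
    phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
    h, w = img.shape
    blocks = (img.reshape(h // bs, bs, w // bs, bs)
                 .transpose(0, 2, 1, 3).reshape(-1, n))
    return phi, blocks @ phi.T         # one m-vector of measurements per block

img = np.random.default_rng(0).uniform(0, 1, (16, 16))
phi, y = bcs_sample(img, bs=8, m=16, rng=np.random.default_rng(1))
```

Each block thus yields m = 16 measurements instead of 64 pixels, a 4:1 reduction before reconstruction.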
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130S (2013) https://doi.org/10.1117/12.2033261
Road crack detection is seriously affected by many factors in actual applications, such as shadows, road signs, oil stains,
high-frequency noise and so on. Due to these factors, current crack detection methods cannot distinguish cracks in
complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. Firstly, a
single-sensor laser pavement imaging system is adopted to obtain pavement images, in which a high-power laser line
projector is used to suppress various shadows. Secondly, a crack extraction algorithm that intelligently merges multiple
features is proposed to extract crack information: the non-negative feature and the contrast feature are used to extract the
basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and
eliminate noise. A series of experiments has been performed to test the proposed method, which shows that the proposed
automatic extraction method is effective and advanced.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130T (2013) https://doi.org/10.1117/12.2033282
LEDs have many advantages, such as high luminous efficiency, small size and low-voltage DC drive, so they have been
widely used in many lighting situations. But in certain circumstances, such as the uniform illumination required in a
micro-display projection system, an LED's approximately Lambertian radiation distribution results in non-uniform light
intensity, so a secondary optical design must be carried out to achieve the specific illumination requirements. In this
paper, a freeform surface lens is designed to obtain a square area with uniform illumination on the target plane with an
LED. Firstly, based on the law of conservation of energy, the source-target light energy mapping relationship between
the incident plane and the target plane is established. Secondly, according to the vector form of Snell's law, the
differential equations of the freeform surface are established. Finally, in order to solve the problem of surface
discontinuity during lens design, the process of solving the differential equations is described in detail and an
integrability condition is introduced to modify and adjust the positions of the freeform surface points, thus reducing the
errors accumulated during the numerical calculation and improving the continuity of the freeform surface. Simulation
shows that both the continuity of the freeform surface and the uniformity of the illumination improve considerably,
validating that the results meet the requirements and that the design method using the integrability condition can
improve the continuity of the surface and the lighting uniformity of the system.
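The first step, the equal-flux energy mapping, can be sketched for the rotationally symmetric case (a circular rather than square target, for simplicity): the cumulative Lambertian flux inside angle θ is matched to the cumulative flux of a uniform disc. This is an illustrative assumption, not the paper's square-target mapping.

```python
import numpy as np

def energy_mapping(theta_max, half_width, n):
    """Equal-flux mapping for a Lambertian source I(theta) ~ cos(theta):
    numerically accumulate the flux density cos(theta) * sin(theta) over
    theta, then invert the uniform-disc flux (~ r^2) to get r(theta)."""
    theta = np.linspace(0.0, theta_max, n)
    w = np.cos(theta) * np.sin(theta)                     # flux per d(theta)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(theta))])
    r = half_width * np.sqrt(cum / cum[-1])               # uniform target flux ~ r^2
    return theta, r

theta, r = energy_mapping(np.pi / 6, 50.0, 200)
```

Each ray leaving the source at angle θ is steered to radius r(θ) on the target; the freeform surface normals are then solved from Snell's law to realize this mapping.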
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130U (2013) https://doi.org/10.1117/12.2034366
Surface measurement and analysis are important to freeform surface optical systems. The deviation from the designed
surface is generally regarded as the criterion of real surface quality. In off-axis optical systems, some freeform surfaces
contain no reference points: the measured data of such surfaces can only constitute a fitted surface, and it is difficult for
internal algorithms to determine the spatial position of the fitted surface that minimizes the deviation from the designed
surface. In freeform surface optical systems, besides the surface deviations, the tangent vector variations of the
measured lattice data can also affect the image quality. Consequently, the quality of a freeform surface should be
appraised by both the tangent vector variations and the surface deviations. This paper presents a method using
first-order differences to directly analyze and process the measured lattice data of freeform surfaces. The method
assesses the tangent vector variations of the measured data and the smoothness of the real surfaces, while not involving
any fitting procedure against the designed surfaces. The method is applied to evaluate a set of measured lattice data of
some reflective freeform surfaces. Furthermore, some fitting algorithms are applied to assess the surface deviations
between the measured and designed surfaces for comparison.
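A first-order-difference smoothness metric of this kind can be sketched as follows; the RMS-of-slope-change summary is an assumed choice for illustration, since the abstract does not specify the exact statistic.

```python
import numpy as np

def tangent_variation(z, pitch=1.0):
    """First-order-difference slopes of a measured surface lattice z(x, y);
    the RMS of neighbouring slope changes flags local waviness without any
    fitting against the designed surface."""
    gx = np.diff(z, axis=1) / pitch            # tangent slopes along x
    gy = np.diff(z, axis=0) / pitch            # tangent slopes along y
    vx = np.diff(gx, axis=1)                   # slope change between neighbours
    vy = np.diff(gy, axis=0)
    return np.sqrt(np.mean(vx ** 2)), np.sqrt(np.mean(vy ** 2))

# a plane has constant tangents, so its tangent variation is zero
plane = np.fromfunction(lambda i, j: 0.3 * i + 0.1 * j, (16, 16))
vx, vy = tangent_variation(plane)
```

The metric is position-independent by construction: translating or tilting the lattice leaves the slope changes unchanged, which is exactly why no reference points are needed.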
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130V (2013) https://doi.org/10.1117/12.2034542
An effective method based on the Fresnel transform method and rotational transformation is proposed for improving the
image quality under arbitrarily tilted recording planes. As a numerical reconstruction technique for digital holography,
the Fresnel transform method is the most commonly used method, and rotational transformation in Fourier space is an
effective technique for simulating optical diffraction between non-parallel planes. In addition to a Fresnel transform, the
method only requires performing a fast Fourier transform twice and a single rotational transformation, which gives it the
advantage of fast calculation. The effectiveness of the proposed method is demonstrated by experiment, in which the
distortion caused by the tilting of the object is removed.
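The Fresnel transform step can be sketched with the standard single-FFT formulation: pre-multiply the field by a quadratic phase, FFT, and post-multiply by the output-plane quadratic phase. The grid size, pitch, wavelength and distance are illustrative; the rotational transformation between tilted planes is not included.

```python
import numpy as np

def fresnel_single_fft(u, pitch, wavelength, z):
    """Single-FFT Fresnel transform. The output sample spacing becomes
    lambda * z / (N * pitch); an orthonormal FFT keeps the energy fixed."""
    n = u.shape[0]
    k = 2 * np.pi / wavelength
    c = (np.arange(n) - n // 2) * pitch
    x, y = np.meshgrid(c, c)
    pre = np.exp(1j * k / (2 * z) * (x ** 2 + y ** 2))
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u * pre), norm="ortho"))
    dx2 = wavelength * z / (n * pitch)
    c2 = (np.arange(n) - n // 2) * dx2
    x2, y2 = np.meshgrid(c2, c2)
    post = np.exp(1j * k * z) * np.exp(1j * k / (2 * z) * (x2 ** 2 + y2 ** 2))
    return post * spec

# square aperture illuminated by a plane wave
n = 64
u = np.zeros((n, n), complex)
u[24:40, 24:40] = 1.0
out = fresnel_single_fft(u, 10e-6, 633e-9, 0.1)
```

In the tilted-plane method, the spectrum computed by one such FFT would be resampled by the rotational transformation before the inverse FFT, which is why only two FFTs and one rotation are needed.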
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130W (2013) https://doi.org/10.1117/12.2034767
Waveguide display systems are widely used in various display fields, especially head mounted displays. Compared with
a traditional head mounted display system, such a device dramatically reduces the size and mass. However, several
serious problems remain to be solved, such as high scattering, cumbersome design and chromatic aberration. We
designed and fabricated a monochromatic portable eyewear display system consisting of a comfortable eyewear device
and a waveguide system with two holographic gratings located symmetrically on the substrate. We record the gratings
in a photopolymer medium with high efficiency and wavelength sensitivity. The light emitted from the micro-display is
diffracted by the grating and trapped in the glass substrate by total internal reflection. The relationship between
diffraction efficiency and exposure is studied and analyzed, and we fabricated gratings with appropriate diffraction
efficiency under optimized conditions. To avoid the disturbance of stray light, we optimized the waveguide system
numerically and performed optical experiments. With this system, people can both see through the waveguide to obtain
information from outside and catch the information from the micro-display. After considering ergonomics and industrial
production, we designed the structure in a compact and portable way; it has the advantages of a small configuration and
acceptable cost. It is believed that this kind of planar waveguide system is a potential replacement for portable devices
in future mobile communications.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130X (2013) https://doi.org/10.1117/12.2034790
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth
cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed
in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most
advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing the
memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT) and
novel look-up table (N-LUT) approaches based on the point-based method, and fully analytical and one-step approaches
based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based
and polygon-based methods, and focus on a fast algorithm with low memory usage, the C-LUT, and on the one-step
polygon-based method using the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical
experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and
the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods
could be used in real-time 3D holographic display in the future.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130Y (2013) https://doi.org/10.1117/12.2034872
In traditional signal sampling, the Shannon-Nyquist sampling theorem is a fundamental principle that must be followed:
the sampling frequency must be at least twice the highest frequency of the sampled signal. However, as the data
acquisition capabilities of sensing systems increase, the acquisition of high-resolution images under the Shannon-Nyquist
theorem inevitably leads to a flood of sampling data, which increases the cost of data transport and storage as well as the
demands on detector resolution. Donoho and Candes proposed the compressed sensing theory, considered a
revolutionary breakthrough because it breaks the Shannon-Nyquist sampling frequency requirement: for compressible or
sparse signals, sampling can be implemented at a frequency lower than that demanded by the Shannon-Nyquist theorem,
and the signal is compressed at the same time. This paper studies compressive coding imaging based on the optical
wavelet transform coupled with frequency spectrum coding. The imaging quality can be enhanced by introducing the
optical wavelet transform as a pre-treatment of the target image before the compression coding on the frequency
spectrum plane. Simulation results show that higher-quality images can be obtained with the optical wavelet transform
pre-treatment than with a purely optical Fourier transform, without any increase in the transmitted data. We have
conducted numerical simulations of the proposed method; the results show that the proposed compression sampling
method can achieve real-time compression sampling of images without distortion, and a compression ratio of 4:1 can be
obtained.
Jin-li Yao, Si Tian, Xiang-rong Wang, Jing-li Wang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130Z (2013) https://doi.org/10.1117/12.2034965
Thin specimens of biological tissue appear three-dimensional and transparent under a microscope. Optical slice images
can be captured by moving the focal plane to different locations in the specimen. The captured images have low
resolution due to the out-of-focus information coming from the planes adjacent to the focal plane. Traditional methods
can remove the blur in the images to a certain degree, but they need to know the point spread function (PSF) of the
imaging system accurately, and the accuracy of the PSF greatly influences the restoration result. In fact, it is difficult to
obtain the accurate PSF of the imaging system. In order to restore the original appearance of the specimen when the
imaging system parameters are unknown or there is noise and spherical aberration in the system, a blind restoration
method for three-dimensional microscopy based on the R-L algorithm is proposed in this paper. On the basis of an
exhaustive study of the two-dimensional R-L algorithm, and according to the theory of microscopy imaging and a
wavelet-transform denoising pre-treatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear
restoration method with a maximum entropy constraint, and it does not need to know the PSF of the microscopy
imaging system precisely to recover the blurred image: the image and the PSF converge to the optimum solutions
through many alternating iterations and corrections. MATLAB simulations and experimental results show that the
extended algorithm is better in visual quality, peak signal-to-noise ratio and improved signal-to-noise ratio when
compared with the PML algorithm, and that the proposed algorithm can suppress noise, restore more details of the
target and increase image resolution.
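The non-blind Richardson-Lucy (R-L) core on which this method builds can be sketched in 2D with a known PSF; the blind variant alternates the same update between image and PSF. The FFT-based circular convolution and the toy Gaussian PSF are illustrative assumptions.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy iteration:
        x_{k+1} = x_k * ( PSF' (*) ( y / (PSF (*) x_k) ) )
    with (*) implemented as FFT-based circular convolution."""
    psf = psf / psf.sum()
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    x = np.full_like(blurred, blurred.mean())           # flat initial estimate
    for _ in range(n_iter):
        est = np.real(np.fft.ifft2(np.fft.fft2(x) * otf))
        ratio = blurred / np.maximum(est, 1e-12)        # guard the division
        x = x * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return x

# toy test: blur a point source with a Gaussian PSF, then deconvolve
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
blurred = np.maximum(blurred, 0)                        # clip FFT noise
restored = richardson_lucy(blurred, psf, 50)
```

In the blind 3D version described above, the same multiplicative update is applied alternately to the volume and to the PSF estimate, with a wavelet denoising step between iterations.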
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891310 (2013) https://doi.org/10.1117/12.2034978
Computer holography is a highly complicated and demanding task when it comes to producing realistic three-dimensional (3D) displays. We present a calculation algorithm for generating 3D computer-generated holograms (CGHs) that can produce multiple shading effects. A computer-graphics rendering technique is introduced into the CGH computation, which determines the reflectance distribution of the object surface according to a lighting model. The CGHs are then generated using the point-source method, with the intensities of the sampling points determined by the reflectance properties. Ambient, diffuse, and specular reflections are included in the calculation to produce realistic shading effects in the 3D scene. A phase-only spatial light modulator (SLM) is used to perform the optical reconstruction, and the experimental results show that our proposed method can successfully reconstruct 3D CGHs with multiple shading effects.
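The pipeline the abstract outlines can be sketched in two steps: a Phong-style lighting model (ambient + diffuse + specular) assigns an amplitude to each object point, and the point-source method superposes one spherical wave per point on the hologram plane. This is a minimal sketch under assumed parameters (the 532 nm wavelength, the reflectance coefficients, and both function names are illustrative, not the paper's values).

```python
import numpy as np

WAVELEN = 532e-9          # assumed wavelength (m)
K = 2 * np.pi / WAVELEN   # wavenumber

def phong_intensity(normal, to_light, to_viewer, ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Ambient + diffuse + specular reflectance for one surface point."""
    n = normal / np.linalg.norm(normal)
    l = to_light / np.linalg.norm(to_light)
    v = to_viewer / np.linalg.norm(to_viewer)
    r = 2 * np.dot(n, l) * n - l                     # mirror-reflection direction
    diff = max(np.dot(n, l), 0.0)
    spec = max(np.dot(r, v), 0.0) ** shininess
    return ka + kd * diff + ks * spec

def point_source_hologram(points, amplitudes, grid_x, grid_y):
    """Superpose spherical waves from shaded object points on the hologram plane z = 0."""
    field = np.zeros(grid_x.shape, dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((grid_x - x) ** 2 + (grid_y - y) ** 2 + z ** 2)
        field += a * np.exp(1j * K * r) / r
    return np.angle(field)   # phase-only pattern for a phase SLM
```

Taking only the phase of the superposed field matches the use of a phase-only SLM for reconstruction.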
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891311 (2013) https://doi.org/10.1117/12.2034980
Liquid crystal on silicon (LCoS) can be configured as a phase-only spatial light modulator (SLM) with the proper arrangement of the incident light polarization and choice of liquid crystal mode. Applications include digital holography, optical switching, and adaptive optics. The phase retardation at each pixel of the LCoS SLM can be controlled by the driving voltage, but the relationship depends on temperature and wavelength. In this paper, a vertically aligned nematic (VAN) mode LCoS is investigated.
Considering the application environment, the temperature range was set between 30 and 70 °C, and the selected wavelengths were 623 nm, 526 nm, and 462 nm. We measured the electro-optic (E-O) curve and converted it into phase shift. The results show that the phase retardation decreases with increasing temperature and with increasing wavelength. The dynamic behavior of the LC material is also reported in this paper, and a digital hologram displayed on a digitally driven LCoS SLM with a 6.4 μm pixel pitch is demonstrated.
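The wavelength dependence the paper measures follows from the basic retardation relation Δφ = 2πΔn·d/λ (doubled in a reflective LCoS, where light traverses the cell twice). A minimal sketch, with illustrative Δn and cell-gap values; in practice Δn itself depends on temperature and wavelength, which is exactly the dependence the paper characterizes:

```python
import numpy as np

def phase_retardation(delta_n, cell_gap_m, wavelength_m, reflective=True):
    """Phase retardation of an LC cell; a reflective LCoS doubles the optical path."""
    path = 2 * cell_gap_m if reflective else cell_gap_m
    return 2 * np.pi * delta_n * path / wavelength_m

# Illustrative values only: for fixed delta_n, retardation shrinks as wavelength grows.
for wl in (462e-9, 526e-9, 623e-9):
    print(wl, phase_retardation(0.15, 1.5e-6, wl))
```

Even with Δn held constant, the 1/λ factor reproduces the reported trend of smaller phase retardation at longer wavelengths.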
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891312 (2013) https://doi.org/10.1117/12.2035037
A volume holographic correlator (VHC) calculates the inner product between two data pages through parallel optical correlation. It has great potential in information processing and real-time identification because of its high storage density, integration of storage and computing, and multi-channel parallel processing ability. Current studies on improving the VHC mainly focus on processing speed and channel uniformity. However, the accuracy of the VHC is mainly determined by its sensitivity, that is, the minimum change in output intensity produced by a change in spatial light modulator (SLM) pixel intensity. In this work, the Minimum Pixel Block Size (MPBS) is proposed to characterize the sensitivity of the VHC, and the Effective Number of Pixels (ENP) is employed to evaluate the optical computing ability; the latter is more accurate than the traditional estimate based on the pixel count of the SLM. The theoretical and experimental results are instructive for system design: the desired system performance can be achieved by optimizing the system parameters.
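The core operation of a VHC, one inner product per stored page computed in parallel in the optical domain, can be modeled numerically. A minimal sketch (the function name is illustrative; a real VHC also has per-channel noise and the intensity-resolution limits that MPBS and ENP characterize):

```python
import numpy as np

def vhc_outputs(input_page, stored_pages):
    """Model the VHC's parallel correlation: one inner product per stored page."""
    flat = input_page.ravel().astype(float)
    return np.array([float(np.dot(flat, p.ravel())) for p in stored_pages])
```

The channel whose stored page matches the input produces the largest output, which is the basis of correlation-based identification.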
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891313 (2013) https://doi.org/10.1117/12.2035188
A search engine containing various target images, or different parts of a large scene, is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of image comparisons at very high speed, with the correlation task accomplished mainly in optics. However, the input target image often differs in scale from the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor work in the spatial or spectral domain. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method exploits the relationship between the scale variation and the correlation value of two images: a few artificially scaled versions of the input image are sent to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image is measured by finding the largest correlation value obtained when the artificially scaled input images are correlated with the templates. The measurement range for the scale is 0.8~4.8: a scale factor beyond 1.2 is measured by scaling the input image by 1/2, 1/3, or 1/4, correlating the artificially scaled input image with the templates, and estimating the new corresponding scale factor inside 0.8~1.2.
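The search procedure described above, pre-scaling by 1, 1/2, 1/3, or 1/4 and then sweeping the 0.8~1.2 interval for the correlation peak, can be sketched as a purely digital simulation. This is an illustrative sketch, not the optical implementation: normalized cross-correlation stands in for the holographic correlation value, and the helper names, step size, and crop/pad strategy are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def _fit(img, shape):
    """Crop/zero-pad an image to a fixed shape so correlation is well defined."""
    out = np.zeros(shape)
    h, w = min(img.shape[0], shape[0]), min(img.shape[1], shape[1])
    out[:h, :w] = img[:h, :w]
    return out

def _ncc(a, b):
    """Normalized correlation of two equal-sized images (stand-in for the optical output)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def estimate_scale(input_img, template):
    """Time-sequential scale search: pre-scale by 1, 1/2, 1/3, 1/4, then sweep 0.8-1.2."""
    best_val, best_scale = -np.inf, 1.0
    for pre in (1.0, 1/2, 1/3, 1/4):
        for s in np.arange(0.8, 1.21, 0.05):
            rescaled = _fit(zoom(input_img, pre * s, order=1), template.shape)
            v = _ncc(rescaled, template)
            if v > best_val:
                # correlation peaks when pre*s cancels the input's scale factor
                best_val, best_scale = v, 1.0 / (pre * s)
    return best_scale
```

The correlation peaks when the artificial rescale factor pre·s cancels the input's true scale, so the estimate 1/(pre·s) covers roughly 0.8~4.8, matching the stated measurement range.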
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891314 (2013) https://doi.org/10.1117/12.2035604
This paper reports on our study of algorithms for stripe noise detection and removal in THz image processing. Based on an analysis of the frequency spectra of images with stripe noise, we propose a new algorithm for stripe noise detection and removal. Our experimental results show that the algorithm can effectively ascertain the existence of stripe noise and can remove it while preserving the original image details as much as possible. Compared with traditional de-noising algorithms, such as the mean filter and the Gaussian filter, the new algorithm is more effective and convenient for detecting and subsequently removing stripe noise.
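The frequency-domain idea behind such algorithms is that periodic stripes concentrate their energy along one axis of the 2-D spectrum, where they can be notched out while the low-frequency image content is preserved. A minimal sketch, not the paper's algorithm, assuming vertical stripes (horizontal stripes would notch the centre column instead); the parameter names are illustrative.

```python
import numpy as np

def remove_stripes(img, notch_halfwidth=1, keep_dc=4):
    """Suppress vertical stripe noise by notching the horizontal-frequency axis of the FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    mask = np.ones(F.shape)
    # vertical stripes concentrate energy on the row through the spectrum centre
    mask[cy - notch_halfwidth:cy + notch_halfwidth + 1, :] = 0.0
    # preserve the low-frequency core (DC and coarse image content)
    mask[cy - keep_dc:cy + keep_dc + 1, cx - keep_dc:cx + keep_dc + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Because only a thin band of the spectrum is zeroed, image details away from the stripe frequencies pass through largely untouched, which is the property the abstract emphasizes.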
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 891315 (2013) https://doi.org/10.1117/12.2041203
Previous secondary optical designs have approximated the LED light distribution curve as a Lambertian distribution, which cannot accommodate all custom sources. In this paper, a freeform-surface lens is used to redistribute the light rays emitted from an LED device with an arbitrary light distribution curve in order to produce a prescribed illumination distribution on the target area. A mapping relationship is established between the spatial light intensity distribution of the source and the rectangular uniform illumination required for road lighting. A set of first-order partial differential equations is deduced from Snell's law and the law of energy conservation. A numerical solution is obtained in MATLAB, and the resulting freeform-surface data are recorded to aid in modeling the freeform lens. Simultaneously, different differential algorithms and mapping methods are analyzed and compared in order to solve the differential equations and identify the best mapping relationship. With the help of 3D software, a 3D model of the freeform-surface lens is established, and LightTools software is employed to simulate the illumination. In this model, the light distribution curve of the LED source is described by an eighth-order polynomial. According to the simulation results, uniform illumination of a 40 m × 40 m road area from a mounting height of 10 m is realized by the secondary optical design, with an energy efficiency of 99.7% and a uniformity of 91.52%. This method of secondary optical design is easy to apply and can be extended to other lighting systems with arbitrary LED sources.
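The energy-conservation step at the heart of such designs can be illustrated in one dimension: each bundle of source flux I(θ)·sin(θ)·dθ is assigned a target position so that the illuminance comes out uniform. This is a minimal sketch for a rotationally symmetric source mapped onto a uniform disk, a simplification of the paper's rectangular mapping and full PDE solution; the function name and the Lambertian test source are assumptions.

```python
import numpy as np

def theta_to_radius(intensity, thetas, target_radius):
    """Energy-conservation mapping theta -> r for a rotationally symmetric source.

    Each annular bundle of source flux I(theta) sin(theta) dtheta is sent to a
    radius such that the target disk is uniformly illuminated.
    """
    flux = np.cumsum(intensity * np.sin(thetas))   # cumulative source flux
    flux /= flux[-1]                               # normalize to total flux = 1
    # uniform disk: enclosed energy grows as r^2, so r ~ sqrt(cumulative flux)
    return target_radius * np.sqrt(flux)
```

Once θ → r is fixed, Snell's law determines the lens surface normal at each point, which is where the paper's system of first-order partial differential equations enters.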