We propose an integrated infrared scene simulation system. Based on thermophysical and optical property parameters, the system computes the radiation distribution of the scenery on the focal plane of the camera from the geometric parameters of the scene, the position and intensity of the light source, the location and orientation of the camera, and related inputs. The radiation distribution is then mapped to gray levels to obtain the virtual image of the scene. The system comprises eight modules: basic data maintenance, model importing, scene saving, geometric parameter setting, infrared property parameter setting, data pre-processing, infrared scene simulation, and scene loading. All data are organized in a database lookup table that stores the relevant parameters and computed results for different states, so that repeated computation is avoided. Experimental results show that the system renders three-dimensional infrared images at near-real-time rates, reaching 60 frames/s for simple scenes and 20 frames/s for complex scenes, and that the simulated images represent the infrared features of the scenery to a reasonable degree.
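For illustration, a minimal sketch of the gray-mapping step is shown below; the linear mapping and fixed 8-bit output range are assumptions, since the abstract does not specify the mapping function.

```python
# A minimal sketch of the gray-mapping step: a focal-plane radiance map is
# linearly rescaled to 8-bit gray levels. The linear mapping and the 8-bit
# output range are assumptions, not details given in the abstract.
import numpy as np

def radiance_to_gray(radiance: np.ndarray) -> np.ndarray:
    """Map a focal-plane radiance distribution to an 8-bit gray image."""
    r_min, r_max = float(radiance.min()), float(radiance.max())
    if r_max <= r_min:                        # flat radiance: return mid-gray
        return np.full(radiance.shape, 128, dtype=np.uint8)
    gray = (radiance - r_min) / (r_max - r_min) * 255.0
    return gray.astype(np.uint8)
```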
In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. These factors make automated traffic sign recognition challenging. Deep learning has recently provided a new way to address such problems: a deep network can automatically learn features from a large number of data samples and achieve excellent recognition performance. We therefore treat traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to traffic sign recognition. The model uses a deep CNN as the supervised learning model, takes the collected traffic sign images directly as input, alternates convolutional and subsampling layers, and automatically extracts the features used to recognize the traffic sign images. The model consists of an input layer, three convolutional layers, three subsampling layers, a fully connected layer, and an output layer. To validate the model, experiments were run on the public dataset of the China fuzzy image processing competition. The model achieves a recognition accuracy of 99.01% on the training dataset and scored 92% in the preliminary contest, placing among the top four entries.
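For illustration, a minimal PyTorch sketch of the described layer arrangement (three convolutional layers, three subsampling layers, one fully connected layer, one output layer) is given below; the input size, feature-map counts, and number of classes are illustrative assumptions, not values reported here.

```python
# A minimal sketch of the described CNN, assuming 48x48 RGB inputs,
# 32/64/128 feature maps, and 58 traffic-sign classes; these values are
# assumptions made for the example, not figures from the abstract.
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    def __init__(self, num_classes: int = 58):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                              # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                              # 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),       # fully connected layer
            nn.Linear(256, num_classes),                  # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: logits = TrafficSignCNN()(torch.randn(1, 3, 48, 48))
```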
Infrared texture is an important feature in identifying scenery. To simulate infrared image texture effectively at different distances, we propose a model of infrared image texture generation based on scenery spatial frequency and the image pyramid degradation principle. First, we build a spatial frequency filter model based on imaging distance, taking into account the detector’s maximum spatial frequency, and use the filter to process a “zero-distance” infrared image texture. Second, taking into consideration the actual temperature difference of the scenery’s details due to variation of the imaging distance and the effect of atmospheric transmission, we compare the actual temperature difference with the minimum resolvable temperature difference of the thermal imaging system at a specific frequency and produce a new image texture. The results show that the simulated multiresolution infrared image textures produced by the proposed model are very similar (lowest mean square error = 0.51 and highest peak signal-to-noise ratio = 117.59) to the images captured by the thermal imager. Therefore, the proposed model can effectively simulate infrared image textures at different distances.
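For illustration, a minimal sketch of the distance-dependent filtering step is shown below, assuming the cutoff frequency shrinks with range and is capped by the detector’s maximum spatial frequency; the cutoff formula and the Gaussian approximation are assumptions rather than the paper’s exact filter model.

```python
# A minimal sketch of distance-dependent low-pass filtering of a
# "zero-distance" texture. The inverse-range cutoff rule and the Gaussian
# stand-in for the filter are assumptions made for this example.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_texture(texture: np.ndarray,
                    distance_m: float,
                    ref_distance_m: float = 10.0,
                    detector_max_freq: float = 0.5) -> np.ndarray:
    """Low-pass filter a zero-distance texture for a given imaging distance."""
    # Assumed model: the cutoff (cycles/pixel) shrinks inversely with range
    # and never exceeds the detector's maximum spatial frequency.
    cutoff = min(detector_max_freq,
                 detector_max_freq * ref_distance_m / distance_m)
    # Convert the cutoff to an equivalent Gaussian sigma: for a Gaussian MTF,
    # response drops to 50% at f = sqrt(ln 2 / (2*pi^2)) / sigma ~= 0.1874 / sigma.
    sigma = 0.1874 / max(cutoff, 1e-6)
    return gaussian_filter(texture.astype(np.float64), sigma=sigma)
```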
To effectively simulate the infrared features of a scene, including infrared highlights, we propose infrared imaging illumination models for different material types, built on the visible-light illumination model and the optical properties of each material in the scene. For smooth materials with a specular characteristic, we adopt an infrared imaging illumination model based on the Blinn-Phong reflection model and introduce the material's self-emission. For ordinary materials that behave approximately as black bodies and show no highlight feature, the highlight-reflection term is ignored, and the illumination model reduces to the material's self-emission plus its reflection of the surroundings. The radiation energy at zero visual range can then be obtained from these two models. An infrared scene simulation system is built with OpenGL rendering technology; it also simulates the infrared electro-optical imaging system and produces synthetic infrared images of the 3D scene from any viewing angle. To validate the illumination models, two typical 3D scenes were constructed, and their simulated infrared images were compared with real infrared images collected by a long-wave infrared imaging camera. The experimental results support two main conclusions: first, the infrared imaging illumination models produce images that closely resemble those captured by the thermal infrared camera; second, they reproduce both the infrared specular features of the relevant materials and the common infrared features of general materials, which demonstrates the validity of the models. Quantitative analysis shows that the simulated and collected images agree in their main features, although their histogram distributions do not match well; the reasons are analyzed and directions for further improvement are proposed. Both theory and experiment thus support the validity of the infrared illumination models presented in the paper.
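As an illustration, a minimal sketch of the two material cases (a Blinn-Phong style highlight plus self-emission for smooth materials, and self-emission plus reflected surroundings for black-body-like materials) might look as follows; the band-radiance helper, coefficient names, and material fields are assumptions, not the paper's exact formulation.

```python
# A minimal sketch of the two illumination cases: Blinn-Phong highlight plus
# self-emission for smooth specular materials, and self-emission plus
# reflected surroundings for black-body-like materials. The whole-spectrum
# radiance proxy and the material/coefficient names are assumptions.
import numpy as np

def blackbody_radiance(temp_k: float) -> float:
    """Rough stand-in for band-integrated Planck radiance (assumption)."""
    sigma = 5.670e-8                       # Stefan-Boltzmann constant, W/(m^2 K^4)
    return sigma * temp_k**4 / np.pi       # whole-spectrum radiance as a proxy

def infrared_radiance(normal, light_dir, view_dir, material, surround_radiance):
    """Outgoing infrared radiance for one surface point (zero visual range)."""
    emissivity = material["emissivity"]
    emission = emissivity * blackbody_radiance(material["temperature_k"])
    if material["specular"]:
        # Smooth material: Blinn-Phong highlight term + self-emission.
        half_vec = light_dir + view_dir
        half_vec = half_vec / (np.linalg.norm(half_vec) + 1e-12)
        spec = material["k_s"] * max(np.dot(normal, half_vec), 0.0) ** material["shininess"]
        return emission + spec * material["source_radiance"]
    # Black-body-like material: self-emission + reflection of the surroundings.
    return emission + (1.0 - emissivity) * surround_radiance
```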