Terahertz (THz) imaging systems use active sources, specialized optics, and detectors in order to penetrate certain materials. Each of these components has design and manufacturing characteristics (e.g., coherence for sources, aberrations for optics, and dynamic range and noise in detectors) that can lead to nonideal performance of the overall imaging system. Thus, system designers are frequently challenged to design systems that approach theoretical performance, making quantitative measurement of imaging performance a key feedback element of system design. Quantitative evaluation of actual THz system performance will be performed using many of the same figures of merit that have been developed for imaging at other wavelengths (e.g., infrared imaging systems nominally operating in the shorter 3-12 μm wavelength range). The suitability and limitations of these evaluation criteria will be analyzed as part of the process for improving the modeling and design of high-performance THz imaging systems.
This paper will take an initial look at the effect of variations in a sensor's Fλ/d metric value (FLD) on the performance of the Yolo_v3 (You Only Look Once) algorithm for object classification. The Yolo_v3 algorithm will initially be trained using static imagery provided in the commonly available Advanced Driver Assist System (ADAS) dataset. Image processing techniques will then be used to degrade the image quality of the test data set, simulating detector-limited to optics-limited performance of the imagery. The degraded test set will then be used to evaluate the performance of Yolo_v3 for object classification. Results of Yolo_v3 will be presented for the varying levels of image degradation. An initial summary of the results will be discussed along with recommendations for evaluating an algorithm's performance using a sensor's FLD metric value.
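The degradation sweep described above can be sketched as follows; the 0.42·Fλ/d Gaussian width used here is a common rough approximation to the diffraction spot and is an assumption of this sketch, not the paper's exact procedure (function names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fld(f_number, wavelength, pitch):
    """F*lambda/d metric: roughly, <1 is detector-limited and >2 optics-limited."""
    return f_number * wavelength / pitch

def degrade(image, f_number, wavelength, pitch):
    """Illustrative optics-limited degradation: Gaussian blur whose width in
    pixels grows with F*lambda/d (a crude stand-in for the diffraction MTF;
    a real study would apply the full measured system MTF)."""
    sigma_px = 0.42 * fld(f_number, wavelength, pitch)
    return gaussian_filter(image, sigma=sigma_px)
```

Sweeping `f_number` from detector-limited (FLD < 1) to optics-limited (FLD > 2) values then yields the graded test sets fed to the classifier.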
In 2023, Richards and Hübner proposed silux as a new standard unit of irradiance for the full 350-1100 nm band, specifically addressing the mismatch between the photopic response of the human eye and the spectral sensitivity of new low-light silicon CMOS sensors with enhanced NIR response. This spectral mismatch between the response of the human eye and the spectral sensitivity of the sensor can lead to significant errors in measuring the magnitude of the signal available to a different camera system when the traditional lux unit is used. In this correspondence, we demonstrate a per-pixel calibration of a camera to create the first imaging siluxmeter. To do this, we developed a comprehensive per-pixel model as well as the experimental and data reduction methods to estimate its parameters. These parameters are then combined into an updated NVIPM measured system component that now provides the conversion factor from device units of DN to silux, lux, and other radiometric units. Additionally, the accuracy of the measurements and modeling is assessed through comparisons to field observations and by validating/transferring the calibration from one low-light camera to another. Following this process, other low-light cameras can be calibrated and applied to scenes such that they may be accurately characterized using silux as the standard unit.
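A minimal sketch of the per-pixel DN-to-silux conversion the abstract describes might look like this; the map and factor names are hypothetical, not NVIPM's actual component inputs:

```python
import numpy as np

def dn_to_silux(dn, offset_map, gain_map, k_silux):
    """Per-pixel linearization using calibration offset/gain maps, followed by
    a single band-level factor converting linearized device units (DN) to
    silux. All parameter names here are illustrative assumptions."""
    return k_silux * (dn - offset_map) / gain_map
```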
The Night Vision Integrated Performance Model (NVIPM) is a versatile tool used in conducting trade studies of EOIR imaging system performance. Although its primary application is the calculation of human performance metrics, the linear systems model and standard imaging chain provide a convenient method of modeling and validating laboratory measurements. In this correspondence, we discuss how to model signal-to-noise measurements for both emissive and reflective band cameras within NVIPM. We provide convenient components that can be used to model the calibrated detectors necessary in most laboratory setups. Additionally, we present an update to the measured system component in order to output in digital numbers (DN), matching the default units available to the experimentalist. Using this measured system component and the methods developed for modeling the experiment with reference detectors, we show how NVIPM is an ideal tool for demonstrating measurement and laboratory agreement.
Specifications for microbolometer defective detector pixel outages, cluster sizes, and row/column outages are common for many electro-optical imaging programs. These specifications for bad pixels and clusters often do not take into account the user's ability to perceive the lack of information from areas of a focal plane with outages that are replaced using substitution algorithms. This is because defective pixels are typically specified as a sensor parameter, which does not take into account a camera's system-level descriptors: modulation transfer function (MTF), outage substitution strategy, post-processing MTF, display performance, and observer's psychophysical performance. These parameters combine to determine the total system MTF, which can be used to determine the minimum resolution at which a replaced pixel or cluster can be observed. This study analyzes different defective pixel specifications and their visibility from the system-level descriptors and proposes specifications that are better aligned to camera performance.
Modeling and simulation of the full electro-optical/infrared observation chain remains an incompletely solved problem, and approximations are made at many stages. Including support for current advances in sensor and imaging-system technology with greater spatial, spectral, and temporal resolution only increases the challenge. In this paper we will present results of a US Navy effort to develop an integrated tool that provides enhanced 3D physics-based EO/IR observation chain modeling support for complex dynamic scenes at hyperspectral radiometric fidelity levels, to support research and development in multiple areas of importance for EO and IR imaging systems. A new prototype software system integrates the US Navy TrueView EO/IR/hyperspectral scene simulation and signature modeling tool with the mature US Army Integrated Performance Model (IPM).
Image intensified systems are compact, low-power devices that convert visible through near-infrared illumination to visible imagery. These devices provide usable imagery in a variety of ambient illuminations, and they are a preferred means for night imaging. Even though the device consists of objective or relay optics and an image intensified tube, to perform critical measurements of device performance one traditionally needs to disassemble the device and test only the image intensified tube. This is a non-trivial process that requires the hardware to be re-aligned and re-purged during re-assembly. Using proper sources, reference cameras, and image processing techniques, it is possible to fully characterize an image intensified device for its relevant measurable parameters (signal to noise ratio, tube gain, and limiting resolution) without disassembly. This paper outlines the classic component image intensified measurement methodology, the assumptions on performance that support those measurement techniques, and the new methodology procedure. A comparison of measurement results using both methods will demonstrate the validity of this new measurement approach.
In this paper we will analyze the effect of vibration on the performance of the YOLO (You Only Look Once) algorithm for object classification. The YOLO algorithm will initially be trained using static imagery of common objects. Image processing techniques will then be used to create video clips with varying levels of blur due to vibration. These videos will then be used to evaluate the performance of the algorithm for object classification. Results of the classification will be presented. An initial summary of the classification results will be discussed in relation to the amount of vibration added to the imagery.
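One simple way to synthesize vibration blur of the kind described, assuming Gaussian jitter (the paper's actual vibration profiles may differ), is to average randomly shifted copies of a frame:

```python
import numpy as np
from scipy.ndimage import shift

def vibration_blur(image, jitter_px, n_samples=32, rng=None):
    """Approximate vibration blur by averaging randomly shifted copies of the
    frame; jitter_px is the RMS jitter amplitude in pixels. This Gaussian
    jitter model is an illustrative assumption."""
    rng = np.random.default_rng(rng)
    acc = np.zeros_like(image, dtype=float)
    for dy, dx in rng.normal(scale=jitter_px, size=(n_samples, 2)):
        acc += shift(image, (dy, dx), order=1, mode='nearest')
    return acc / n_samples
```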
Infrared target acquisition models are used by the military analysis, acquisition, and testing community for sensor design, trade studies, and field performance prediction. We analyze the results of an identification field test, a laboratory perception test on imagery collected during the field identification test, and a perception test using the imagery collected during the field identification test with synthetically added vibration profiles. The purpose of the experiments is to validate model performance predictions in the field and perception laboratory, and to improve model performance predictions of an imaging sensor in the presence of a time-varying vibration profile using the night vision integrated performance model (NV-IPM). The results of the observer responses to the image stimuli are analyzed using a mixed effects model at the level of the individual response to derive standard error estimates for the observer responses. The NV-IPM model of the imaging system is derived from field measurements. The field measurements are input into NV-IPM's new simplified input interface. Results of the observer responses are compared to the NV-IPM model output. Finally, and in a similar fashion, the results of the synthetically added vibration profile experiment are analyzed and a discussion follows on modeling a system that has a time-varying vibration component.
At NVESD, the targeting task performance (TTP) metric applies a weighting of different system specifications, determined from the scene geometry, to calculate a probability of task performance. In this correspondence we detail how to utilize an imaging system specification document to obtain a baseline performance estimate using the Night Vision Integrated Performance Model (NV-IPM), the corresponding input requirements, and potential assumptions. We then discuss how measurements can be performed to update the model to provide a more accurate prediction of performance, detailing the procedures taken at the NVESD Advanced Sensor Evaluation Facility (ASEF) lab utilizing the Night Vision Laboratory Capture (NVLabCap) software. Finally, we show how the outputs of the measurement can be compared to those of the initial specification sheet based model, and evaluated against a requirements document. The modeling components and data set produced for this work are available upon request, and will serve as a means to benchmark performance for both modeling and measurement methods.
At NVESD, the targeting task performance (TTP) metric applies a weighting of different system specifications that is determined from the scene geometry to calculate a probability of task performance. In this correspondence we detail how to utilize an imaging system specification document to obtain a baseline performance estimate using the Night Vision Integrated Performance Model (NV-IPM), the corresponding requirements and potential assumptions. We then discuss how measurements can be performed to update the model to give a more accurate prediction of performance, detailing the procedure taken at the NVESD Advanced Sensor Evaluation Facility (ASEF) lab utilizing the Night Vision Laboratory Capture (NVLabCap) software. Finally, we show how the outputs of the measurement can be compared to those of the initial specification sheet based model and evaluated against a requirements document. The modeling components and data sets produced for this work are available upon request and will serve as a means to benchmark performance for both modeling and measurement methods.
Image simulation techniques which map the effects of a notional, modeled sensor system onto an existing image can be used to evaluate the image quality of camera systems prior to the development of prototype systems. In addition, image simulation or 'virtual prototyping' can be utilized to reduce the time and expense associated with conducting extensive field trials. In this paper we examine the development of a perception study designed to assess the performance of the NVESD imager performance metrics as a function of fixed pattern noise. This paper discusses the development of the model theory and the implementation and execution of the perception study. In addition, other applications of the image simulation component, including the evaluation of limiting resolution and other test targets, are provided.
The U.S. Army RDECOM CERDEC NVESD MSD's target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests performed to compare the results of a field DRI (Detection, Recognition, and Identification) test performed in 2009 to current Soldier performance viewing the same imagery in a laboratory environment and simulated imagery of the same data set. The purpose of the experiment is to build a robust data set for use in the virtual prototyping of infrared sensors. This data set will provide a strong foundation relating model predictions, field DRI results, and simulated imagery.
KEYWORDS: Systems modeling, Imaging systems, Performance modeling, Visual process modeling, Image analysis, Night vision systems, Interference (communication), Modulation transfer functions, Night vision, Integration, Video, Sensors, 3D modeling, Quantization, Point spread functions
Characterizing an imaging system through the use of linear transfer functions allows prediction of the output for an arbitrary input. Through careful measurement of the system's transfer function, imaging effects can then be applied to desired imagery in order to conduct subjective comparisons, image-based analysis, or algorithm performance evaluation. The Night Vision Integrated Performance Model (NV-IPM) currently utilizes a two-dimensional linear model of the system's transfer function to emulate the system's response and additive signal-independent noise. In this correspondence, we describe how a two-dimensional MTF can be obtained through correct interpolation of one-dimensional measurements. We also present a model for the signal-dependent noise (additive and multiplicative) and the details of its calculation from measurement. Through modeling of the experimental setup, we demonstrate how the emulated sensor replicates the observed objective performance in resolution, sampling, and noise. In support of the reproducible research effort, many of the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
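One plausible interpolation scheme, shown here only as a sketch (NV-IPM's exact method may differ): evaluate the horizontal and vertical 1-D MTF measurements at the radial frequency and blend them by azimuthal angle.

```python
import numpy as np

def mtf2d(f, mtf_h, mtf_v, fx, fy):
    """Estimate a 2-D MTF from horizontal and vertical 1-D measurements
    (sampled on frequency axis f) by radial evaluation and angle-weighted
    blending. This is one illustrative scheme, not NV-IPM's exact one."""
    fr = np.hypot(fx, fy)                      # radial spatial frequency
    theta = np.arctan2(np.abs(fy), np.abs(fx)) # 0 on the horizontal axis
    h = np.interp(fr, f, mtf_h)
    v = np.interp(fr, f, mtf_v)
    w = (2.0 / np.pi) * theta                  # 0 -> pure H, 1 -> pure V
    return (1.0 - w) * h + w * v
```

On the frequency axes this reduces exactly to the measured 1-D cuts, which is the minimal consistency requirement for any such interpolation.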
Laboratory measurements on thermal imaging systems are critical to understanding their performance in a field
environment. However, it is rarely a straightforward process to directly inject thermal measurements into thermal
performance modeling software to acquire meaningful results. Some of the sources of discrepancies between
laboratory and field measurements are sensor gain and level, dynamic range, sensor display and display brightness,
and the environment where the sensor is operating. If measurements of the aforementioned parameters could
be performed, a more accurate description of sensor performance in a particular environment would be possible. This
research will also include the procedure for turning both laboratory and field measurements into a system model.
The generation of accurate reflective band imagery is complicated by the intrinsic properties of the scene, target, and camera system. Unlike emissive systems, which can be represented in equivalent temperature given some basic assumptions about target characteristics, visible scenes depend highly on the illumination, reflectivity, and orientation of objects in the scene as well as the spectral characteristics of the imaging system. Once an image has been sampled spectrally, much of the information regarding these characteristics is lost. In order to provide reference scene characteristics to the image processing component, the visible image processor in the Night Vision Integrated Performance Model (NV-IPM) utilizes pristine hyperspectral data cubes. Using these pristine spectral scenes, the model is able to generate accurate representations of a scene for a given camera system. In this paper we discuss the development of the reflective band image simulation component and various methodologies for collecting or simulating the hyperspectral reference scenes.
A critical step in creating an image using a Bayer pattern sampled color camera is demosaicing, the process of
combining the individual color channels using a post-processing algorithm to produce the final displayed image. The
demosaicing process can introduce degradations which reduce the quality of the final image. These degradations must be
accounted for in order to accurately predict the performance of color imaging systems. In this paper, we present
analytical derivations of transfer functions to allow description of the effects of demosaicing on the overall system blur
and noise. The effects of color balancing and the creation of the luminance channel image are also explored. The
methods presented are validated through Monte Carlo simulations, which can also be utilized to determine the transfer
functions of non-linear demosaicing methods. Together with this new treatment of demosaicing, the framework behind
the color detector component in NV-IPM is discussed.
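A toy Monte Carlo in the spirit of the validation described: fill the missing checkerboard sites of a green channel by 4-neighbor bilinear averaging and measure the resulting noise transfer (the paper's derivations cover the full Bayer pattern, blur, and noise analytically; this only illustrates the Monte Carlo noise check):

```python
import numpy as np

def bilinear_fill_green(channel):
    """Toy bilinear demosaic step: replace the 'missing' checkerboard sites
    of a green channel with the average of their four neighbors."""
    out = channel.copy()
    pad = np.pad(channel, 1, mode='edge')
    avg = 0.25 * (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:])
    rows, cols = channel.shape
    missing = (np.add.outer(np.arange(rows), np.arange(cols)) % 2).astype(bool)
    out[missing] = avg[missing]
    return out
```

For white noise input, each interpolated site averages four independent samples, so its variance should drop to about one quarter of the sampled-site variance; a Monte Carlo run on pure noise confirms this transfer factor.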
Version 1.6 of the Night Vision Integrated Performance Model (NV-IPM) introduced two-dimensional Modulation Transfer Function (MTF) and noise signals within the model architecture. These two-dimensional signals enable the model to more accurately treat systems with non-separable MTF components. These non-separable MTF components may be introduced by optical elements, electronic post-processing, or atmospheric effects. In this paper we discuss the differences between the new two-dimensional signal architecture and the one-dimensional separable representation used in earlier versions of the model and highlight some cases which demonstrate the utility of the two-dimensional signals.
KEYWORDS: Systems modeling, Performance modeling, Visual process modeling, Imaging systems, Sensors, Modulation transfer functions, Interference (communication), Signal to noise ratio, Temperature metrology, 3D modeling
The Night Vision Integrated Performance Model (NV-IPM) introduced a variety of measured system components in version 1.6 of the model. These measured system components enable the characterization of systems based on lab measurements which treat the system as a ‘black-box.’ This encapsulation of individual component terms into higher level measurable quantities circumvents the need to develop costly, time-consuming measurement techniques for each individual input term. Each of the ‘black-box’ system components were developed based upon the minimum required system level measurements for a particular type of imaging system. The measured system hierarchy also includes components for cases where a very limited number of measurements are possible. We discuss the development of the measured system components, the transition of lab measurements into model inputs, and any assumptions inherent to this process.
KEYWORDS: Performance modeling, Visual process modeling, Eye models, Systems modeling, Eye, NVThermIP, Image quality, Modulation transfer functions, Contrast transfer function, Imaging systems
NVESD's new integrated sensor performance model, NV-IPM, replaces the discrete spectral band models that preceded it (NVTherm, SSCamIP, etc.). Many advanced modeling functions are now more readily available, easier to implement, and integrated within a single model architecture. For the legacy model user with ongoing modeling duties, however, the conversion of legacy decks to NV-IPM is of more immediate concern than mastering the many "power features" now available. This paper addresses the processes for the legacy model user to make a smooth transition to NV-IPM, including the conversion of legacy sensor decks to NV-IPM format decks, differences in parameters entered in the new versus old model, and a comparison of the predicted performance differences between NV-IPM and legacy models. Examples are presented to demonstrate the ease of sensor deck conversion from legacy models and to highlight enhanced model capabilities available with minimal transition effort.
KEYWORDS: Signal to noise ratio, Systems modeling, Imaging systems, Modulation transfer functions, Performance modeling, Cameras, Visual process modeling, Phase shift keying, Modulation, Eye
The Thermal Range Model (TRM4)1 developed by the Fraunhofer IOSB of Germany is a commonly used performance model for military target acquisition systems. There are many similarities between the U.S. Army Night Vision Integrated Performance Model (NV-IPM)2 and TRM4. Almost all of the camera performance characterizations, such as signal-to-noise calculations and modulation transfer theory, are identical; only the human vision model and performance metrics differ. Utilizing the new Custom Component Generator in the NV-IPM we develop a component to calculate the Average Modulation at Optimal Phase (AMOP) and Minimum Difference Signal Perceived (MDSP) calculations used in TRM4. The results will be compared with the actual TRM4 results for a variety of imaging systems. This effort demonstrates that the NV-IPM is a versatile system design tool which can easily be extended to include a variety of image quality and performance metrics.
KEYWORDS: Modulation transfer functions, Systems modeling, Contrast transfer function, Visual process modeling, Sensors, Performance modeling, Night vision, Signal processing, Integrated modeling, Transmittance
The latest version of the U.S. Army imager performance model, the Night Vision Integrated Performance Model (NV-IPM), is now contained within a single, system engineering oriented design environment. This new model interface allows sensor systems to be represented using modular, reusable components. A new feature, added in version 1.3 of the NV-IPM, allows users to create custom components which can be incorporated into modeled systems. The ability to modify existing component definitions and create entirely new components in the model greatly enhances the extensibility of the model architecture. In this paper we will discuss the structure of the custom component and parameter generators and provide several examples where this feature can be used to easily create new and unique component definitions within the model.
KEYWORDS: Signal to noise ratio, Systems modeling, Interference (communication), Performance modeling, Imaging systems, Modulation transfer functions, Visual process modeling, Temperature metrology, Mid-IR, Eye models
Typically, the modeling of linear and shift-invariant (LSI) imaging systems requires a complete description of each subcomponent in order to estimate the final system transfer function. To validate the modeled behavior, measurements are performed on each component. When dealing with packaged systems, there are many situations where some, if not all, data is unknown. For these cases, the system is considered a blackbox, and system level measurements are used to estimate the transfer characteristics in order to model performance. This correspondence outlines the blackbox measured system component in the Night Vision Integrated Performance Model (NV-IPM). We describe how estimates of performance can be achieved with complete or incomplete measurements and how assumptions affect the final range. The blackbox measured component is the final output of a measurement characterization and is used to validate performance of delivered and prototype systems.
KEYWORDS: Sensors, Imaging systems, Black bodies, Modulation transfer functions, Machine vision, Temperature metrology, Contrast transfer function, Cameras, Systems modeling, Eye
Researchers at the US Army Night Vision and Electronic Sensors Directorate have added the functionality of Machine Vision MRT (MV-MRT) to the NVLabCap software package. While the original calculations of MV-MRT were compared to human observers' performance using digital imagery in a previous effort,1 the technical approach was not tested on 8-bit imagery using a variety of sensors in a variety of gain and level settings. Now that it is simpler to determine the MV-MRT for a sensor in multiple gain settings, it is prudent to compare the results of MV-MRT in multiple gain settings to the performance of human observers for thermal imaging systems that are linear and shift-invariant. Here, a comparison of the results for a LWIR system to trained human observers is presented.
KEYWORDS: Sensors, Photons, Target detection, Long wavelength infrared, Mid-IR, Monte Carlo methods, Atmospheric optics, Eye models, Signal to noise ratio, Targeting Task Performance metric
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
KEYWORDS: Sensors, Modulation transfer functions, Systems modeling, Data modeling, Performance modeling, Video, Image sensors, Cameras, Software development, Imaging systems
Engineers at the US Army Night Vision and Electronic Sensors Directorate have recently developed a software package called NVLabCap. This software not only captures sequential frames from thermal and visible sensors, but it also can perform measurements of signal intensity transfer function, 3-dimensional noise, field of view, super-resolved modulation transfer function, and image boresight. Additionally, this software package, along with a set of commonly known inputs for a given thermal imaging sensor, can be used to automatically create an NV-IPM element for that measured system. This model data can be used to determine if a sensor under test is within certain tolerances, and this model can be used to objectively quantify measured versus given system performance.
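As an illustration of one of the listed measurements, a heavily simplified two-term version of the NVESD 3-dimensional noise decomposition (the full method separates seven directional components) can be written as:

```python
import numpy as np

def basic_3d_noise(cube):
    """Simplified 3-D noise terms from a (frames, rows, cols) cube:
    sigma_vh  -- fixed-pattern noise, the std of the temporal-mean frame;
    sigma_tvh -- random spatiotemporal noise, the std of the residual.
    A sketch only; the full NVESD method uses seven directional terms."""
    mean_frame = cube.mean(axis=0)
    sigma_vh = mean_frame.std()
    sigma_tvh = (cube - mean_frame).std()
    return sigma_vh, sigma_tvh
```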
The monochromatic modulation transfer function (MTF) is a spectral average across wavelength weighted by the
sensor’s spectral sensitivity and scaled by the spectral behavior of the source. For reflective band sensors, where there
are significant variations in spectral shape of the reflected light, this spectral averaging can result in very different MTFs
and, therefore, different resulting performance. In this paper, we explore the influence of this spectral averaging on
performance utilizing NV-IPM v1.1 (Night Vision Integrated Performance Model). We report the errors in range
performance when a system is characterized with one illumination and the performance is quoted for another. Our
results summarize the accuracy of different assumptions about how a monochromatic MTF can be approximated, and how
the measurement conditions under which a system was characterized should be considered when modeling performance.
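The spectral averaging in question can be written as a source- and responsivity-weighted mean of the per-wavelength MTF; a sketch on a common wavelength grid (a simple weighted sum standing in for the integral):

```python
import numpy as np

def polychromatic_mtf(f, wavelengths, source, response, mtf_of):
    """Source- and responsivity-weighted spectral average of the
    per-wavelength MTF, mtf_of(f, lam). Assumes all spectral quantities
    are sampled on the common grid 'wavelengths' (illustrative sketch)."""
    w = np.asarray(source) * np.asarray(response)   # spectral weights
    mtfs = np.stack([mtf_of(f, lam) for lam in wavelengths])
    return (w[:, None] * mtfs).sum(axis=0) / w.sum()
```

Evaluating this with two different `source` spectra for the same sensor shows directly how the effective MTF, and hence quoted performance, shifts with the illumination used during characterization.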
The necessity of color balancing in day color cameras complicates both laboratory measurements as well as modeling for task performance prediction. In this proceeding, we discuss how the raw camera performance can be measured and characterized. We further demonstrate how these measurements can be modeled in the Night Vision Integrated Performance Model (NV-IPM) and how the modeled results can be applied to additional experimental conditions beyond those used during characterization. We also present the theoretical framework behind the color camera component in NV-IPM, where an effective monochromatic imaging system is created from applying a color correction to the raw color camera and generating the color corrected grayscale image. The modeled performance shows excellent agreement with measurements for both monochromatic and colored scenes. The NV-IPM components developed for this work are available in NV-IPM v1.2.
The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms and armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.
There have been significant improvements in the image quality metrics used in the NVESD model suite in recent
years. The introduction of the Targeting Task Performance (TTP) metric to replace the Johnson criteria yielded
significantly more accurate predictions for under-sampled imaging systems in particular. However, there are
certain cases which cause the TTP metric to predict optimistic performance. In this paper a new metric for
predicting performance of imaging systems is described. This new weighted contrast metric is characterized as
a hybrid of the TTP metric and Johnson criteria. Results from a number of historical perception studies are
presented to compare the performance of the TTP metric and Johnson criteria to the newly proposed metric.
The presence of noise in an IR system adversely impacts task performance in many cases. Typically, when
modeling the effect of noise on task performance, the focus is on the noise generated at the front end of the
system (detector, amplifier, etc.). However, there are cases where noise may arise in the post-sample portion of the system
due to different display technologies, etc. This paper presents a means to determine the effect of display noise on
the sensor system noise under a variety of conditions. A modeling study demonstrates that the effect of display
noise correlates with the predicted modeled performance.
KEYWORDS: Imaging systems, Visual process modeling, Performance modeling, Sensors, Systems modeling, Night vision, Computer architecture, Systems engineering, Night vision systems
The next generation of Army imager performance models is currently under development at NVESD. The aim of this
new model is to provide a flexible and extensible engineering tool for system design which encapsulates all of the
capabilities of the existing Night Vision model suite (NVThermIP, SSCamIP, etc) along with many new design tools and
features including a more intuitive interface, the ability to perform trade studies, and a library of standard and user
generated components. By combining the previous model architectures in one interface the new design is better suited to
capture emerging technologies such as fusion and new sensor modalities. In this paper we will describe the general
structure of the model and some of its current capabilities along with future development plans.
This paper presents a comparison of the predictions of NVThermIP to human perception experiment results in the
presence of large amounts of noise, where the signal-to-noise ratio is around 1. First, the calculations used in the NVESD
imager performance models that deal with sensor noise are described, outlining a few errors that appear in the
NVThermIP code. A perception experiment is designed to test the range performance predictions of NVThermIP with
varying amounts of noise and varying frame rates. NVThermIP is found to overestimate the impact of noise, leading to
pessimistic range performance predictions for noisy systems. The perception experiment results are used to find a best
fit value of the constant α used to relate system noise to eye noise in the NVESD models. The perception results are also
fit to an alternate eye model that handles frame rates below 30 Hz and smoothly approaches an accurate prediction of the
performance in the presence of static noise. The predictions using the fit data show significantly less error than the
predictions from the current model.
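The best-fit procedure described above reduces to a one-parameter least-squares problem. A minimal, generic sketch of such a fit (hypothetical toy model and data; this is not the NVESD eye-noise formulation):

```python
def best_fit_scalar(xs, ys, model, candidates):
    """Return the candidate alpha minimizing the summed squared error
    between model(x, alpha) and the observed values y (illustrative only)."""
    return min(candidates,
               key=lambda a: sum((model(x, a) - y) ** 2 for x, y in zip(xs, ys)))

# Toy example: recover the proportionality constant of y = 2x from a grid search
alphas = [i * 0.1 for i in range(50)]
alpha_fit = best_fit_scalar([1, 2, 3], [2, 4, 6], lambda x, a: a * x, alphas)
```

In practice the fit in the paper is against observed probabilities of identification versus model predictions, but the structure (one free scalar, squared-error objective) is the same.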
The predicted Minimum Resolvable Temperature (MRT) values from five MRT models are compared to the
measured MRT values for eighteen long-wave thermal imaging systems. The most accurate model, which is
based upon the output of NVTherm IP, has an advantage over the other candidate models because it accounts
for performance degradations due to blur and bar sampling. Models based upon the FLIR 92 model tended
to predict overly optimistic values for all frequencies. The earliest models of MRTs for staring arrays did
not incorporate advanced eye effects and had the tendency to provide pessimistic estimates as the frequency
approached the Nyquist limit.
The impact of bit depth on human-in-the-loop recognition and identification performance is of particular importance
when considering trade-offs between resolution and bandwidth of sensor systems. This paper presents the
results from two perception studies designed to measure the effects of quantization and finite bit depth on target
acquisition performance. The results in this paper allow for the inclusion of limited bit depth and quantization
as an additional noise term in NVESD sensor performance models.
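For context, the standard additional-noise term for an ideal uniform quantizer has an RMS value of one quantization step divided by √12. A minimal sketch of that relationship (not the exact NVESD model formulation):

```python
import math

def quantization_noise_sigma(signal_range, bit_depth):
    """RMS noise of an ideal uniform quantizer: sigma = step / sqrt(12)."""
    step = signal_range / (2 ** bit_depth)
    return step / math.sqrt(12)

# Example: quantizing a normalized signal range of 1.0 at two bit depths
sigma_8 = quantization_noise_sigma(1.0, 8)
sigma_12 = quantization_noise_sigma(1.0, 12)
```

Each added bit halves the step size and thus halves the quantization noise, which is why the effect only becomes performance-limiting at low bit depths.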
The current US Army target acquisition models have a dependence on magnification. This is due in part to the
structure of the observer Contrast Threshold Function (CTF) used in the model. Given the shape of the CTF,
both over-magnification and under-magnification can dramatically impact modeled performance. This paper
presents the results from two different perception studies, one using degraded imagery and the other using field
imagery. The results presented demonstrate the correlation between observer performance and model prediction
and provide guidance for accurately representing system performance in under- and over-magnified cases.
Recently the U.S. Army Night Vision and Electronic Sensors Directorate released a revision to the target acquisition
models. The Targeting Task Performance (TTP) metric represents a significant improvement in the
U.S. Army's target acquisition modeling capabilities. The purpose of this paper is to describe the experiments
and calculation methodologies behind generating the task difficulty parameter (V50) value used in the model
to predict range performance. Included in this paper are experimental designs for recognition and identification
tasks for various target sets. Based upon the results of these experiments, new V50 values are calculated to
provide proper guidance for the most accurate performance predictions possible.
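Range predictions from V50 use the target transfer probability function. A sketch using the commonly published TTP form, treating V (the TTP value delivered at a given range) as an input:

```python
def ttpf(V, V50):
    """Target transfer probability function used with the TTP metric:
    P = (V/V50)^E / (1 + (V/V50)^E), with E = 1.51 + 0.24 * (V/V50).
    P = 0.5 exactly when V equals the task difficulty parameter V50."""
    ratio = V / V50
    E = 1.51 + 0.24 * ratio
    return ratio ** E / (1.0 + ratio ** E)

# Probability of task completion rises with delivered cycles V relative to V50
p_half = ttpf(20.0, 20.0)   # V = V50 -> 0.5 by construction
p_high = ttpf(40.0, 20.0)   # twice V50 -> higher probability
```

The experiments described here estimate V50 for each task and target set; the function above then converts delivered TTP at range into a predicted probability.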
Most infrared sensors allow for adjustment of the sensor's gain and level settings. This adjustment of gain and
level affects the contrast of the output image. This process is accounted for in the current US Army thermal
target acquisition model (NVThermIP), using the scene contrast temperature. By changing the scene contrast
temperature in NVThermIP, the system gain can be modified to reflect varying contrast levels presented at the
display. In this paper, the results of perception experiments dealing with image contrast and saturation are
reviewed. These results are compared with predicted performance based on the target task difficulty metric used
in NVThermIP.
US Army thermal target acquisition models based on the Johnson metric do not accurately predict sensor performance with electronic zoom (E-zoom). For this reason, NVTherm2002 removed the limiting E-zoom Modulation Transfer Functions (MTF) to agree better with measured performance results. In certain scenarios, especially with under-sampled staring sensors, the model shows incorrect performance improvements with E-zoomed images. The current Army model NVThermIP, based upon the new targeting task performance (TTP) metric, more accurately models range performance in these cases. E-zoom provides system design flexibility when a system is limited to a single optical field-of-view and/or when eye distance is constrained by ergonomic factors. This paper demonstrates that target acquisition range performance, modeled using the TTP metric, increases only up to an optimal magnification and decreases beyond it. A design "rule of thumb" is provided to determine this optimal magnification. NVThermIP modeled range performance is supported with E-zoom perception experiment results.
KEYWORDS: Speckle, Imaging systems, Sensors, Systems modeling, Performance modeling, Scintillation, Visual process modeling, Contrast transfer function, Data modeling, Modulation transfer functions
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors
Directorate has developed a laser-range-gated imaging system
performance model for the detection, recognition, and identification
of vehicle targets. The model is based on the established US Army
RDECOM CERDEC NVESD sensor performance models of the human system
response through an imaging system. The Java-based model, called
NVLRG, accounts for the effect of active illumination, atmospheric
attenuation, and turbulence effects relevant to LRG imagers, such as
speckle and scintillation, and for the critical sensor and display
components. This model can be used to assess the performance of
recently proposed active SWIR systems through various trade studies.
This paper will describe the NVLRG model in detail, discuss the
validation of recent model components, present initial trade study
results, and outline plans to validate and calibrate the end-to-end
model with field data through human perception testing.
KEYWORDS: 3D acquisition, 3D image processing, 3D modeling, Performance modeling, Target acquisition, 3D displays, Electro optical modeling, Sensors, Optical engineering, Imaging systems
This research determines if there is an improvement in human observer performance, identifying potential weapons or threat objects, when imagery is presented in three dimensions instead of two dimensions. The potential improvement in performance is quantified by evaluating the change in the N50 cycle criteria, for this task and target set. The data suggests that as much as a 30% improvement in range capability may result from using 3-D imagery. The advent of affordable, practical, and real-time 3-D displays has led to a desire to evaluate and quantify the performance trade space for this application of the technology. Imagery was collected using a dual-camera stereo imaging system. A series of eight different resolutions were presented to observers in both 2-D and 3-D formats. The set of targets consisted of 12 handheld objects. The objects were a mix of potential threats or weapons and possible confusers. For example, a cellular telephone and a hand grenade are two such objects. This target set is the same target set used in previously reported research that determined the N50 requirements for handheld objects for both visible and infrared imagers.
Most sensors allow the user to adjust, at will, a parameter that modifies the displayed scene contrasts in the image. This parameter is usually referred to as gain. The current US Army thermal target acquisition model (NVThermIP) accounts for gain by introducing a scaling parameter called scene contrast temperature. First, the current US Army theory of target acquisition is reviewed and the particular means for modeling sensor gain are highlighted. Then the definitions of scene and target contrast are discussed. The results of two paired comparison perception experiments are analyzed. One of the experiments gives insight into a target identification task, and the other gives some insight into a search task. Conclusions regarding the limits of applicability of values for scene contrast temperature are then discussed.
Noise in an imaging infrared (IR) sensor is one of the major limitations on its performance. As such, noise estimation is one of the major components of imaging IR sensor performance models and modeling programs. When computing noise, current models assume that the target and background are either at or near a temperature of 300 K. This paper examines how the temperature of the scene impacts the noise in IR sensors and their performance. It exhibits a strategy that can be used to make a 300 K assumption-based model to compute the correct noise. It displays the results of some measurements of signatures of a cold target against a cold background. Range performance of a notional 3rd Gen sensor (midwave IR and long wave IR) is then modeled as a function of scene background temperature.
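The scene-temperature effect can be illustrated with the thermal derivative of Planck's law, which sets the signal available per kelvin of target-to-background contrast. A sketch using a numerical derivative at a single wavelength (standing in for the band-integrated quantities an actual model would use):

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
C = 2.99792458e8    # speed of light [m/s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr m)]."""
    x = H * C / (wavelength_m * K * temp_k)
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / math.expm1(x)

def thermal_derivative(wavelength_m, temp_k, dT=0.01):
    """Numerical dL/dT: radiance change per kelvin of scene contrast."""
    return (planck_radiance(wavelength_m, temp_k + dT)
            - planck_radiance(wavelength_m, temp_k - dT)) / (2.0 * dT)

# Signal per kelvin at a 240 K scene relative to the usual 300 K assumption;
# the MWIR (here 4 um) falls off much faster with cold scenes than the LWIR (10 um)
mwir_ratio = thermal_derivative(4e-6, 240.0) / thermal_derivative(4e-6, 300.0)
lwir_ratio = thermal_derivative(10e-6, 240.0) / thermal_derivative(10e-6, 300.0)
```

This is why a fixed 300 K assumption overstates the available contrast signal, and hence understates the effective noise-equivalent contrast, for the cold-target/cold-background case measured in the paper.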
This paper proposes a practical sensor deblur filtering method for images that are contaminated with noise. A sensor blurring function is usually modeled via a Gaussian-like function having a bell shape. The straightforward inverse function results in magnification of noise at high frequencies. To address this issue, we apply a special window to the inverse blurring function. This special window is called the power window: a Fourier-based smoothing window that preserves most of the spatial frequency components in the pass-band and attenuates quickly in the transition-band. The power window is differentiable at the transition point, which gives a desired smoothness property and limits the ripple effect. Utilizing properties of the power window, we design the deblurring filter adaptively by estimating the energy of the signal and noise of the image to determine the pass-band and transition-band of the filter. The deblurring filter design criteria are: a) the filter magnitude is less than one at frequencies where the noise is stronger than the desired signal (transition-band); b) the filter magnitude is greater than one at the other frequencies (pass-band). Therefore, the adaptively designed deblurring filter is able to deblur the image by a desired amount, based on the estimated or known blurring function, while suppressing the noise in the output image. The deblurring filter performance is demonstrated by a human perception experiment in which 10 observers identified 12 military targets at 12 aspect angles. Target identification probabilities are reported for blurred, deblurred, noisy blurred (at two noise levels), and deblurred noisy images.
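The exact power-window definition is not reproduced in the abstract; as an illustration of the stated design criteria (gain above one in the pass-band where the blurred signal still dominates, rolled off below one where noise dominates), a windowed inverse filter with a generic smooth roll-off standing in for the power window can be sketched as:

```python
import numpy as np

def windowed_inverse_filter(blur_mtf, noise_to_signal, max_gain=10.0):
    """Windowed inverse of a blurring MTF (illustrative stand-in for the
    paper's power window). The capped inverse restores attenuated
    frequencies; the smooth window pulls the gain below one where the
    noise-to-signal ratio exceeds the squared MTF."""
    inverse = 1.0 / np.maximum(blur_mtf, 1.0 / max_gain)    # capped inverse blur
    window = blur_mtf ** 2 / (blur_mtf ** 2 + noise_to_signal)  # smooth roll-off
    return inverse * window

# 1-D example: Gaussian-like blur MTF over normalized spatial frequency
freq = np.linspace(0.0, 1.0, 256)
mtf = np.exp(-(freq / 0.3) ** 2)
filt = windowed_inverse_filter(mtf, noise_to_signal=1e-2)
# Mid-band gain exceeds one (deblurring); high-frequency gain falls below one
```

Multiplying the image spectrum by such a filter boosts the blurred mid frequencies while suppressing the noise-dominated high frequencies, which is the behavior the paper's criteria (a) and (b) describe.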
KEYWORDS: 3D acquisition, 3D image processing, 3D modeling, Performance modeling, 3D displays, Imaging systems, Electro optical modeling, Contrast transfer function, Sensors, Cameras
The objective of this research was to determine if there was an improvement in human observer performance, identifying potential weapons or threat objects, when imagery is presented in three dimensions instead of two dimensions. It was also desired to quantify this potential improvement in performance by evaluating the change in N50 cycle criteria for this task and target set. The advent of affordable, practical, and real-time 3-D displays has led to a desire to evaluate and quantify the performance trade space for this potential application of the technology.
The imagery was collected using a dual-camera stereo imaging system. A series of eight different resolutions were presented to observers in both two- and three-dimensional formats. The set of targets consisted of twelve handheld objects. The objects were a mix of potential threats or weapons and possible confusers. Two such objects, for example, are a cellular telephone and a hand grenade. This target set was the same target set used in previously reported research that determined the N50 requirements for handheld objects for both visible and infrared imagers.
KEYWORDS: Mid-IR, Long wavelength infrared, Image fusion, Sensors, Staring arrays, Received signal strength, Image analysis, Black bodies, Medium wave, Analytical research
Different systems are optimized for and capable of addressing issues in different spectral regions; each sensor has its own advantages and disadvantages. The research presented in this paper focuses on the fusion of the MWIR (3-5 μm) and LWIR (8-12 μm) bands on one IR Focal Plane Array (FPA). The information is processed and then displayed in a single image in an effort to analyze possible benefits of combining the two bands. The analysis addresses how the two bands differ by revealing the dominant band in terms of temperature value for different objects in a given scene, specifically the urban environment.
Existing target acquisition models are effective in the prediction of Target Acquisition (TA) performance for monochrome or single-band imagers (visible or infrared). There is currently no performance model for color imagers or fused imagery rendered on color displays. This study is a first step in extending the intensity contrast-based TA models to a three-dimensional color space. The monochrome TA model was developed with a variable, perceived signal-to-noise ratio threshold (SNRT), in contrast space, to determine target detection probability. In this research, we determined the noise-limited SNRT in the chroma direction for three equally spaced hue angles of CIELAB (Commission Internationale de l'Éclairage L*a*b*) color space. Comparison of the chroma SNRT at these three hues of L*a*b* space may allow an extension of the noise-limited sensor model to this three-dimensional space for color display modeling. Such a model improvement would result in a sensor performance model for both color imagers and for fused-imagery color displays.
This study investigates the target detection signal-to-noise threshold of single targets in a noisy color background. Noise is described in terms of signals of both target and noise at three different hues. Target images are presented to observers, who are required to locate the target and select it with a mouse. The format of each experiment is that of a "forced choice" human detection perceptual experiment. The images are 64 pixels square with targets of one or four pixels. The results are the signal-to-noise thresholds in the chroma direction for the three different hues.
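The chroma-direction signal-to-noise quantity underlying these thresholds can be sketched as follows, using the standard CIELAB chroma definition (the values below are hypothetical, and the study's exact threshold-estimation procedure is not reproduced here):

```python
import math

def chroma(a_star, b_star):
    """CIELAB chroma: C* = sqrt(a*^2 + b*^2)."""
    return math.hypot(a_star, b_star)

def chroma_snr(target_ab, background_ab, sigma_chroma):
    """Signal-to-noise ratio along the chroma direction: the chroma
    difference between target and background divided by the RMS
    chroma noise (illustrative definition)."""
    delta_c = abs(chroma(*target_ab) - chroma(*background_ab))
    return delta_c / sigma_chroma

# Hypothetical target and background samples at one hue angle
snr = chroma_snr(target_ab=(40.0, 10.0), background_ab=(20.0, 5.0),
                 sigma_chroma=4.0)
```

A detection threshold (SNRT) is then the value of this ratio at which observers reach the criterion probability of locating the target, measured separately at each of the three hue angles.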