Infrared (IR) imagery sequences are commonly used for detecting moving targets in the presence of evolving cloud clutter or background noise. This research focuses on slow-moving point targets smaller than one pixel, such as aircraft at long range from the sensor.
Target detection performance is measured via the variance estimation ratio score (VERS), which assigns each pixel of a sequence a score; a high score indicates that a target is suspected to traverse that pixel. VERS uses two parameters, a long-term and a short-term window, which were predetermined individually for each sequence according to the target velocity and to the intensity and amount of clouds, as opposed to clear sky (noise), in the background. In this work, we examine the correlation between the sequences' spatial and temporal features and these two windows. In addition, we modify the VERS calculation to enhance target detection and to reduce cloud-edge scores and false detections. We conclude by evaluating VERS as a detection measure in both its original and modified versions. The test sequences are real IR sequences as well as their corresponding compressed versions, produced with our designated temporal DCT quantization method.
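The abstract does not give the exact VERS formula, only that it scores each pixel from variance estimates over a long-term and a short-term temporal window. As a rough illustration only, assuming VERS compares the two per-pixel variance estimates as a ratio (the ordering and normalization here are guesses, not the authors' definition), a score map might be sketched as:

```python
import numpy as np

def vers_map(frames, long_win=64, short_win=8):
    """Hypothetical per-pixel variance-ratio score map.

    frames: array of shape (T, H, W). For each pixel, the temporal
    variance over a long history is compared with the variance over
    a recent short window; a high ratio would flag pixels that a dim,
    slow target may be traversing while short-term noise stays low.
    The exact VERS definition is not given in the abstract; this is
    an illustrative assumption.
    """
    t = frames.shape[0]
    assert t >= long_win >= short_win >= 2
    long_var = frames[-long_win:].var(axis=0)
    short_var = frames[-short_win:].var(axis=0)
    eps = 1e-12  # guard against division by zero on flat background
    return long_var / (short_var + eps)
```

In a real detector the two window lengths would be tuned per sequence, as the abstract describes, according to target velocity and cloud content.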
We analyze the connection between viewer-perceived quality and encoding schemes. The encoding schemes depend on the transmission bit rate, MPEG compression depth, frame size, and frame rate in a constant-bit-rate (CBR) transmission of an MPEG-2 video sequence. The compressed video sequence is transmitted over a lossy communication network with quality of service (QoS) and a given Internet Protocol (IP) loss model. On the end-user side, viewer-perceived quality depends on changes in the network conditions, the video compression, and the complexity of the video content. We demonstrate that, when the impacts of coding bit rate, packet loss, and video complexity are considered jointly, there is an optimal encoding scheme, which also depends on the video content. A set of subjective tests demonstrates that this optimal encoding scheme maximizes viewer-perceived quality.
One result of recent advances in the components of imaging systems is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for image compression implemented by the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted through a lossy packet network. Although the MTF is not an ideal parameter for measuring image quality after compression and transmission, because compression is a nonlinear, shift-variant process, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can easily be combined with the overall MTF of the imaging system.
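The abstract treats the compression stage as an approximately linear, shift-invariant system so that an MTF can be attributed to it. Under that assumption, one common way to estimate such an MTF is to compare the magnitude spectra of an image before and after the compress/decompress cycle. The function name and the radial averaging below are illustrative choices, not the authors' method:

```python
import numpy as np

def compression_mtf(original, degraded):
    """Approximate the MTF of a compression stage as the ratio of the
    radially averaged magnitude spectrum of the degraded
    (compressed/decompressed) image to that of the original.

    Only meaningful under the approximately linear, shift-invariant
    conditions discussed in the text; this is a sketch, not a
    definitive implementation.
    """
    f_in = np.abs(np.fft.fftshift(np.fft.fft2(original)))
    f_out = np.abs(np.fft.fftshift(np.fft.fft2(degraded)))
    h, w = original.shape
    yy, xx = np.indices((h, w))
    # Integer radial frequency bins measured from the spectrum center.
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    num = np.bincount(r.ravel(), weights=f_out.ravel())
    den = np.bincount(r.ravel(), weights=f_in.ravel())
    return num / np.maximum(den, 1e-12)  # MTF per radial frequency bin
```

An estimate of this kind can then be cascaded (multiplied bin by bin) with the optics and detector MTFs to approximate the overall system MTF, which is the advantage the abstract points to.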