We present an image quality metric and prediction model for SAR imagery that addresses automated information
extraction and exploitation by imagery analysts. This effort draws on our team's direct experience with the development
of the Radar National Imagery Interpretability Ratings Scale (Radar NIIRS), the General Image Quality Equations
(GIQE) for other modalities, and extensive expertise in ATR characterization and performance modeling. In this study,
we produced two separate GIQEs: one to predict Radar NIIRS and one to predict Automated Target Detection (ATD)
performance. The Radar NIIRS GIQE is most significantly influenced by resolution, depression angle, and depression
angle squared. The inclusion of several image metrics was shown to improve performance. Our development of an ATD
GIQE showed that resolution and clutter characteristics (e.g., clear, forested, urban) are the dominant explanatory
variables. As was the case with the Radar NIIRS GIQE, inclusion of image metrics again increased performance, but the
improvement was significantly more pronounced. Analysis also showed that a strong relationship exists between ATD
and Radar NIIRS, as indicated by a correlation coefficient of 0.69; however, this correlation is not strong enough that we
would recommend a single GIQE be used for both ATD and NIIRS prediction.
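A quality equation of the kind described above, predicting NIIRS from resolution, depression angle, and depression angle squared, can be sketched as an ordinary least-squares fit. The training data and fitted coefficients below are synthetic placeholders for illustration only, not values from this study.

```python
import numpy as np

# Synthetic training data: slant-plane resolution (m) and depression
# angle (deg), paired with analyst NIIRS ratings. Illustrative values only.
res = np.array([0.3, 0.5, 1.0, 2.0, 3.0, 0.75, 1.5])
dep = np.array([30.0, 45.0, 20.0, 60.0, 35.0, 50.0, 25.0])
niirs = np.array([6.5, 5.8, 5.0, 3.9, 3.2, 5.3, 4.4])

# Design matrix mirroring the dominant explanatory terms:
# intercept, log-resolution, depression angle, depression angle squared.
X = np.column_stack([np.ones_like(res), np.log10(res), dep, dep ** 2])
coeffs, *_ = np.linalg.lstsq(X, niirs, rcond=None)

def predict_niirs(resolution_m, depression_deg):
    """Evaluate the fitted quality equation at a new operating point."""
    x = np.array([1.0, np.log10(resolution_m),
                  depression_deg, depression_deg ** 2])
    return float(x @ coeffs)

print(predict_niirs(1.0, 40.0))
```

The fitted log-resolution coefficient should come out negative, encoding the expectation that finer resolution raises predicted interpretability.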
In this paper, we focus on the problem of automated surveillance in a parking lot scenario. We call our research system
VANESSA, for Video Analysis for Nighttime Surveillance and Situational Awareness. VANESSA is capable of: 1)
detecting moving objects via background modeling and false motion suppression, 2) tracking and classifying pedestrians
and vehicles, and 3) detecting events such as a person entering or exiting a vehicle. Moving object detection utilizes a
multi-stage cascading approach to identify pixels that belong to true objects and reject spurious motion (e.g.,
due to vehicle headlights or moving foliage). Pedestrians and vehicles are tracked using a multiple hypothesis tracker
coupled with a particle filter for state estimation and prediction. The space-time trajectory of each tracked object is
stored in an SQL database along with sample imagery to support video forensics applications. The detection of pedestrians
entering/exiting vehicles is accomplished by first estimating the three-dimensional pose and the corresponding entry
and exit points of each tracked vehicle in the scene. A pedestrian activity model is then used to probabilistically assign
pedestrian tracks that appear or disappear in the vicinity of these entry/exit points. We evaluate the performance of
tracking and pedestrian-vehicle association on an extensive data set collected in a challenging real-world scenario.
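The pedestrian-vehicle association step can be sketched as a maximum-likelihood assignment: a pedestrian track that appears or disappears near a vehicle's estimated door position is scored under a spatial likelihood model. The isotropic Gaussian model, noise scale, and acceptance threshold below are hypothetical simplifications, not the system's actual activity model.

```python
import math

def gaussian_score(point, door, sigma=1.5):
    """Likelihood of a track endpoint given a vehicle door location
    (2D isotropic Gaussian, unnormalized)."""
    dx, dy = point[0] - door[0], point[1] - door[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def associate(track_endpoint, vehicle_doors, min_score=0.1):
    """Assign a pedestrian track endpoint to the most likely vehicle
    entry/exit point, or None if no door is plausible."""
    best_id, best_score = None, min_score
    for vehicle_id, door in vehicle_doors.items():
        score = gaussian_score(track_endpoint, door)
        if score > best_score:
            best_id, best_score = vehicle_id, score
    return best_id

doors = {"car_7": (12.0, 4.0), "car_9": (30.0, 18.0)}
print(associate((12.5, 4.3), doors))   # endpoint near car_7's door
print(associate((50.0, 50.0), doors))  # far from every door -> None
```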
Commercial security and surveillance systems offer advanced sensors, optics, and display capabilities but lack intelligent
processing. This necessitates human operators who must closely monitor video for situational awareness and threat
assessment. For instance, urban environments are typically in a state of constant activity, which generates numerous
visual cues, each of which must be examined so that potential security breaches do not go unnoticed. We are building a
prototype system called BALDUR (Behavior Adaptive Learning during Urban Reconnaissance) that learns probabilistic
models of activity for a given site using online and unsupervised training techniques. Once a camera system is set up, no
operator intervention is required for the system to begin learning patterns of activity. Anomalies corresponding to unusual
or suspicious behavior are automatically detected in real time. All moving object tracks (pedestrians, vehicles,
etc.) are efficiently stored in a relational database for use in training. The database is also well suited for answering human-
initiated queries. An example of such a query is, "Display all pedestrians who approached the door of the building
between the hours of 9:00pm and 11:00pm." This forensic analysis tool complements the system's real-time situational
awareness capabilities. Several large datasets have been collected for the evaluation of the system, including one database
containing an entire month of activity from a commercial parking lot.
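The forensic-query capability can be sketched against a relational schema of track observations. The table layout, the door coordinates, and the "approached the door" predicate below are hypothetical simplifications of whatever schema the real system uses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tracks (
    track_id INTEGER, class TEXT, x REAL, y REAL, t TEXT)""")
# Illustrative observations: (track, class, position, timestamp).
rows = [
    (1, "pedestrian", 10.2, 5.1, "2023-06-01 21:15:00"),
    (2, "pedestrian", 80.0, 40.0, "2023-06-01 22:00:00"),
    (3, "vehicle",    10.5, 5.0, "2023-06-01 21:30:00"),
    (4, "pedestrian", 10.8, 4.9, "2023-06-01 23:45:00"),
]
conn.executemany("INSERT INTO tracks VALUES (?, ?, ?, ?, ?)", rows)

# "Display all pedestrians who approached the door of the building
#  between the hours of 9:00pm and 11:00pm" -- door assumed at (10, 5),
# with "approached" meaning within 2 units of it.
query = """
SELECT DISTINCT track_id FROM tracks
WHERE class = 'pedestrian'
  AND (x - 10)*(x - 10) + (y - 5)*(y - 5) <= 4
  AND time(t) BETWEEN '21:00:00' AND '23:00:00'
"""
result = [r[0] for r in conn.execute(query).fetchall()]
print(result)  # only track 1 is a pedestrian, near the door, in the window
```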
Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS, for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT-based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain, tracking objects through occlusions and object interactions.
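The particle-filter side of such a tracker can be sketched as the standard predict/weight/resample cycle. The constant-velocity motion model, noise scales, and measurement sequence below are assumptions for illustration, not ALPS's actual models or its MHT coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt=1.0, process_noise=0.5):
    """Propagate state = [x, y, vx, vy] under a constant-velocity model."""
    particles = particles.copy()
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles[:, 2:4] += rng.normal(0.0, process_noise, particles[:, 2:4].shape)
    return particles

def weight(particles, measurement, meas_sigma=2.0):
    """Gaussian measurement likelihood on position, normalized to sum to 1."""
    d2 = np.sum((particles[:, 0:2] - measurement) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * meas_sigma ** 2))
    return w / w.sum()

def resample(particles, weights):
    """Systematic (low-variance) resampling of the particle set."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

# Track a target moving roughly +1 unit/frame in x.
particles = rng.normal([0, 0, 1, 0], [1, 1, 0.1, 0.1], size=(500, 4))
for z in [np.array([1.0, 0.0]), np.array([2.0, 0.1]), np.array([3.0, 0.0])]:
    particles = predict(particles)
    particles = resample(particles, weight(particles, z))
estimate = particles[:, 0:2].mean(axis=0)
print(estimate)
```

In the full tracker, each MHT hypothesis branch would maintain its own particle set, with hypothesis scores feeding back into the update and resampling steps.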