In the process of developing new technologies for displaying 360° visual data supporting Local Area Awareness (LAA)
in complex environments (e.g. tactical military environments), one important, though often overlooked, area is system
evaluation. Without an accurate and reliable evaluation, it is impossible to determine which elements of the new display
are useful and which need further development. Evaluating a system properly requires two types of tests: one for testing
capabilities (e.g. given a display, what types of threats can be detected and identified?), and another for probing whether
a given display configuration is useful (e.g. will the human operator use this more complex interface appropriately in the
real world?). While established methodologies exist for the former, the latter is often a much less tractable problem, primarily because of the difficulty of modeling the complexity of the real world in a simulated
environment. This paper presents a methodology for architecting a distributed simulation to support evaluation of a 360°
LAA display system for usefulness to human participants within virtual environments. An evaluation that leveraged this methodology produced several unexpected results; for example, the experiment revealed a much stronger "keyhole effect" than anticipated, with participants focusing almost entirely on the forward 180° even when presented with imagery covering the full 360°. Such results demonstrate the utility of the
methodology, particularly for developing evaluations that discover unexpected aspects of operational use in complex
environments.
KEYWORDS: High power microwaves, Data modeling, Performance modeling, Modeling and simulation, Intelligence systems, Situational awareness sensors, Systems modeling, Visualization, Computer simulations, Virtual reality
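As a rough illustration of how such a keyhole effect might be quantified, the sketch below bins logged view-direction samples and reports the fraction of time spent in the forward 180° sector. This is a minimal stdlib-Python sketch; the sampling scheme, function names, and sector threshold are assumptions for illustration, not the instrumentation used in the study.

from typing import Iterable


def keyhole_ratio(view_azimuths_deg: Iterable[float]) -> float:
    """Fraction of view-direction samples falling in the forward 180-degree sector."""
    samples = list(view_azimuths_deg)
    if not samples:
        return 0.0
    # Normalize each azimuth to [-180, 180); "forward" is -90 to +90 degrees.
    forward = sum(
        1 for a in samples if -90.0 <= ((a + 180.0) % 360.0) - 180.0 < 90.0
    )
    return forward / len(samples)


if __name__ == "__main__":
    # Toy log: most samples near 0 degrees (straight ahead), a few rear glances.
    log = [2.0, -15.0, 10.0, 170.0, -5.0, 0.0, 30.0, -160.0]
    print(f"forward-sector dwell fraction: {keyhole_ratio(log):.2f}")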
The proliferation of intelligent systems in today's military demands increased focus on the optimization of human-robot
interactions. Traditional studies in this domain involve large-scale field tests that require humans to operate semiautomated
systems under varying conditions within military-relevant scenarios. However, provided that adequate
constraints are employed, modeling and simulation can be a cost-effective alternative and supplement. The current
presentation discusses a simulation effort that was executed in parallel with a field test in which Soldiers operated military
vehicles in an environment that represented key elements of the true operational context. In this study, "constructive"
human operators were designed to represent average Soldiers executing supervisory control over an intelligent ground
system. The constructive Soldiers were simulated performing the same tasks as those performed by real Soldiers during
a directly analogous field test. Exercising the models in a high-fidelity virtual environment provided predictive results
that represented actual performance in certain aspects, such as situational awareness, but diverged in others. These
findings largely reflected the quality of the modeling assumptions used to design behaviors and of the information available from which to articulate principles of operation. Ultimately, the predictive analyses partially supported expectations, and the deficiencies could be explained through Soldier surveys, experimenter observations, and previously identified knowledge gaps.
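To make the notion of a "constructive" operator concrete, the following minimal Python sketch models an operator as a detection probability plus a response-latency distribution and scores it over many simulated events. The structure and every parameter value are illustrative assumptions; they are not the actual models used in the study described above.

import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConstructiveOperator:
    detect_prob: float = 0.8      # assumed chance of noticing an event per exposure
    mean_latency_s: float = 2.5   # assumed mean response time once detected


def respond(op: ConstructiveOperator, rng: random.Random) -> Optional[float]:
    """Return a response latency in seconds, or None if the event is missed."""
    if rng.random() > op.detect_prob:
        return None
    return rng.expovariate(1.0 / op.mean_latency_s)


if __name__ == "__main__":
    rng = random.Random(42)
    op = ConstructiveOperator()
    outcomes = [respond(op, rng) for _ in range(1000)]
    hits = [t for t in outcomes if t is not None]
    print(f"detection rate: {len(hits) / len(outcomes):.2f}")
    print(f"mean response latency: {sum(hits) / len(hits):.2f} s")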
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the
battlefield, soldiers will increasingly face new challenges in workload management. The next-generation warfighter will
be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future
battlefield operational scenarios involving the use of automation, including the specification of existing and proposed
technologies, will provide significant insight into potential problem areas regarding soldier workload.
The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an
Army technology objective program to analyze and evaluate the effect of automated technologies and their associated
control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior
Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive
simulations of military scenarios with various deployments of interface technologies in order to evaluate operator
effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The Scalable SMI provides a configurable machine interface application that can adapt to several hardware platforms by recognizing the
physical space limitations of the display device.
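As a rough illustration of that scaling idea, the Python sketch below selects an interface layout tier from the pixel space available on the display device. The tier names, breakpoints, and class names are hypothetical and are not drawn from TARDEC's actual SMI implementation.

from dataclasses import dataclass


@dataclass
class DisplaySpec:
    width_px: int
    height_px: int


def select_layout(display: DisplaySpec) -> str:
    """Pick a layout tier from the limiting (smaller) display dimension."""
    limiting = min(display.width_px, display.height_px)
    if limiting < 480:
        return "handheld"         # minimal controls, single map pane
    if limiting < 900:
        return "vehicle-mounted"  # map plus condensed robot status
    return "workstation"          # full map, status, and tasking panels


if __name__ == "__main__":
    for spec in (DisplaySpec(320, 240), DisplaySpec(1024, 768), DisplaySpec(1920, 1200)):
        print(spec, "->", select_layout(spec))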
This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both
systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI
tasks, such as route planning, and the Scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper presents the background of each system and details the system integration approach.
KEYWORDS: Intelligence systems, Roads, Standards development, Navigation systems, Reconnaissance, Process modeling, Visualization, Data modeling, Control systems, Vehicle control
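As a rough sketch of the ISBS-SMI integration pattern described in the abstract above, the following stdlib-Python example has a simulation side publish entity-state "stimuli" onto a shared bus that an interface side drains and displays. The message fields, class names, and transport are assumptions for illustration; the actual integration mechanism is not specified here.

import json
import queue
from dataclasses import asdict, dataclass


@dataclass
class EntityState:
    entity_id: str
    x_m: float
    y_m: float
    task: str  # e.g. "route_planning"


def publish(bus: "queue.Queue[str]", state: EntityState) -> None:
    """Simulation side: push a JSON-encoded stimulus onto the shared bus."""
    bus.put(json.dumps(asdict(state)))


def render(bus: "queue.Queue[str]") -> None:
    """Interface side: drain the bus and display each stimulus."""
    while not bus.empty():
        msg = json.loads(bus.get())
        print(f"[SMI] {msg['entity_id']} at ({msg['x_m']}, {msg['y_m']}) task={msg['task']}")


if __name__ == "__main__":
    bus: "queue.Queue[str]" = queue.Queue()
    publish(bus, EntityState("UGV-1", 120.5, 43.2, "route_planning"))
    publish(bus, EntityState("operator-3", 118.0, 40.0, "supervisory_control"))
    render(bus)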
The level of automation in combat vehicles being developed for the Army's objective force is greatly increased over that of the Army's legacy force. This automation is taking many forms in emerging vehicles, ranging from operator decision aids to fully autonomous unmanned systems. The development of these intelligent vehicles requires a thorough understanding of all of the intelligent behavior that the system needs to exhibit so that designers can allocate functionality to humans and/or machines. Traditional system specification techniques have focused heavily on the functional description of the major systems and implicitly assumed that a well-trained crew would operate these systems in a manner that accomplishes the tactical mission assigned to the vehicle. In order to allocate some or all of these intelligent behaviors to machines in future vehicles, it is necessary to identify and describe these intelligent behaviors in detail. In this paper, we describe an effort to develop an intelligent systems (IS) ontology using Protégé. The goal of this effort is to develop a common, implementation-independent, extendable knowledge source for researchers and developers in the intelligent vehicle community that will:
* Provide a standard set of domain concepts along with their attributes and inter-relations
* Allow for knowledge capture and reuse
* Facilitate systems specification, design, and integration, and
* Accelerate research in the field.
This paper describes the methodology we have used to identify knowledge in this domain and an approach to capturing and visualizing that knowledge in the ontology.
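As a minimal illustration of the ontology idea described above, the stdlib-Python sketch below represents domain concepts with attributes and typed relations between them. The concept names, attribute values, and relation labels are illustrative guesses, not the contents of the actual Protégé ontology.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Concept:
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class Ontology:
    concepts: Dict[str, Concept] = field(default_factory=dict)
    # Relations are stored as (subject, predicate, object) triples.
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

    def add_concept(self, concept: Concept) -> None:
        self.concepts[concept.name] = concept

    def relate(self, subject: str, predicate: str, obj: str) -> None:
        self.relations.append((subject, predicate, obj))


if __name__ == "__main__":
    onto = Ontology()
    onto.add_concept(Concept("UnmannedGroundVehicle", {"mobility": "tracked"}))
    onto.add_concept(Concept("ObstacleAvoidance", {"category": "intelligent_behavior"}))
    onto.relate("UnmannedGroundVehicle", "exhibits", "ObstacleAvoidance")
    for subject, predicate, obj in onto.relations:
        print(subject, predicate, obj)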