Tracking systems often provide sets of tracks rather than the raw detections obtained from sensors. Integrating these track sets into other tracking systems is challenging because the usual sensor models do not apply. In this work we present a method for fusing track data from multiple sensors in a central fusion node. The method uses the covariance intersection algorithm as a pseudo-Kalman filter, integrated into a multi-sensor, multi-target tracker within a Bayesian paradigm. This makes it possible to (i) integrate the proposed fusion method seamlessly into any existing tracker; (ii) modify multi-target trackers to take a set of tracks as a set of measurements; and (iii) perform gating to enable data association between tracks. The method is demonstrated in simulations using several target trackers within the Stone Soup tracking framework.
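As a rough illustration of the fusion step (not the paper's implementation), covariance intersection combines two track estimates (a, A) and (b, B) by taking a convex combination of their information matrices, with the weight chosen to optimise some measure of the fused covariance. The minimal NumPy/SciPy sketch below assumes the weight is selected to minimise the trace of the fused covariance; in a tracker this operation would play the role of a pseudo-Kalman update inside the data-association loop.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a, A, b, B):
    """Fuse two Gaussian track estimates (mean, covariance) with
    covariance intersection. Illustrative sketch only."""
    A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)

    def fused_cov(omega):
        # Convex combination of the information matrices, inverted back
        return np.linalg.inv(omega * A_inv + (1 - omega) * B_inv)

    # Choose omega in [0, 1] minimising the trace of the fused covariance
    res = minimize_scalar(lambda w: np.trace(fused_cov(w)),
                          bounds=(0, 1), method='bounded')
    omega = res.x
    P = fused_cov(omega)
    c = P @ (omega * A_inv @ a + (1 - omega) * B_inv @ b)
    return c, P, omega

# Example: fuse two 2-D track estimates of the same target
a = np.array([1.0, 2.0]); A = np.diag([4.0, 1.0])
b = np.array([1.5, 1.8]); B = np.diag([1.0, 3.0])
c, P, omega = covariance_intersection(a, A, b, B)
```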
The tracking and state estimation community is broad, with diverse interests ranging from algorithmic research and development, through applications that solve specific problems, to systems integration. Yet until recently, in contrast to similar communities, few tools for common development and testing were widespread. This was the motivation for the development of Stone Soup, the open-source tracking and state estimation framework. The goal of Stone Soup is to conceive the solution to any tracking problem as a machine built from components of varying degrees of sophistication, each suited to a particular purpose. The encapsulated, modular nature of these components allows efficiency and reuse; metrics give confidence in evaluation; and the open nature of the code promotes collaboration. In April 2019, the initial beta version of Stone Soup (v0.1b) was released, and though development continues apace, the framework is stable, versioned, and subject to review. In this paper, we summarise the key features of, and enhancements to, Stone Soup, which has advanced considerably since the original beta release, and highlight several uses to which Stone Soup has been applied. These include a drone data fusion challenge, sensor management, target classification, and multi-object tracking in video using TensorFlow object detection. We also provide introductory and tutorial information of interest to new users.
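To make the component idea concrete, the sketch below assembles a minimal single-target Kalman filter from Stone Soup components, in the style of the framework's public tutorials. The exact module paths and constructor arguments are assumptions that may differ between Stone Soup versions.

```python
import numpy as np
from datetime import datetime
from stonesoup.models.transition.linear import (
    CombinedLinearGaussianTransitionModel, ConstantVelocity)
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.predictor.kalman import KalmanPredictor
from stonesoup.updater.kalman import KalmanUpdater
from stonesoup.types.state import GaussianState

# Motion and measurement models: 2-D constant velocity, position-only measurements
transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.05), ConstantVelocity(0.05)])
measurement_model = LinearGaussian(
    ndim_state=4, mapping=(0, 2), noise_covar=np.diag([0.25, 0.25]))

# Predictor and updater are reusable components of the tracking "machine"
predictor = KalmanPredictor(transition_model)
updater = KalmanUpdater(measurement_model)

# Prior state: [x, vx, y, vy]
prior = GaussianState(
    np.array([[0.], [1.], [0.], [1.]]),
    np.diag([1.5, 0.5, 1.5, 0.5]),
    timestamp=datetime(2019, 4, 1))
```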
Over recent years, drone identification and detection has become an increasing concern for public safety and security. In this paper, we explore the use of convolutional neural networks (CNNs) applied to continuous and discrete wavelet transform (CWT/DWT) scalograms of radar signals reflected from drones. In particular, we use the Martin-Mulgrew model to simulate the radar signals reflected from five different types of drone for X-band and W-band radars. The drones have different blade lengths and blade rotation rates, and these parameters affect their respective scalograms, making the classification problem amenable to CNNs. Results with real radar data sets collected in the laboratory are also presented.
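As a rough illustration of such a pipeline (not the paper's architecture), the sketch below computes a CWT scalogram with PyWavelets and feeds it to a small convolutional classifier. The wavelet choice, scale range, network layout, and the synthetic input standing in for a Martin-Mulgrew simulated echo are all placeholder assumptions.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def cwt_scalogram(signal, scales=np.arange(1, 65), wavelet='morl'):
    """Return |CWT| coefficients as a (scales x time) image."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return np.abs(coeffs).astype(np.float32)

class ScalogramCNN(nn.Module):
    """Small CNN classifying scalogram images into drone types."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)))
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Placeholder input standing in for a simulated drone radar return
signal = np.random.randn(1024)
image = torch.from_numpy(cwt_scalogram(signal)).unsqueeze(0).unsqueeze(0)  # (1, 1, scales, time)
logits = ScalogramCNN(num_classes=5)(image)
```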