Here, we present the integration of a compact optically pumped magnetometer (QuSpin QTFM) with a UAV and characterize its performance for vehicle sensing from the drone platform. We first characterize the magnetic noise produced by the drone rotors to understand the detection limits it imposes through reduced SNR. We then demonstrate vehicle detection at standoff distances of 5 m and 10 m. We apply a volumetric magnetic field background subtraction method to enhance vehicle detection, and we characterize the impact of UAV velocity on field detection.
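As an illustrative sketch only (not the authors' implementation, whose details are not given in the abstract), volumetric background subtraction can be pictured as looking up a pre-mapped background field on a 3-D grid at each measurement position and subtracting it from the total-field reading; the function and variable names below are hypothetical.

```python
import numpy as np

def subtract_background(positions, readings, grid_origin, grid_step, b_bg):
    """Subtract a gridded background field via nearest-cell lookup.

    positions   : (N, 3) measurement positions in meters
    readings    : (N,)  total-field readings in nT
    grid_origin : (3,)  origin of the background grid
    grid_step   : float grid spacing in meters
    b_bg        : (Nx, Ny, Nz) pre-mapped background field in nT
    """
    idx = np.round((positions - grid_origin) / grid_step).astype(int)
    idx = np.clip(idx, 0, np.array(b_bg.shape) - 1)
    background = b_bg[idx[:, 0], idx[:, 1], idx[:, 2]]
    return readings - background

# Toy example: a uniform 50,000 nT background and one reading
# containing a 12 nT vehicle anomaly.
b_bg = np.full((4, 4, 4), 50_000.0)
pos = np.array([[1.0, 1.0, 1.0]])
anomaly = subtract_background(pos, np.array([50_012.0]),
                              np.zeros(3), 1.0, b_bg)
```

A real survey would interpolate between grid cells rather than snapping to the nearest one, but the subtraction step is the same.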
This paper discusses the differences observed in scene contrast between the mid-wave infrared (MWIR) and long-wave infrared (LWIR) bands, providing a scene contrast characterization for emissive infrared applications. Radiometrically calibrated imagery is acquired with MWIR and LWIR cameras in various environments, and the measured MWIR and LWIR scene contrasts are compared. The radiometric comparison is performed in terms of the standard deviation of the scene equivalent blackbody temperature. Comparisons are provided under different conditions, such as rural versus urban and day versus night, giving the infrared system designer the means to perform detailed engineering trades.
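The contrast metric named in the abstract, the standard deviation of the scene equivalent blackbody temperature, is straightforward to compute once imagery has been radiometrically converted to per-pixel equivalent temperatures; the small arrays below are hypothetical stand-ins for calibrated MWIR and LWIR frames.

```python
import numpy as np

def scene_contrast(temps_kelvin):
    """Scene contrast as the standard deviation of the per-pixel
    equivalent blackbody temperature (kelvin)."""
    return float(np.std(temps_kelvin))

# Hypothetical 2x2 equivalent-temperature frames (K) for illustration.
mwir = np.array([[300.0, 302.0],
                 [298.0, 300.0]])
lwir = np.array([[300.0, 301.0],
                 [299.0, 300.0]])

mwir_contrast = scene_contrast(mwir)
lwir_contrast = scene_contrast(lwir)
```

The comparison in the paper is then a matter of tabulating this statistic per band across the rural/urban and day/night conditions.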
Different types of 3D sensors, such as LiDAR and RGB-D cameras, capture data with differing resolution, range, and noise characteristics. It is often desirable to merge these different types of data into a coherent scene, but automatic alignment algorithms generally assume that the characteristics of all fragments are similar. Our goal is to evaluate the performance of these algorithms on data with different characteristics to enable the integration of data from multiple types of sensors.
We use the Redwood dataset, which contains high-resolution scans of several different environments captured with a stationary LiDAR scanner. We first develop a method to emulate the capture of these environments as viewed by different types of sensors by leveraging OpenGL and a mesh creation process. Next, we take fragments of these captures that represent scenarios in which each type of sensor would be used, using our scanning experience to inform the selection process. Finally, we attempt to merge the fragments with several automatic algorithms and evaluate how the results compare with the original scenes. We evaluate based on transformation similarity to ground truth, algorithm speed and ease of use, and subjective quality assessments.

We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost-effective solution to improve operating conditions in degraded visual environments.
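The "transformation similarity to ground truth" criterion can be sketched as the rotation and translation error between an estimated 4x4 alignment pose and the ground-truth pose; this is a standard formulation, not the authors' specific evaluation code, and the example poses below are hypothetical.

```python
import numpy as np

def pose_error(T_est, T_gt):
    """Rotation error (degrees) and translation error (same units as
    the poses) between two 4x4 rigid-body transforms."""
    dT = np.linalg.inv(T_gt) @ T_est          # relative transform
    R = dT[:3, :3]
    # Rotation angle from the trace of the relative rotation matrix.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))
    trans_err = np.linalg.norm(dT[:3, 3])
    return rot_err_deg, trans_err

# Hypothetical case: estimated pose offset from ground truth by 0.5 m.
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [0.3, 0.4, 0.0]
rot_err, trans_err = pose_error(T_est, T_gt)
```

Aggregating these two errors over all merged fragments gives a single, sensor-agnostic alignment score for each algorithm.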