Being able to identify defects is an essential step in manufacturing processes. Yet not all defects are necessarily known and sufficiently well described in image databases. The challenge we address in this paper is to detect any defect by fitting a model using only normal samples of industrial parts. For this purpose, we propose to test the fast AnoGAN (f-AnoGAN) approach, based on a generative adversarial network (GAN). The method is an unsupervised learning algorithm with two phases: first, we train a generative model using only normal images; second, we add and train an encoder that provides a fast mapping of new data into the latent space, from which the images are reconstructed. Anomaly detection is based on the reconstruction error between the defective samples and their reconstructions, combined with the residual error of the discriminator. For our experiments, we use two sets of industrial data: the MVTec Anomaly Detection dataset and a private thermal-wave dataset used for non-destructive testing, a technique employed in research for the evaluation of industrial materials. Applying f-AnoGAN in this domain yields high anomaly detection accuracy.
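A minimal sketch of the anomaly score described above (image reconstruction error plus a discriminator residual term), assuming PyTorch. The tiny linear modules G, E, and f below are hypothetical placeholders standing in for the trained generator, encoder, and discriminator feature extractor, and the weighting factor kappa is an assumption, not a value from the paper.

```python
import torch
import torch.nn as nn

latent_dim, img_pixels = 64, 32 * 32

# Hypothetical stand-ins for the trained networks (the real models are CNNs).
G = nn.Sequential(nn.Linear(latent_dim, img_pixels), nn.Tanh())     # generator
E = nn.Sequential(nn.Flatten(), nn.Linear(img_pixels, latent_dim))  # encoder
f = nn.Linear(img_pixels, 128)                                      # discriminator features

def anomaly_score(x: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
    """Per-sample score: image residual + discriminator feature residual."""
    x_flat = x.flatten(1)
    x_rec = G(E(x))                                        # encode, then reconstruct
    rec_err = ((x_flat - x_rec) ** 2).mean(dim=1)          # reconstruction error
    feat_err = ((f(x_flat) - f(x_rec)) ** 2).mean(dim=1)   # discriminator residual
    return rec_err + kappa * feat_err

# Usage: higher scores indicate likely defects.
with torch.no_grad():
    scores = anomaly_score(torch.rand(8, 1, 32, 32))
```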
Compared to frame-based visual streams, event-driven visual streams require very little bandwidth and offer high temporal resolution, making them an interesting choice for embedded object recognition. Such visual systems have been shown to outperform standard cameras in several respects, but they have not yet been studied in the context of homing guidance for projectiles, which imposes drastic navigation constraints. This work starts from a first interaction model between a standard camera and an event camera, validated in the context of unattended ground sensors and situational-awareness applications from a static position. In this paper, we propose to extend this first interaction model with higher-level activity analysis and object recognition from a moving position. The proposed event-based terminal guidance system is studied first through a target laser-designation scenario and optical-flow computation to validate the guidance parameters. Real-time embedded processing techniques are evaluated, preparing the design of a future demonstrator of a very fast navigation system. First results have been obtained on embedded Linux architectures with multi-threaded feature extraction; this paper presents and discusses these results.
A new and challenging vision system has recently gained prominence and proven its capabilities compared to traditional imagers: the paradigm of event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as frame-based cameras do, spike sensors or event cameras report the location and sign of per-pixel brightness changes. Although the currently available spatial resolutions are quite low (640x480 pixels), the real interest of these event cameras lies in their very high temporal resolution (on the order of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing requirements are also quite low compared to conventional cameras, which is of particular interest for embedded applications, especially situational awareness. The main goal of this project is to detect and track activity zones from the spike event stream and to notify the standard imager where the activity takes place. Automated situational awareness is thus enabled by analyzing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: the spatial resolution of the standard camera and the temporal resolution of the event camera. An opto-mechanical demonstrator has been designed that integrates both cameras in a compact visual system with embedded software processing, opening the prospect of autonomous remote sensing. Several field experiments demonstrate the performance and the interest of such an autonomous vision system, with emphasis on the ability to detect and track fast-moving objects such as fast drones. Results and performance are evaluated and discussed on these realistic scenarios.
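A minimal sketch of one simple way to turn an event stream into activity zones that could wake the frame camera, assuming events arrive as (x, y, timestamp, polarity) tuples; the grid cell size, time window, and event-count threshold are illustrative assumptions and not the demonstrator's actual detection and tracking pipeline.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480      # event-camera resolution quoted in the abstract
CELL = 32                     # activity-grid cell size in pixels (assumed)

def activity_cells(events: np.ndarray, min_events: int = 50):
    """Count events per grid cell and return (cell_x, cell_y) cells above a threshold."""
    grid = np.zeros((HEIGHT // CELL, WIDTH // CELL), dtype=np.int64)
    cx = np.clip(events[:, 0].astype(int) // CELL, 0, grid.shape[1] - 1)
    cy = np.clip(events[:, 1].astype(int) // CELL, 0, grid.shape[0] - 1)
    np.add.at(grid, (cy, cx), 1)                  # accumulate events per cell
    ys, xs = np.nonzero(grid >= min_events)
    return list(zip(xs.tolist(), ys.tolist()))    # regions of interest for the frame camera

# Usage with synthetic events clustered around pixel (400, 200):
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.normal(400, 10, 500),       # x
    rng.normal(200, 10, 500),       # y
    rng.integers(0, 10_000, 500),   # timestamp in microseconds (unused here)
    rng.integers(0, 2, 500),        # polarity (unused here)
])
print(activity_cells(events))       # prints the grid cells covering the cluster
```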