In this paper, we use a convolutional autoencoder to predict multiple unseen views of an object in the infrared domain. We use the DSIAC-ATR image database, which has not previously been used for view prediction in a non-linear feature subspace. Our method exploits the underlying feature subspace, the manifold of the object, to predict an unseen view. We address the more challenging task of view prediction from greyscale images: infrared images collected both during the day and at night. We propose multiple architectures that not only predict how an object (here, a military vehicle) will look at a given orientation but also learn to produce either a day or a night infrared image on request. We train our networks and show experimentally that the weights learn the geometry of the transformation not in Euclidean space but in Riemannian space. We explore the underlying feature subspace and observe that the networks learn the manifolds and thereby produce sharp, distinct, and natural-looking images.
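To make the idea concrete, the following is a minimal sketch (not the authors' released code) of a conditional convolutional autoencoder for view prediction. The 64x64 single-channel input size, the conditioning vector (sine/cosine of the target azimuth plus a day/night flag), and all layer sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ViewPredictionAutoencoder(nn.Module):
    """Sketch: encode one infrared view, decode the requested view."""

    def __init__(self, cond_dim: int = 3, latent_dim: int = 128):
        super().__init__()
        # Encoder: compress the input view to a latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: reconstruct the requested view from latent code + condition.
        self.decoder_fc = nn.Linear(latent_dim + cond_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        h = self.decoder_fc(torch.cat([z, cond], dim=1))
        return self.decoder(h.view(-1, 128, 8, 8))

# Example: request the view at a new azimuth as a night-time image.
model = ViewPredictionAutoencoder()
x = torch.rand(1, 1, 64, 64)             # input infrared view
cond = torch.tensor([[0.5, 0.87, 1.0]])  # (cos az, sin az, night flag) - assumed encoding
predicted_view = model(x, cond)          # predicted 1x1x64x64 view
```

Trained with a pixel-wise reconstruction loss against the true target view, such a conditioned decoder would have to organize its latent space around the object's pose manifold rather than memorizing individual images, which is the behavior the paper investigates.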