Metal objects in the field of view cause artifacts in the image, which manifest as dark and bright streaks and degrade the diagnostic value of the image. Standard approaches for metal artifact reduction often either fail to correct these artifacts sufficiently or introduce new artifacts. We propose a new data-based method that reduces metal artifacts in CT images by applying conditional Generative Adversarial Networks to the corrupted data. A generator network is applied directly to the projections corrupted by the metal objects in order to learn the corrected sinogram data. Further, two discriminator networks are used to evaluate the image quality of the enhanced data produced by the generator. The method was initially developed as a supervised approach. However, training the networks in this way requires artifact-free ground truth, which is usually not available for actual clinical data. Therefore, the method was further developed to train the network in an unsupervised manner, i.e., without ground truth. In addition to the input data, the neighboring slices and the stochastic components of the image are included via the latent space representation of the data. The results show that the trained generator network can reasonably replace the missing projection data and reduce the artifacts in the reconstructed image.
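The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of a conditional-GAN setup of this kind: a generator acting on corrupted sinogram data, trained against a discriminator with an adversarial plus data-fidelity loss. All class names, layer choices, channel counts, and the L1 weight are illustrative assumptions rather than the paper's actual implementation; the second discriminator and the unsupervised training variant are omitted for brevity.

```python
import torch
import torch.nn as nn

class SinogramGenerator(nn.Module):
    """Hypothetical generator operating on corrupted sinogram patches.
    in_channels > 1 so the corrupted slice and its neighboring slices can be
    stacked as input channels (an assumption, not the paper's exact layout)."""
    def __init__(self, in_channels=3, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Hypothetical patch-wise discriminator; the paper uses two such networks
    to judge the quality of the enhanced data."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, corrupted, reference):
    """One adversarial training step for the supervised variant,
    assuming paired (corrupted, artifact-free) sinograms are available."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator update: real reference vs. generated sinogram.
    opt_d.zero_grad()
    fake = gen(corrupted).detach()
    d_real, d_fake = disc(reference), disc(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator update: adversarial term plus L1 data-fidelity term
    # (the 100.0 weight is a common pix2pix-style choice, not from the paper).
    opt_g.zero_grad()
    fake = gen(corrupted)
    d_fake = disc(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, reference)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In practice the generator output would replace only the metal-corrupted projection values before filtered back-projection reconstructs the artifact-reduced image.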