The recent advent of clinical data warehouses (CDWs) has facilitated the sharing of very large volumes of medical data for research purposes. MRIs can be affected by various artefacts, such as motion, noise or poor contrast, that can severely degrade the overall quality of an image. In CDWs, a large number of MRIs are unusable because they are corrupted by these diverse artefacts. Given the huge number of MRIs present in CDWs, manually detecting these artefacts is impractical. It is therefore necessary to develop an automated tool that can efficiently identify and exclude corrupted images. We previously proposed an approach for the detection of motion artefacts in 3D T1-weighted brain MRIs. In this paper, we extend our work to two other types of artefacts: poor contrast and noise. We rely on a transfer learning approach that leverages synthetic artefact generation and comprises two steps: pre-training of the models on research data using synthetic artefacts, followed by a fine-tuning step in which the pre-trained models are generalised to clinical routine data using the manual labelling of 5000 images. The objectives of our study were two-fold: to exclude images with severe artefacts and to detect moderate artefacts. Our approach excelled at the first objective, achieving a balanced accuracy of over 84% for the detection of severe noise and very poor contrast, closely matching the performance of human annotators. Performance on the second objective, however, was less satisfactory and inferior to that of the human annotators. Overall, our framework will be useful for taking full advantage of the MRIs present in CDWs.
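The abstract does not detail how the synthetic artefacts are generated. Below is a minimal sketch of how such a pre-training corpus could be built with the TorchIO library, whose RandomNoise and RandomGamma transforms are commonly used to simulate noise and contrast degradation; the parameter ranges, the `t1_paths` variable and the 50% corruption rate are illustrative assumptions, not the authors' settings.

```python
import random
import torch
import torchio as tio

# Illustrative synthetic-artefact transforms (TorchIO); ranges are assumptions.
NOISE = tio.RandomNoise(std=(0.05, 0.15))              # additive Gaussian noise
LOW_CONTRAST = tio.RandomGamma(log_gamma=(-0.8, 0.8))  # gamma shift flattens contrast

def make_sample(path: str) -> tuple[torch.Tensor, int]:
    """Return (volume, label); label is 1 if a synthetic artefact was applied."""
    image = tio.ScalarImage(path)          # load one 3D T1-weighted volume
    if random.random() < 0.5:              # keep roughly half of the images clean
        return image.data.float(), 0
    transform = random.choice([NOISE, LOW_CONTRAST])  # pick one artefact type
    return transform(image).data.float(), 1

# Hypothetical list of research-grade T1w volumes used for pre-training.
# samples = [make_sample(p) for p in t1_paths]
```

Because the artefact labels come for free from the simulation, a large pre-training set can be produced without any manual annotation, which is the point of the first step.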
Clinical data warehouses (CDWs) contain the medical data of millions of patients and represent a great opportunity to develop computational tools. MRIs are particularly sensitive to patient movement during image acquisition, which results in artefacts (blurring, ghosting and ringing) in the reconstructed image. As a result, a significant number of MRIs in CDWs are unusable because they are corrupted by these artefacts. Since manual detection is impossible given the number of scans, an automatic tool for excluding images with motion artefacts is necessary to fully exploit CDWs. In this paper, we propose a CNN for the automatic detection of motion artefacts in 3D T1-weighted brain MRI. Our transfer learning approach, based on synthetic motion generation, consists of two steps: pre-training on research data using synthetic motion, followed by a fine-tuning step that generalises the pre-trained model to clinical data using the manual labelling of 5500 images. The objectives were (1) to exclude images with severe motion and (2) to detect mild motion artefacts. Our approach achieved excellent results for the first objective, with a balanced accuracy (>80%) nearly equal to that of the annotators. However, for the second objective, the performance was weaker and substantially lower than that of human raters. Overall, our framework will be useful for taking advantage of CDWs in medical imaging, and it highlights the importance of clinically validating models trained on research data.
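The second, fine-tuning step could look like the sketch below: the CNN pre-trained on synthetic motion is loaded, its convolutional backbone is frozen, and only the classification head is retrained on the manually labelled clinical images. `MotionCNN`, its `features`/`classifier` attributes, the checkpoint name and `clinical_loader` are all hypothetical placeholders; the paper does not specify the architecture or whether layers were frozen.

```python
import torch
import torch.nn as nn

# Hypothetical 3D CNN pre-trained on research data with synthetic motion.
model = MotionCNN()
model.load_state_dict(torch.load("pretrained_synthetic_motion.pt"))

for param in model.features.parameters():  # freeze the convolutional backbone
    param.requires_grad = False

# Only the classification head is updated on the labelled clinical images.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for volumes, labels in clinical_loader:    # manually labelled clinical MRIs
    optimizer.zero_grad()
    loss = criterion(model(volumes), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone is one common way to keep the artefact features learned from simulation while adapting the decision boundary to clinical data; full fine-tuning of all layers is an equally plausible reading of the abstract.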
KEYWORDS: Image quality, Image segmentation, Brain, Magnetic resonance imaging, Neuroimaging, Data modeling, 3D modeling, Tissues, Gadolinium, 3D image enhancement
Clinical data warehouses provide access to massive amounts of medical images and thus offer unprecedented opportunities for research. However, they also pose important challenges, a major one being their heterogeneity. In particular, they contain patients with numerous different diseases. The exploration of some neurological diseases with magnetic resonance imaging (MRI) requires injecting a gadolinium-based contrast agent (for instance to detect tumors or other contrast-enhancing lesions), while other diseases do not require such injection. Image harmonization is a key factor in enabling unbiased differential diagnosis in such a context. Additionally, classical neuroimaging software tools that extract features used as inputs of classification algorithms are typically applied only to images without gadolinium. The objective of this work is to homogenize images from a clinical data warehouse and enable the extraction of consistent features from brain MR images, regardless of the initial presence or absence of gadolinium. We propose a deep learning approach based on a 3D U-Net to translate contrast-enhanced into non-contrast-enhanced T1-weighted brain MRI. The approach was trained/validated using 230 image pairs and tested on 26 image pairs of good quality and 51 image pairs of low quality from the data warehouse of the hospitals of the Greater Paris area (Assistance Publique-Hôpitaux de Paris [AP-HP]). We tested two 3D U-Net architectures, with the addition of either residual connections or attention mechanisms. The U-Net with attention mechanisms reached the best image similarity metrics and was further validated on a segmentation task. We showed that features extracted from the synthetic images (gray matter, white matter and cerebrospinal fluid volumes) were closer to those obtained from the non-contrast-enhanced T1-weighted brain MRI (considered as the reference) than to those from the original contrast-enhanced images.
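The abstract says only that attention mechanisms were added to the U-Net, without specifying their form. A common choice for U-Nets is the additive attention gate on the skip connections (in the style of Oktay et al.'s Attention U-Net); the sketch below shows one such gate as an assumption about what "attention mechanisms" could mean here, not as the authors' exact design.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Additive attention gate for a 3D U-Net skip connection: the encoder
    features x are re-weighted using the coarser decoder signal g."""

    def __init__(self, in_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta_x = nn.Conv3d(in_ch, inter_ch, kernel_size=1)   # project x
        self.phi_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)   # project g
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)           # attention map

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # g is assumed to be upsampled to x's spatial size before the gate.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # suppress encoder activations irrelevant to the decoder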