Deep learning neural networks are a common tool in medical imaging and are frequently used to solve a variety of complex problems. Magnetic resonance (MR) images are often employed to develop these networks because of their high spatial resolution and user-selectable image contrast between tissues. More advanced deep learning models are being developed, which, when combined with improvements in MR image acquisition techniques, will enable more efficient image analysis capable of solving increasingly challenging problems. A significant disadvantage of current deep learning networks is that they are extremely sensitive to the distribution of the data used for training; as a result, network deployment can be challenging in clinical applications with heterogeneous images. The main problem is that, in a clinical environment, the data distributions of target datasets can vary from subject to subject due to differences in scanner vendor, magnetic field strength, and the settings of specific MR acquisition parameters. These variations create inherent scan-to-scan variability that shifts the data distributions of different datasets, which can degrade model accuracy and produce undesirable outcomes. Thus, to improve model generalizability, we explored a supervised domain-adaptation approach. To test this method, we created a convolutional neural network model that performed a classification task and was composed of three components: (1) a feature extractor, (2) a pathology classifier, and (3) a domain classifier. In a single, unified training process, the pathology classifier was trained by minimizing the pathology loss function while the domain branch was trained adversarially by maximizing the domain loss function. This procedure penalizes the learning of domain-specific features and thus encourages a domain-invariant feature vector. The performance of this domain-adapted model was compared to that of the same model without domain classification (i.e., a traditional baseline model consisting of a feature extractor and a pathology classifier). We found that the domain-adapted model achieved higher accuracy in classifying images from both the source and target datasets.
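
The sketch below illustrates one common way to realize the three-component adversarial scheme described above: a feature extractor shared by a pathology head and a domain head, where a gradient-reversal layer makes minimizing the summed loss equivalent to maximizing the domain loss with respect to the extractor. The layer sizes, the gradient-reversal mechanism, and all hyperparameters here are illustrative assumptions, not the exact implementation used in this work.

```python
# Minimal sketch of supervised domain-adversarial training (assumed architecture).
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on the backward
    pass, so the feature extractor is pushed to maximize the domain loss."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class DomainAdaptedClassifier(nn.Module):
    def __init__(self, n_pathology_classes=2, n_domains=2, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        # (1) feature extractor: small convolutional backbone (illustrative)
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # (2) pathology classifier head
        self.pathology_head = nn.Linear(32, n_pathology_classes)
        # (3) domain classifier head, fed through the gradient-reversal layer
        self.domain_head = nn.Linear(32, n_domains)

    def forward(self, x):
        f = self.features(x)
        pathology_logits = self.pathology_head(f)
        domain_logits = self.domain_head(GradientReversal.apply(f, self.lambda_))
        return pathology_logits, domain_logits

# Single, unified training step: both losses are summed and minimized, but the
# reversed gradients penalize domain-specific features in the extractor.
model = DomainAdaptedClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)            # dummy MR slices
pathology_labels = torch.randint(0, 2, (8,))  # dummy pathology labels
domain_labels = torch.randint(0, 2, (8,))     # dummy domain (scanner) labels

pathology_logits, domain_logits = model(images)
loss = criterion(pathology_logits, pathology_labels) + \
       criterion(domain_logits, domain_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```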