Automated analysis of chest imaging in coronavirus disease (COVID-19) has mostly been performed on small datasets, leading to overfitting and poor generalizability. Training deep neural networks on large datasets requires data labels, which are not always available and can be expensive to obtain. Self-supervision is increasingly used in medical imaging tasks to leverage large amounts of unlabeled data during pretraining. Our proposed approach pretrains a vision transformer on a chest X-ray (CXR) dataset using two self-supervision tasks, image reconstruction and contrastive learning, producing more robust image embeddings. The reconstruction module models visual semantics within the lung fields by reconstructing the input image through a mechanism that mimics denoising and autoencoding, while the contrastive learning module learns the concept of similarity between two texture representations. After pretraining, the vision transformer is used as a feature extractor for a clinical outcome prediction task on our target dataset. The pretraining multi-Kaggle dataset comprises 27,499 CXR scans, while our target dataset contains 530 images. Specifically, our framework predicts ventilation and mortality outcomes for COVID-19-positive patients from baseline CXR. We compare our method against a baseline approach using pretrained ResNet50 features. Experimental results demonstrate that our proposed approach outperforms the supervised baseline.
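The abstract does not specify the architecture or training details, so the following is only a minimal sketch of the two-task pretraining idea it describes: a small vision transformer trained jointly with a patch-wise reconstruction loss on a corrupted (noisy) view and an InfoNCE-style contrastive loss between two augmented views. The framework (PyTorch), module names (SSLViT, PatchEmbed, info_nce, patchify), network sizes, augmentations, and loss weighting are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: joint denoising-reconstruction + contrastive
# pretraining of a small ViT on single-channel CXR images. All sizes and
# names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Split a 1-channel CXR into non-overlapping patches and embed them."""
    def __init__(self, img_size=224, patch=16, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.num_patches = (img_size // patch) ** 2

    def forward(self, x):                      # x: (B, 1, H, W)
        x = self.proj(x)                       # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim)

class SSLViT(nn.Module):
    """ViT encoder with a reconstruction head and a contrastive projection head."""
    def __init__(self, dim=256, depth=6, heads=8, patch=16):
        super().__init__()
        self.embed = PatchEmbed(dim=dim, patch=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.embed.num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.recon_head = nn.Linear(dim, patch * patch)       # pixels per patch
        self.proj_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                       nn.Linear(dim, 128))

    def forward(self, x):
        tokens = self.encoder(self.embed(x) + self.pos)        # (B, N, dim)
        recon = self.recon_head(tokens)                        # per-patch pixels
        z = F.normalize(self.proj_head(tokens.mean(1)), dim=-1)  # global embedding
        return recon, z

def info_nce(z1, z2, tau=0.1):
    """SimCLR-style contrastive loss: matching indices across views are positives."""
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def patchify(x, patch=16):
    """Ground-truth pixel patches laid out to match recon_head's output."""
    B, C, H, W = x.shape
    x = x.unfold(2, patch, patch).unfold(3, patch, patch)      # (B,C,H/p,W/p,p,p)
    return x.reshape(B, -1, patch * patch)

model = SSLViT()
x = torch.rand(4, 1, 224, 224)                 # toy batch standing in for CXRs
view1 = x + 0.1 * torch.randn_like(x)          # noisy view; clean x is the target
view2 = torch.flip(x, dims=[-1])               # second augmented view

recon1, z1 = model(view1)
_, z2 = model(view2)
loss = F.mse_loss(recon1, patchify(x)) + info_nce(z1, z2)      # joint objective
loss.backward()
```

After pretraining along these lines, the frozen encoder's global embeddings would be extracted for each baseline CXR and fed to a downstream classifier for the ventilation and mortality prediction tasks, analogous to how the ResNet50 baseline features are used in the comparison.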