Spinal degeneration and deformity present an enormous healthcare burden, with spine surgery among the main treatment modalities. Unfortunately, spine surgery (e.g., lumbar fusion) exhibits broad variability in outcome quality, with ~20-40% of patients gaining no benefit in pain or function ("failed back surgery"), drawing criticism that is difficult to reconcile with the rapid growth in its frequency and cost over the last decade. Vital to advancing the quality of care in spine surgery are improved clinical decision support (CDS) tools that are accurate, explainable, and actionable: accurate in prediction of outcomes; explainable in terms of the physical / physiological factors underlying the prediction; and actionable within the shared decision process between surgeon and patient in identifying steps that could improve outcome. This technical note presents an overview of a novel outcome prediction framework for spine surgery (dubbed SpineCloud) that leverages innovative image analytics in combination with explainable prediction models to achieve accurate outcome prediction. Key to the SpineCloud framework are image analysis methods for extraction of high-level quantitative features from multi-modality peri-operative images (CT, MR, and radiography) related to spinal morphology (including bone and soft-tissue features), the surgical construct (including deviation from an ideal reference), and longitudinal change in such features. The inclusion of such image-based features is hypothesized to boost the predictive power of models that conventionally rely on demographic / clinical data alone (e.g., age, gender, BMI, etc.).
Preliminary results using gradient-boosted decision trees demonstrate that such prediction models are explainable (i.e., showing why a particular prediction is made), actionable (identifying features that may be addressed by the surgeon and/or patient), and more accurate than analysis based on demographics alone (e.g., AUC improved by ~25% in preliminary studies). Incorporation of such CDS tools in spine surgery could fundamentally alter and improve the shared decision-making process between surgeons and patients by highlighting actionable features to improve selection of therapeutic and rehabilitative pathways.
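As a hedged illustration of the modeling approach described above (not the SpineCloud pipeline itself), the sketch below trains a gradient-boosted decision tree classifier on synthetic data combining demographic features with hypothetical image-derived features, then reports AUC and ranked feature importances as a simple form of explanation. All feature names, coefficients, and data are invented for illustration.

```python
# Illustrative sketch only: gradient-boosted outcome prediction with
# per-feature explanations. Features and data are synthetic, NOT the
# SpineCloud dataset or its actual feature set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Two demographic features plus two hypothetical image-derived features.
X = np.column_stack([
    rng.normal(60, 10, n),    # age (years)
    rng.normal(28, 5, n),     # BMI
    rng.normal(15, 8, n),     # sagittal alignment deviation (deg), image-derived
    rng.normal(0.8, 0.2, n),  # muscle quality index, image-derived
])
# Synthetic outcome driven mainly by the image-derived features.
logit = 0.15 * (X[:, 2] - 15) - 2.0 * (X[:, 3] - 0.8)
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

names = ["age", "bmi", "alignment_dev", "muscle_quality"]
ranked = sorted(zip(names, model.feature_importances_), key=lambda t: -t[1])
print(f"AUC = {auc:.2f}")
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```

In this toy setup the tree ensemble's feature importances recover the image-derived features as the dominant predictors, mirroring the kind of explanation (which features drove the prediction, and which are actionable) described in the text.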
Motivation/Purpose: This work reports the development and validation of an algorithm to automatically detect and localize vertebrae in CT images of patients undergoing spine surgery. Slice-by-slice detections from state-of-the-art 2D convolutional neural network (CNN) architectures were combined to estimate vertebra centroid location in 3D, including a method that combined detections in sagittal and coronal slices. The solution facilitates applications in image-guided surgery and automatic computation of image analytics for surgical data science. Methods: CNN-based object detection models in 3D (volume) and 2D (slice) images were implemented and evaluated for the task of vertebra detection. Slice-by-slice detections from 2D architectures were combined to estimate the 3D centroid location, including a model that simultaneously evaluated 2D detections in orthogonal directions (i.e., sagittal and coronal slices) to improve robustness against spurious false detections – called Ortho-2D. Performance was evaluated in a dataset of 85 patients undergoing spine surgery at our institution, including images presenting spinal instrumentation/implants, spinal deformity, and anatomical abnormalities that are realistic exemplars of pathology in the patient population. Accuracy was quantified in terms of precision, recall, F1 score, and the 3D geometric error in vertebral centroid annotation compared to ground truth (expert manual) annotation. Results: Three CNN object detection models successfully localized vertebrae, with the Ortho-2D model that combined 2D detections in orthogonal directions achieving the best performance: precision = 0.95, recall = 0.99, and F1 score = 0.97. Overall centroid localization accuracy was 3.4 mm (median) [interquartile range (IQR) = 2.7 mm], and ~97% of detections (154/159 lumbar cases) yielded acceptable centroid localization error <15 mm (considering average vertebra size ~25 mm).
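The core Ortho-2D idea can be sketched in a few lines: fuse per-slice 2D detections from sagittal views (which fix x and yield y/z centers) and coronal views (which fix y and yield x/z centers) into 3D centroids, keeping only mutually consistent pairs so that a spurious detection in one view without support in the other is discarded. The data structures, tolerance, and averaging rule below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the Ortho-2D fusion step. A sagittal detection is
# (x_slice, y_center, z_center); a coronal detection is
# (x_center, y_slice, z_center). Units are assumed consistent (e.g., mm).
import numpy as np

def fuse_ortho2d(sagittal, coronal, tol=10.0):
    """Fuse orthogonal 2D detections into 3D centroids.

    Only pairs that agree in all three coordinates within `tol` are kept,
    suppressing spurious single-view false detections.
    """
    fused = []
    for xs, ys, zs in sagittal:
        for xc, yc, zc in coronal:
            if abs(xs - xc) < tol and abs(ys - yc) < tol and abs(zs - zc) < tol:
                # Average the two views' estimates of each coordinate.
                fused.append(((xs + xc) / 2, (ys + yc) / 2, (zs + zc) / 2))
    return np.array(fused)

# Two true vertebrae seen in both views, plus one spurious sagittal-only hit.
sag = [(50, 100, 40), (50, 100, 70), (120, 30, 200)]  # last entry is spurious
cor = [(52, 98, 41), (49, 102, 69)]
centroids = fuse_ortho2d(sag, cor)
print(centroids)  # the spurious detection has no coronal match and drops out
```

In this toy example the spurious sagittal detection is rejected because no coronal detection agrees with it, which is the mechanism the abstract credits for Ortho-2D's improved robustness.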
Conclusions: State-of-the-art CNN architectures were adapted for vertebral centroid annotation, yielding accurate and robust localization even in the presence of anatomical abnormalities, image artifacts, and dense instrumentation. The methods serve as a basis for streamlined image guidance (automatic initialization of 3D-2D and 3D-3D registration methods in image-guided surgery) and as an automatic spine labeling tool to generate image analytics.