Cochlear implants (CIs) are neural prosthetics that can improve hearing in patients with severe-to-profound hearing loss. CIs induce hearing sensation by stimulating auditory nerve fibers (ANFs) with an electrode array that is surgically implanted into the cochlea. After the device is implanted, an audiologist programs the CI processor to optimize hearing performance. However, without knowing which ANFs are stimulated by each electrode, audiologists must rely solely on patient performance to inform programming adjustments. Patient-specific neural stimulation modeling has been proposed to provide objective information to assist audiologists with programming, but this approach requires accurate localization of ANFs in patient CT images. In this paper, we propose an automatic neural-network-based method for atlas-based localization of the ANFs. Our results show that the method produces smooth ANF predictions that are more realistic than those produced by a previously proposed semi-manual localization method. Accurate and realistic ANF localizations are critical for constructing patient-specific ANF stimulation models for model-guided CI programming.
Cochlear implants (CIs) are considered the standard treatment for patients with sensorineural hearing loss. During surgery, an electrode array is inserted into the cochlea to directly stimulate auditory nerve fibers (ANFs). Although CI devices have been remarkably successful at restoring audibility, the neural interface is unknown to audiologists, and patients must undergo programming sessions that are frustratingly long and often lead to suboptimal results. Our group developed a high-resolution computational model to simulate the neural response triggered by CIs. However, the semi-automatic ANF segmentation approach we used for that model relies heavily on manual adjustment, and the central axons of those ANFs may erroneously pass through bone because only one set of landmarks is placed in the modiolus. In this work, we introduce a fully automatic ANF segmentation method. The peripheral and central axons of each ANF are estimated individually based on five sets of automatically generated landmarks. The fast marching method is used to find geodesic paths for the peripheral axons between the surfaces of the scala tympani (ST) and scala vestibuli (SV) meshes. Cylindrical coordinate systems are constructed from the landmarks and used to smoothly interpolate trajectories for the spiral central axons. Experiments show that our proposed method outperforms the original method, producing 0 overlapping ANFs and 0 ANFs passing through bone, and reducing the number of ANFs that pass through the ST or SV by 36.1%.
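The geodesic-path idea behind the fast marching step can be illustrated with a simplified 2D stand-in. The sketch below is not the authors' 3D mesh-based implementation: it uses a toy speed map and a Dijkstra-style front propagation (a common discrete approximation of fast marching) to find a shortest travel-time path that routes around an impassable region, analogous to keeping axon trajectories out of bone. The grid, speed values, and endpoints are all illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_path(speed, start, goal):
    """Approximate a geodesic (minimum travel-time path) on a 2D grid.

    Dijkstra's algorithm on 4-connected grid cells serves as a simple
    stand-in for the fast marching method: both propagate a front whose
    arrival time grows inversely with the local speed."""
    h, w = speed.shape
    time = np.full((h, w), np.inf)
    prev = {}
    time[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        t, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if t > time[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and speed[nr, nc] > 0:
                nt = t + 1.0 / speed[nr, nc]  # time to cross one cell
                if nt < time[nr, nc]:
                    time[nr, nc] = nt
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nt, (nr, nc)))
    # Backtrack from goal to start to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy example: a zero-speed barrier (the "bone" analogue) forces a detour.
speed = np.ones((5, 5))
speed[1:4, 2] = 0.0  # impassable column
path = geodesic_path(speed, (2, 0), (2, 4))
print(path)
```

The recovered path connects the two endpoints while visiting only cells with positive speed, which is the property the anatomical constraint requires.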
The standard-of-care treatment to restore sound perception for individuals with severe-to-profound sensorineural hearing loss is the cochlear implant (CI), a small, surgically inserted electronic device that bypasses most of the mechanism of unaided acoustic hearing to directly stimulate auditory nerve fibers (ANFs). Although many individuals experience success with these devices, a significant portion of recipients receive only marginal benefit. Biophysical models of ANFs have been developed that could be used in an image-guided treatment pipeline for patient-customized CI interventions. However, because neuron properties are difficult to determine in humans, existing models rely on parameters derived from animal studies that were subsequently adapted to human models. Additionally, it is well established that individual neurons of a single type can be non-homogeneous. In this research, we present a sensitivity analysis of a set of parameters used in one existing fiber model to (1) establish the influence of these parameters on predicted neural activity and (2) explore whether incorporating these properties as patient-specific tunable parameters in a neural health optimization algorithm can produce a more comprehensive picture of ANF health when used in an image-guided treatment pipeline.
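The general shape of such a sensitivity analysis can be sketched with a one-at-a-time (OAT) perturbation scheme. Everything below is a hypothetical illustration: `fiber_threshold` is a made-up stand-in for a biophysical ANF model's predicted activation threshold, and the parameter names and baseline values are assumptions, not values from the paper.

```python
import numpy as np

def fiber_threshold(params):
    """Hypothetical stand-in for an ANF model's predicted activation
    threshold; a real analysis would run the biophysical fiber model."""
    d, g_na, c_m = params["diameter"], params["g_na"], params["c_m"]
    return 1.0 / (d * g_na) + 0.5 * c_m

def oat_sensitivity(model, baseline, rel_step=0.05):
    """One-at-a-time sensitivity: perturb each parameter by rel_step and
    report the normalized (elasticity-style) change in model output."""
    y0 = model(baseline)
    sens = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + rel_step)
        sens[name] = ((model(perturbed) - y0) / y0) / rel_step
    return sens

baseline = {"diameter": 2.0, "g_na": 1.2, "c_m": 0.9}
sens = oat_sensitivity(fiber_threshold, baseline)
print(sens)
```

In this toy model, increasing fiber diameter or sodium conductance lowers the threshold (negative sensitivity) while increasing membrane capacitance raises it, which is the kind of directional information such an analysis surfaces.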
KEYWORDS: Equipment, 3D modeling, Image segmentation, Image restoration, Education and training, Cameras, 3D mask effects, 3D image reconstruction, Surgery, 3D image processing
Surgical instrument tracking is an active research area that can provide surgeons with feedback about the location of their tools relative to anatomy. Recent tracking methods mainly fall into two categories: segmentation and object detection. However, both predict only 2D information, which limits their application to real-world surgery. An accurate 3D surgical instrument model is a prerequisite for precise predictions of the pose and depth of the instrument. Recent single-view 3D reconstruction methods have only been applied to natural objects and do not achieve satisfactory reconstruction accuracy without 3D attribute-level supervision. Further, those methods are not well suited to surgical instruments because of their elongated shapes. In this paper, we propose an end-to-end surgical instrument reconstruction system, Self-supervised Surgical Instrument Reconstruction (SSIR). Within SSIR, we propose a multi-cycle-consistency strategy that helps capture texture information from a slim instrument while requiring only a binary instrument label map. Experiments demonstrate that our approach improves the reconstruction quality of surgical instruments compared to other self-supervised methods and achieves promising results.
Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensorineural hearing loss. After CI surgery, an audiologist adjusts the CI processor settings to improve the recipient's overall hearing performance. However, this programming procedure can be long and may lead to suboptimal outcomes due to the lack of objective information. In previous research, our group developed methods that use patient-specific electrical characteristics to simulate the activation pattern of auditory nerves when they are stimulated by CI electrodes. However, estimating those electrical characteristics requires extensive computation time and resources. In this paper, we propose a deep-learning-based method to coarsely estimate the patient-specific electrical characteristics using a cycle-consistent network architecture. These estimates can then be further refined using a conventional search strategy over a limited range. Our network is trained on a dataset generated by solving physics-based models. The results show that our proposed method generates high-quality predictions that can be used in the patient-specific model and greatly improves the speed of model construction.
Cochlear implants (CIs) use an array of electrodes implanted in the cochlea to directly stimulate the auditory nerve. After surgery, CI recipients undergo many programming sessions with an audiologist who adjusts CI processor settings to improve performance. However, few tools exist to help audiologists know which settings will lead to better performance. To provide objective information to the audiologist for programming, our group has developed a system for estimating which auditory neural sites are stimulated by which CI electrodes. To do this, we have proposed physics-based models to calculate the electric field generated in the cochlea by electrical stimulation. However, solving these models requires days of computation time and substantial computational resources. In this paper, we propose a deep-learning-based method to estimate the patient-specific electric fields using a U-Net-like architecture with a physics-based loss function. Our network is trained on a dataset generated by solving physics-based models, and the results show that the proposed method achieves accuracy similar to the traditional method while greatly improving the speed of estimating the intra-cochlear electric field.
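One common way a physics-based loss of this kind is constructed is by penalizing the residual of the governing equation on the network's output. The sketch below is an assumption about the general technique, not the paper's actual loss: it adds a discrete Laplace-equation residual (the electrostatic potential in a source-free, homogeneous region is harmonic) to an ordinary data-fidelity term, using NumPy in place of a deep learning framework.

```python
import numpy as np

def laplacian_residual(phi):
    """Five-point-stencil residual of Laplace's equation on the interior
    of a 2D potential grid; exactly zero for a harmonic field."""
    interior = phi[1:-1, 1:-1]
    return (phi[:-2, 1:-1] + phi[2:, 1:-1] +
            phi[1:-1, :-2] + phi[1:-1, 2:] - 4 * interior)

def physics_loss(phi_pred, phi_true, weight=0.1):
    """Data-fidelity MSE plus a physics penalty that pushes the predicted
    potential toward satisfying Laplace's equation."""
    data = np.mean((phi_pred - phi_true) ** 2)
    physics = np.mean(laplacian_residual(phi_pred) ** 2)
    return data + weight * physics

# A potential that is linear in x is harmonic, so the physics term vanishes.
phi = np.tile(np.arange(8.0), (8, 1))
print(physics_loss(phi, phi))  # → 0.0
```

During training, the same residual would be computed on the network's predicted field so that gradients penalize physically implausible outputs even where reference data are sparse.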
Cochlear implants (CIs) use an array of electrodes implanted in the cochlea to directly stimulate the auditory nerve. After surgery, CI recipients undergo many programming sessions with an audiologist who adjusts CI processor settings to improve performance. However, few tools exist to help audiologists know which settings will lead to better performance. In this paper, we propose a new method to assist audiologists by determining a customized firing order for the electrodes on the array using image-based models of patient-specific neural stimulation patterns. Our models permit estimating the time delay needed after firing an electrode so that the nerve fibers it stimulates can recover from the refractory period. These predictions allow us to design an optimization algorithm that determines a customized electrode firing order minimizing the negative effects of overlapping stimulation between electrodes. The customized order reduces how often a subsequent electrode in the sequence targets nerves that are still in a refractory state from a previous electrode's stimulation. Our experiments show that this method reduces theoretical stimulation overlap artifacts and could lead to improved hearing outcomes for CI recipients.
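The ordering problem described above can be sketched with a simple greedy heuristic. This is an illustrative assumption, not the paper's optimization algorithm: given a hypothetical pairwise overlap matrix (how many fibers two electrodes both stimulate), it orders electrodes so that consecutively fired electrodes share as few fibers as possible, giving stimulated fibers time to leave the refractory state.

```python
def firing_order(overlap):
    """Greedily order electrodes to minimize stimulation overlap between
    consecutively fired electrodes.

    overlap[i][j] is an assumed symmetric count of fibers that electrodes
    i and j both stimulate (zero on the diagonal)."""
    n = len(overlap)
    order = [0]  # assume electrode 0 fires first
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        # Fire next the electrode sharing the fewest fibers with the last one.
        nxt = min(remaining, key=lambda e: overlap[last][e])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy overlap matrix: physically adjacent electrodes overlap heavily.
overlap = [[0, 9, 4, 1],
           [9, 0, 8, 3],
           [4, 8, 0, 7],
           [1, 3, 7, 0]]
order = firing_order(overlap)
cost = sum(overlap[a][b] for a, b in zip(order, order[1:]))
print(order, cost)  # → [0, 3, 1, 2] 12
```

For this toy matrix the greedy order interleaves distant electrodes and accrues a consecutive-overlap cost of 12, versus 24 for the naive sequential order 0, 1, 2, 3; a full optimizer would also account for the estimated refractory recovery delays.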