Deciphering brain architecture at a system level requires the ability to quantitatively map its structure with cellular and subcellular resolution. Besides posing significant challenges to current optical microscopy methods, this ambitious goal requires the development of a new generation of tools to make sense of the huge number of raw images generated, which can easily exceed several terabytes for a single sample. We present an integrated pipeline that transforms the acquired dataset from a collection of voxel gray levels into a semantic representation of the sample. The pipeline starts with image-stitching software that computes a globally optimal alignment of the 3D tiles. The fused volume is then accessed virtually by means of a dedicated API (Application Programming Interface) and processed to extract meaningful information. We demonstrate two complementary approaches based on deep convolutional networks. In the first, a 3D conv-net is used to ‘semantically deconvolve’ the image, allowing accurate localization of neuronal bodies with standard clustering algorithms (e.g. mean shift). The scalability of this approach is demonstrated by mapping the spatial distribution of different neuronal populations in a whole mouse brain with single-cell resolution. To go beyond simple localization, we exploited a 2D conv-net that estimates, for each pixel, the probability of being part of a neuron. The output of the net is then processed with a contour-finding algorithm, yielding reliable segmentation of cell morphology. This information can be used to classify neurons, expanding the potential of chemical labeling strategies.
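The localization step described above — clustering the output of the semantic deconvolution to find soma centers — can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: the function name `localize_somata`, the threshold, and the bandwidth are assumptions, and scikit-learn's `MeanShift` is used as a generic mean-shift implementation.

```python
import numpy as np
from sklearn.cluster import MeanShift

def localize_somata(deconvolved, threshold=0.5, bandwidth=3.0):
    """Cluster above-threshold voxels of a 'semantically deconvolved'
    volume; each mean-shift cluster center is taken as one soma.

    deconvolved : 3D array of per-voxel scores in [0, 1].
    Returns an (n_cells, 3) array of (z, y, x) centers.
    """
    coords = np.argwhere(deconvolved > threshold).astype(float)
    if coords.size == 0:
        return np.empty((0, 3))
    # bin_seeding speeds up mean shift on dense voxel clouds
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    ms.fit(coords)
    return ms.cluster_centers_
```

At whole-brain scale this would run tile by tile on the virtually fused volume rather than on the full teravoxel array at once.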
Quantitative and scalable whole-brain neuroanatomical mapping, with cellular resolution and molecular specificity, poses significant technological challenges. Indeed, high image quality must be preserved reliably across the entire specimen, not only in a few representative volumes. At the same time, robust and automated image analysis methods must scale to teravoxel datasets. Here, we present an experimental pipeline, involving tissue clearing, high-resolution light-sheet microscopy, volume registration to an atlas, and deep learning strategies for image analysis, allowing the reconstruction of 3D maps of selected cell types in the whole mouse brain. We employed RAPID autofocusing [Silvestri et al., submitted] to keep the system sharply in focus throughout the entire mouse brain, without reducing the microscope throughput. Images were spatially anchored to a reference atlas using semi-automatic tools (xNII family, http://www.nesys.uio.no). Finally, we used novel high-throughput tools for image processing, including deep learning strategies [Frasconi et al., 2014], to localize single neurons with high accuracy. By applying our pipeline to transgenically-labeled samples, we can produce an atlas of the spatial distribution of genetically-defined cell types. Besides being a valuable reference for neurobiologists, these datasets can be used to build realistic simulations of neuronal functioning, such as in the Human Brain Project.
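The morphology-segmentation step mentioned in the first abstract — running a contour-finding algorithm over the 2D conv-net's per-pixel probability map — could be sketched as below. This is a minimal illustration under assumptions: the function name `segment_cells` and the 0.5 threshold are hypothetical, and scikit-image's `find_contours` stands in for whichever contour finder the authors used.

```python
import numpy as np
from skimage import measure

def segment_cells(prob_map, threshold=0.5):
    """Extract candidate cell outlines from a per-pixel 'neuron'
    probability map (the kind of output a 2D conv-net produces).

    prob_map : 2D array of probabilities in [0, 1].
    Returns a list of closed contours, each an (n_points, 2) array
    of (row, col) coordinates.
    """
    contours = measure.find_contours(prob_map, threshold)
    # keep only closed contours: open ones touch the tile border
    # and cannot be complete cell outlines
    return [c for c in contours if np.allclose(c[0], c[-1])]
```

Each closed contour delineates one cell body, from which shape features (area, eccentricity, etc.) can be derived to classify neurons.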