Allen Brain Atlas API
The primary data of the Allen Mouse Brain Connectivity Atlas consists of high-resolution images of axonal projections targeting different anatomic regions or various cell types using Cre-dependent specimens. Each data set is processed through an informatics data analysis pipeline to obtain spatially mapped quantified projection information.
From the API, you can:
Download quantified projection values by structure
Download quantified projection values as 3-D grids
Query the image synchronization service
Download atlas images, drawings and structure ontology
This document provides a brief overview of the data, database organization and example queries. API database object names are in camel case. See the main API documentation for more information on data models and query syntax.
Experimental Overview and Metadata
Experimental data from the Atlas is associated with the "Mouse Connectivity Projection" Product.
Each Specimen is injected with a viral tracer that labels axons by expressing a fluorescent protein. For each experiment, the injection site is analyzed and assigned a primary injection structure and, if applicable, a list of secondary injection structures.
Labeled axons are visualized using serial two-photon tomography. A typical SectionDataSet consists of 140 coronal images at 100 µm sampling density. Each image has 0.35 µm pixel resolution and raw data is in 16-bit per channel format. Background fluorescence in the red channel illustrates basic anatomy and structures of the brain, and the injection site and projections are shown in the green channel. No data was collected in the blue channel.
From the API, detailed information about SectionDataSets, SectionImages, Injections and TransgenicLines can be obtained using RMA queries.
- All experiments in the "Mouse Connectivity Projection" Product
- All experiments with injection in the primary visual area (VISp, structure_id=385)
- Detailed metadata for one experiment with injection in the VISp (id=126862385) (http://api.brain-map.org/api/v2/data/query.xml?criteria=model::SectionDataSet,rma::criteria,%5Bid$eq126862385%5D,rma::include,specimen(injections(structure%5Bid$eq385%5D)),equalization,sub_images,rma::options%5Border$eq'sub_images.section_number$asc'%5D)
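Queries like the ones above can also be issued directly from an analysis environment. A minimal MATLAB sketch (using `webread`, and requesting JSON rather than XML) that fetches the metadata for the experiment above:

```
% Fetch metadata for one SectionDataSet via RMA, as JSON.
url = ['http://api.brain-map.org/api/v2/data/query.json?criteria=' ...
       'model::SectionDataSet,rma::criteria,%5Bid$eq126862385%5D'];
result = webread(url);                % decoded JSON response -> struct
fprintf('success=%d, %d data set(s) matched\n', result.success, result.num_rows);
```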
Figure: Projection dataset (id=126862385) with injection in the primary visual area (VISp) as visualized in the web application image viewer.
To provide a uniform look across all experiments, default window and level values were computed from intensity histograms. For each experiment, the upper threshold defaults to 2.33 x the 95th percentile value for the red channel and 6.33 x the 95th percentile value for the green channel. The default thresholds can be used to download images and/or image regions in 8-bit per channel format.
In the web application, images from the experiment are visualized in an experimental detail page. All displayed information, images and structural projection values are also available through the API.
Figure: Experiment detail page for an injection into the primary visual area.
See the image download page to learn how to download images at different resolutions and regions of interest.
- RMA query to fetch meta-information of one projection image
- Download image downsampled by factor of 6 using default thresholds
- Download a region of interest at full resolution using default thresholds
Informatics Data Processing
The informatics data processing pipeline produces results that enable navigation, analysis and visualization of the data. The pipeline consists of the following components:
- an annotated 3-D reference space,
- an alignment module,
- a projection detection module,
- a projection gridding module, and
- a structure unionizer module.
The output of the pipeline is quantified projection values at a grid voxel level and at a structure level according to the integrated reference atlas ontology. The grid level data are used downstream to provide a correlative search service and to support visualization of spatial relationships. See the informatics processing white paper for more details.
3-D Reference Models
The backbone of the automated pipeline is an annotated 3-D reference space based on the same Specimen used for the coronal plates of the integrated reference atlas. A brain volume was reconstructed from the SectionImages using a combination of high-frequency section-to-section histology registration and low-frequency registration of histology to (ex-cranio) MRI. This first-stage reconstructed volume was then aligned with a sagittally sectioned Specimen. Once a straight mid-sagittal plane was achieved, a synthetic symmetric space was created by reflecting one hemisphere to the other side of the volume.
Over 800 Structures were extracted from the 2-D coronal reference atlas plates and interpolated to create symmetric 3-D annotations. Structures in the reference atlas are arranged hierarchically: each structure has one parent, and the parent-child link denotes a "part-of" relationship. Structures are assigned a color to visually emphasize their hierarchical positions in the brain.
See the atlas drawings and ontologies page for more information.
To avoid possible bias introduced by using a single specimen as a registration target, the Nissl-based 3-D reference volume was not directly used. Instead, a large number of brains were mapped in advance and averaged to form the registration target. This averaged template may be updated periodically to include more brain specimens and is available for download.
All SectionDataSets are registered to ReferenceSpace id = 9 in PIR orientation (+x = posterior, +y = inferior, +z = right).
Figure: The common reference space is in PIR orientation where x axis = Anterior-to-Posterior, y axis = Superior-to-Inferior and z axis = Left-to-Right.
3-D annotation volumes were updated in the June 2013 release to reflect changes in the atlas drawings and ontology. Also note that the volumes are now in a 32-bit format to accommodate large structure identifiers.
Five volumetric data files are available for download:
- atlasVolume: uchar (8bit) grayscale Nissl volume of the reconstructed brain at 25 µm resolution.
- annotation: uint (32bit) structural annotation volume without fiber tracts at 25 µm resolution. The value represents the ID of the finest-level structure annotated for the voxel. Note: the 3-D mask for any structure is composed of all voxels annotated for that structure and all of its descendants in the structure hierarchy.
- annotationFiber: uint (32bit) fiber tracts annotation volume at 25 µm resolution.
- averageTemplate: ushort (16bit) average brain template used as registration target at 25 µm resolution.
- gridAnnotation - 100 µm: uint (32bit) combined structural and fiber tract annotation volume at grid (100 µm) resolution for projection analysis.
Volumetric data is stored in an uncompressed format with a simple text header file in MetaImage format.
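For orientation, a MetaImage header is a short key/value text file. The example below is illustrative only; the DimSize and ElementType for each file should be taken from the header actually shipped with the download:

```
ObjectType = Image
NDims = 3
DimSize = 528 320 456
ElementType = MET_UCHAR
ElementSpacing = 25 25 25
ElementDataFile = atlasVolume.raw
```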
Example Matlab code snippet to read in the 25µm atlas and annotation volumes:
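A minimal sketch is shown below. The 528 x 320 x 456 dimensions are those documented for the 25 µm volumes but should be verified against the .mhd headers in the download; file names are assumed to match the unzipped bundle.

```
% Read the 25 µm Nissl atlas volume (8-bit) and annotation volume (32-bit).
dims = [528 320 456];                 % voxel counts; verify against .mhd header

fid = fopen('atlasVolume.raw', 'r', 'l');
atlasVolume = fread(fid, prod(dims), 'uint8=>uint8');
fclose(fid);
atlasVolume = reshape(atlasVolume, dims);

fid = fopen('annotation.raw', 'r', 'l');
annotation = fread(fid, prod(dims), 'uint32=>uint32');
fclose(fid);
annotation = reshape(annotation, dims);

% View one coronal section and its structural annotation.
figure; imagesc(squeeze(atlasVolume(264, :, :))); colormap(gray); axis image;
figure; imagesc(squeeze(annotation(264, :, :))); axis image;
```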
Example Matlab code snippet to read in the 100µm grid annotation volume:
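A sketch along the same lines, assuming 133 x 81 x 115 grid dimensions (again, check the .mhd header):

```
% Read the 100 µm combined structural/fiber-tract grid annotation volume.
gridDims = [133 81 115];              % verify against the gridAnnotation .mhd header
fid = fopen('gridAnnotation.raw', 'r', 'l');
gridAnnotation = fread(fid, prod(gridDims), 'uint32=>uint32');
fclose(fid);
gridAnnotation = reshape(gridAnnotation, gridDims);
```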
Image Alignment
The aim of image alignment is to establish a mapping from each SectionImage to the 3-D reference space. The module reconstructs a 3-D Specimen volume from its constituent SectionImages and registers the volume to the 3-D reference model by maximizing mutual information between the red channel of the experimental data and the average template.
Once registration is achieved, information from the 3-D reference model can be transferred to the reconstructed Specimen and vice versa. The resulting transform information is stored in the database. Each SectionImage has an Alignment2d object that represents the 2-D affine transform between an image pixel position and a location in the Specimen volume. Each SectionDataSet has an Alignment3d object that represents the 3-D affine transform between a location in the Specimen volume and a point in the 3-D reference model. Spatial correspondence between any two SectionDataSets from different Specimens can be established by composing these transforms.
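Conceptually, the composition works as follows. The sketch below uses placeholder identity transforms; the actual affine parameters must be read from the Alignment2d and Alignment3d objects returned by the API:

```
% Hypothetical composition of alignment transforms (placeholder matrices).
T2 = eye(3);                % Alignment2d: image pixel -> volume (x, y), microns
T3 = eye(4);                % Alignment3d: volume -> 3-D reference model, microns
sectionNumber    = 70;      % section on which the pixel lies
sectionThickness = 100;     % microns between adjacent sections

pixel = [1024; 768; 1];                           % pixel position on the image
xy    = T2 * pixel;                               % (x, y) in the specimen volume
vol   = [xy(1); xy(2); sectionNumber * sectionThickness; 1];
ref   = T3 * vol;                                 % location in the reference model
```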
For convenience, a set of "Image Sync" API methods is available to find corresponding positions between SectionDataSets, the 3-D reference model and structures. Note that all locations on SectionImages are reported in pixel coordinates and all locations in 3-D ReferenceSpaces are reported in microns. These methods are used by the Web application to provide the image synchronization feature in the multiple image viewer (see Figure).
- Sync a VISp and VISal experiment to a location in a SCs SectionDataSet
- Sync the P56 coronal reference atlas to a location in the SCs SectionDataSet
Figure: Point-based image synchronization. Multiple image-series in the Zoom-and-Pan (Zap) viewer can be synchronized to the same approximate location. Before and after synchronization screenshots show projection data with injections in the superior colliculus (SCs), primary visual area (VISp) and anterolateral visual area (VISal), together with the relevant coronal plates of the Allen Reference Atlas. All experiments show strong signal in the thalamus.
Projection Data Segmentation
For every Projection image, a grayscale mask is generated that identifies pixels corresponding to labeled axon trajectories. The segmentation algorithm is based on image edge/line detection and morphological filtering.
Figure: Signal detection for projection data with injection in the primary motor area. Screenshot of a segmentation mask showing detected signal in the ventral posterolateral nucleus of the thalamus (VPL), internal capsule (int), caudoputamen (CP) and supplemental somatosensory area (SSs). In the Web application, the mask is color-coded for display: green indicates a pixel is part of an edge-like object while yellow indicates pixels that are part of a more diffuse region.
Projection Data Gridding
For each dataset, the gridding module creates a low resolution 3-D summary of the labeled axonal trajectories and resamples the data to the common coordinate space of the 3-D reference model. Casting all data into a canonical space allows for easy cross-comparison between datasets. The projection data grids can also be viewed directly as 3-D volumes or used for analysis (i.e. target, spatial and correlative searches).
Each image in a dataset is divided into a 100 x 100 µm grid. Pixel-based statistics are computed using information from the primary image and the segmentation mask:
- projection density = sum of detected pixels / sum of all pixels in division
- projection intensity = sum of detected pixel intensity / sum of detected pixels
- projection energy = projection intensity * projection density
The resulting 3-D grid is then transformed into the standard reference space.
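The three statistics can be sketched for a single grid division as follows, with `block` standing in for the raw image tile and `mask` for the matching segmentation mask (both illustrative variables here):

```
% Per-division projection statistics for one 100 x 100 µm grid square.
block = rand(286);                  % illustrative tile (~100 µm at 0.35 µm/pixel)
mask  = block > 0.99;               % illustrative segmentation mask

projectionDensity   = sum(mask(:)) / numel(mask);
projectionIntensity = sum(block(mask)) / max(sum(mask(:)), 1);
projectionEnergy    = projectionIntensity * projectionDensity;
```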
Grid data can be downloaded for each SectionDataSet using the 3-D Grid Data Service. The service returns a zip file containing the volumetric data for density, intensity and/or energy in an uncompressed format with a simple text header file in MetaImage format. Structural annotation for each grid voxel can be obtained via the ReferenceSpace gridAnnotation volume file at 100 µm grid resolution.
Voxels with no data are assigned a value of "-1".
Example Matlab code snippet to read in the 100 µm density grid volume:
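A sketch, again assuming 133 x 81 x 115 grid dimensions and the float element type stated in the downloaded .mhd header:

```
% Read a 100 µm projection density grid (32-bit float voxels).
gridDims = [133 81 115];            % verify against the .mhd header in the zip
fid = fopen('density.raw', 'r', 'l');
density = fread(fid, prod(gridDims), 'float');
fclose(fid);
density = reshape(density, gridDims);
density(density < 0) = NaN;         % "no data" voxels are stored as -1
```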
Comparing Projection Data Grids and Gene Expression Grids
Due to section sampling density, projection data grids are at 100 µm resolution while gene expression grids are at 200 µm resolution. To compare the two numerically, the gene expression data must be upsampled with appropriate interpolation. When interpolating, "no data" (-1) voxels need to be handled explicitly.
Example Matlab code snippet to upsample gene expression grid with "no data" handling:
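One way to sketch this: set "no data" voxels to NaN before trilinear interpolation, so that voxels influenced by missing data stay NaN instead of taking on spurious interpolated values. The 67 x 41 x 58 expression grid dimensions are assumed; upsampling by 2x yields a 133 x 81 x 115 grid matching the projection data.

```
% Upsample a 200 µm expression energy grid to 100 µm with "no data" handling.
exprDims = [67 41 58];              % assumed 200 µm grid; verify against .mhd
fid = fopen('energy.raw', 'r', 'l');
expr = fread(fid, prod(exprDims), 'float');
fclose(fid);
expr = reshape(expr, exprDims);
expr(expr < 0) = NaN;               % mark "no data" voxels before interpolating

% Trilinear interpolation onto a 2x finer grid; NaNs propagate, so voxels
% touched by "no data" remain NaN rather than taking on spurious values.
[x, y, z]    = ndgrid(1:exprDims(1), 1:exprDims(2), 1:exprDims(3));
[xi, yi, zi] = ndgrid(1:0.5:exprDims(1), 1:0.5:exprDims(2), 1:0.5:exprDims(3));
exprUp = interpn(x, y, z, expr, xi, yi, zi, 'linear');   % 133 x 81 x 115
```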
Figure: ISH SectionDataSet (id=73636089) for gene Rasd2 showing enriched expression in the striatum (left). Projection SectionDataSet (id=73636089) with injection in the anterior part of the basolateral amygdalar nucleus (BLAa) showing projection to the striatum and other brain areas (center). One coronal slice of the BLAa projection density grid (green) merged with an upsampled and interpolated Rasd2 expression density grid (red).
Projection Structure Unionization
Projection signal statistics can be computed for each structure delineated in the reference atlas by combining or unionizing grid voxels with the same 3-D structural label. While the reference atlas is typically annotated at the lowest level of the ontology tree, statistics at upper level structures can be obtained by combining measurements of the hierarchical children to obtain statistics for the parent structure. The unionization process also separates out the left versus right hemisphere contributions as well as the injection versus non-injection components.
Projection statistics are encapsulated as a ProjectionStructureUnionize object associated with one Structure, either the left, right or both Hemispheres, and one SectionDataSet. ProjectionStructureUnionize records can be downloaded via RMA.
- Download structure projection signal statistics for one VISp injection experiment exclusive of injection area
- Download injection site statistics for the same experiment
ProjectionStructureUnionize data is used in the web application to display projection summary bar graphs.
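A sketch of such a query from MATLAB, restricting to non-injection records for the VISp experiment used earlier (field names follow the ProjectionStructureUnionize model in the API documentation):

```
% Fetch non-injection unionize records for one experiment via RMA, as JSON.
url = ['http://api.brain-map.org/api/v2/data/query.json?criteria=' ...
       'model::ProjectionStructureUnionize,rma::criteria,' ...
       '%5Bsection_data_set_id$eq126862385%5D%5Bis_injection$eqfalse%5D' ...
       ',rma::options%5Bnum_rows$eq50%5D'];
result = webread(url);
fprintf('%d of %d records returned\n', result.num_rows, result.total_rows);
```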