
Allen Brain Atlas API

The primary data of the Allen Mouse Brain Connectivity Atlas consists of high-resolution images of axonal projections targeting different anatomic regions or various cell types using Cre-dependent specimens. Each data set is processed through an informatics data analysis pipeline to obtain spatially mapped, quantified projection information.

From the API, you can:

  • Download images
  • Download quantified projection values by structure
  • Download quantified projection values as 3-D grids
  • Query the image synchronization service
  • Download atlas images, drawings and structure ontology

This document provides a brief overview of the data, database organization and example queries. API database object names are in camel case. See the main API documentation for more information on data models and query syntax.

Experimental Overview and Metadata

Experimental data from the Atlas is associated with the "Mouse Connectivity Projection" Product.

Each Specimen is injected with a viral tracer that labels axons by expressing a fluorescent protein. For each experiment, the injection site is analyzed and assigned a primary injection structure and if applicable a list of secondary injection structures.

Labeled axons are visualized using serial two-photon tomography. A typical SectionDataSet consists of 140 coronal images at 100 µm sampling density. Each image has 0.35 µm pixel resolution and raw data is in 16-bit per channel format.

From the API, detailed information about SectionDataSets, SectionImages, Injections and TransgenicLines can be obtained using RMA queries.
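As a sketch of such an RMA query (Python), the following composes a URL requesting connectivity SectionDataSets together with their specimen, injections and injection structures. The product id used here (5, for "Mouse Connectivity Projection") is an assumption and should be confirmed against the Product model in the API.

```python
# Sketch of an RMA query URL for connectivity SectionDataSets.
# product_id=5 ("Mouse Connectivity Projection") is an assumption;
# confirm it by querying the Product model.
BASE = "http://api.brain-map.org/api/v2/data"

def rma_section_data_sets(product_id=5, num_rows=10):
    return (f"{BASE}/SectionDataSet/query.json"
            f"?criteria=products[id$eq{product_id}]"
            f"&include=specimen(injections(structure))"
            f"&num_rows={num_rows}")

print(rma_section_data_sets())
```

Fetching the resulting URL returns JSON that can be paged through with the `num_rows` and `start_row` parameters.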


Figure: Projection dataset (id=126862385) with injection in the primary visual area (VISp) as visualized in the web application image viewer.

To provide a uniform look over all experiments, default window and level values were computed using intensity histograms. For each experiment, the upper threshold defaults to (2.33 x the 95th percentile value) for the red channel and (6.33 x the 95th percentile value) for the green channel. The default threshold can be used to download images and/or image region in 8-bit per channel image format.
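The default window computation above can be sketched as follows (Python). The nearest-rank percentile and the linear 8-bit mapping are illustrative assumptions; only the multipliers (2.33 for red, 6.33 for green) come from the text.

```python
def default_window(pixels, multiplier):
    """Default display window from the 95th-percentile intensity.

    multiplier is 2.33 for the red channel or 6.33 for the green channel.
    Nearest-rank percentile is an assumption of this sketch.
    """
    s = sorted(pixels)
    p95 = s[95 * (len(s) - 1) // 100]     # nearest-rank 95th percentile
    upper = min(multiplier * p95, 65535)  # clamp to the 16-bit range
    return 0, upper

def to_8bit(value, window):
    """Map a 16-bit intensity into 0..255 using the window (assumed linear)."""
    lo, hi = window
    v = max(lo, min(value, hi))
    return int(round(255 * (v - lo) / (hi - lo)))
```

For example, applying `default_window` to a channel's pixel values yields the upper threshold used when requesting 8-bit downloads.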

In the web application, images from each experiment are visualized on an experiment detail page. All displayed information, images and structural projection values are also available through the API.

Figure: Experiment detail page for an injection into the primary visual area.

See the image download page to learn how to download images at different resolution and regions of interest.


Informatics Data Processing

The informatics data processing pipeline produces results that enable navigation, analysis and visualization of the data. The pipeline consists of the following components:

  • an annotated 3-D reference space,
  • an alignment module,
  • a projection detection module,
  • a projection gridding module, and
  • a structure unionizer module.

The output of the pipeline is quantified projection values at a grid voxel level and at a structure level according to the integrated reference atlas ontology. The grid level data are used downstream to provide a correlative search service and to support visualization of spatial relationships. See the informatics processing whitepaper for more details.

3-D Reference Models

The backbone of the automated pipeline is an annotated 3-D reference space based on the same Specimen used for the coronal plates of the integrated reference atlas. A brain volume was reconstructed from the SectionImages using a combination of high-frequency section-to-section histology registration and low-frequency histology-to-MRI (ex cranio) registration. This first-stage reconstructed volume was then aligned with a sagittally sectioned Specimen. Once a straight mid-sagittal plane was achieved, a synthetic symmetric space was created by reflecting one hemisphere to the other side of the volume.

Over 800 Structures were extracted from the 2-D coronal reference atlas plates and interpolated to create symmetric 3-D annotations. Structures in the reference atlas are arranged hierarchically: each structure has one parent, and the parent-child link denotes a "part-of" relationship. Structures are assigned a color to visually emphasize their hierarchical positions in the brain.

See the atlas drawings and ontologies page for more information.

To avoid possible bias introduced by using a single specimen as a registration target, the Nissl-based 3-D reference volume was not directly used. Instead, a large number of brains were mapped in advance and averaged to form the registration target. This averaged template may be updated periodically to include more brain specimens and is available for download.

All SectionDataSets are registered to ReferenceSpace id = 9 in PIR orientation (+x = posterior, +y = inferior, +z = right).

Figure: The common reference space is in PIR orientation where x axis = Anterior-to-Posterior, y axis = Superior-to-Inferior and z axis = Left-to-Right.
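Assuming zero-based voxel indices at the 25 µm resolution described below, converting between voxel indices and micron coordinates in this PIR space is a simple scaling (a Python sketch; the zero-based convention is an assumption of this sketch):

```python
VOXEL_UM = 25  # 25 micron reference volume resolution

def voxel_to_um(index):
    """Voxel index (i, j, k) -> microns in PIR orientation
    (+x = posterior, +y = inferior, +z = right)."""
    i, j, k = index
    return (i * VOXEL_UM, j * VOXEL_UM, k * VOXEL_UM)

def um_to_voxel(point):
    """Microns (x, y, z) in PIR orientation -> containing voxel index."""
    x, y, z = point
    return (int(x // VOXEL_UM), int(y // VOXEL_UM), int(z // VOXEL_UM))
```

For example, `voxel_to_um((264, 160, 228))` gives a location near the middle of the 528 x 320 x 456 volume in microns.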

3-D annotation volumes were updated in the June 2013 release to reflect changes in the atlas drawings and ontology. Also note that the volumes are now in a 32-bit format to accommodate large structure identifiers.

Five volumetric data files are available for download:

  • atlasVolume: uchar (8bit) grayscale Nissl volume of the reconstructed brain at 25 µm resolution.
  • annotation: uint (32bit) structural annotation volume without fiber tracts at 25 µm resolution. The value represents the ID of the finest level structure annotated for the voxel. Note: the 3-D mask for any structure is composed of all voxels annotated for that structure and all of its descendants in the structure hierarchy.
  • annotationFiber: uint (32bit) fiber tracts annotation volume at 25 µm resolution.
  • averageTemplate: ushort (16bit) average brain template used as registration target at 25 µm resolution.
  • gridAnnotation: uint (32bit) combined structural and fiber tract annotation volume at grid (100 µm) resolution for projection analysis.
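The note above, that a structure's 3-D mask includes all of its descendants, can be sketched as follows (Python; the flat annotation list and parent-to-children map are stand-ins for the annotation volume and the structure ontology):

```python
def descendants(structure_id, children):
    """All structure ids in the subtree rooted at structure_id, inclusive.

    children maps a structure id to the list of its child ids.
    """
    ids = [structure_id]
    for child in children.get(structure_id, []):
        ids.extend(descendants(child, children))
    return ids

def structure_mask(annotation_voxels, structure_id, children):
    """1 where a voxel is annotated as the structure or any descendant."""
    subtree = set(descendants(structure_id, children))
    return [1 if v in subtree else 0 for v in annotation_voxels]
```

The same subtree logic applies whether the voxels come from the 25 µm annotation volume or the 100 µm gridAnnotation volume.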

Volumetric data is stored in an uncompressed format with a simple text header file in MetaImage format.
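For reference, a MetaImage header for the 25 µm annotation volume would look roughly like the following. The field values here are inferred from the sizes and element types listed above; the .mhd file in the actual download is authoritative.

```
ObjectType = Image
NDims = 3
DimSize = 528 320 456
ElementType = MET_UINT
ElementSpacing = 25 25 25
ElementByteOrderMSB = False
ElementDataFile = annotation.raw
```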

Example Matlab code snippet to read in the 25µm atlas and annotation volumes:

% ------------
% Download and unzip the atlasVolume, annotation, annotationFiber and averageTemplate zip files
% ------------

% 25 micron volume size
% (named volSize to avoid shadowing the built-in size function)
volSize = [528 320 456];

% VOL = 3-D matrix of atlas Nissl volume
fid = fopen('atlasVolume/atlasVolume.raw', 'r', 'l' );
VOL = fread( fid, prod(volSize), 'uint8' );
fclose( fid );
VOL = reshape(VOL,volSize);

% ANO = 3-D matrix of structural annotation labels
fid = fopen('P56_Mouse_annotation/annotation.raw', 'r', 'l' );
ANO = fread( fid, prod(volSize), 'uint32' );
fclose( fid );
ANO = reshape(ANO,volSize);

% FIBT = 3-D matrix of fiber tract annotation labels
fid = fopen('P56_Mouse_annotationFiber/annotationFiber.raw', 'r', 'l' );
FIBT = fread( fid, prod(volSize), 'uint32' );
fclose( fid );
FIBT = reshape(FIBT,volSize);

% AVGT = 3-D matrix of average template volume
fid = fopen('averageTemplate/atlasVolume.raw', 'r', 'l' );
AVGT = fread( fid, prod(volSize), 'uint16' );
fclose( fid );
AVGT = reshape(AVGT,volSize);

% Display one coronal section
figure;imagesc(squeeze(VOL(264,:,:)));colormap(gray);

% Display one sagittal section
figure;imagesc(squeeze(VOL(:,:,228)));colormap(gray);

Example Matlab code snippet to read in the 100µm grid annotation volume:

% -----------
% Download and unzip the 100 micron gridAnnotation zip files
% -----------

%  grid volume size
sizeGrid = [133, 81, 115];

% ANOGD = 3-D matrix of grid-level annotation labels
fid = fopen( 'P56_Mouse_gridAnnotation_100micron/gridAnnotation.raw', 'r', 'l' );
ANOGD = fread( fid, prod(sizeGrid), 'uint32' );
fclose( fid );
ANOGD = reshape(ANOGD,sizeGrid);

% Display one coronal and one sagittal section
figure;imagesc(squeeze(ANOGD(73,:,:)));colormap(lines);caxis([0 3000]);
figure;imagesc(squeeze(ANOGD(:,:,78)));colormap(lines);caxis([0 3000]);

Image Alignment

The aim of image alignment is to establish a mapping from each SectionImage to the 3-D reference space. The module reconstructs a 3-D Specimen volume from its constituent SectionImages and registers the volume to the 3-D reference model by maximizing mutual information between the red channel of the experimental data and the average template.

Once registration is achieved, information from the 3-D reference model can be transferred to the reconstructed Specimen and vice versa. The resulting transform information is stored in the database. Each SectionImage has an Alignment2d object that represents the 2-D affine transform from an image pixel position to a location in the Specimen volume. Each SectionDataSet has an Alignment3d object that represents the 3-D affine transform between a location in the Specimen volume and a point in the 3-D reference model. Spatial correspondence between any two SectionDataSets from different Specimens can be established by composing these transforms.
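The composition of these transforms can be sketched as follows (Python). The coefficient ordering shown here (row-major matrix followed by translation) is an assumption of this sketch, not the API's documented parameter layout.

```python
def apply_affine2d(t, x, y):
    # t = (a, b, tx, c, d, ty): assumed 2x2 row-major matrix plus translation,
    # mapping an image pixel position into the specimen volume (Alignment2d role).
    a, b, tx, c, d, ty = t
    return (a * x + b * y + tx, c * x + d * y + ty)

def apply_affine3d(t, p):
    # t = 12 values: assumed 3x3 row-major matrix then a 3-vector translation,
    # mapping a specimen-volume location into the reference space (Alignment3d role).
    m, tr = t[:9], t[9:]
    x, y, z = p
    return tuple(m[3*i] * x + m[3*i+1] * y + m[3*i+2] * z + tr[i]
                 for i in range(3))

def pixel_to_reference(align2d, section_z, align3d, px, py):
    """Compose: image pixel -> specimen volume -> 3-D reference space."""
    vx, vy = apply_affine2d(align2d, px, py)
    return apply_affine3d(align3d, (vx, vy, section_z))
```

Composing a forward transform from one data set with the inverse from another is what establishes correspondence between two Specimens.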

For convenience, a set of "Image Sync" API methods is available to find corresponding position between SectionDataSets, the 3-D reference model and structures. Note that all locations on SectionImages are reported in pixel coordinates and all locations in 3-D ReferenceSpaces are reported in microns. These methods are used by the Web application to provide the image synchronization feature in the multiple image viewer (see Figure).


Figure: Point-based image synchronization. Multiple image-series in the Zoom-and-Pan (Zap) viewer can be synchronized to the same approximate location. Before and after synchronization screenshots show projection data with injection in the superior colliculus (SCs), primary visual area (VISp), anterolateral visual area (VISal), and the relevant coronal plates of the Allen Reference Atlas. All experiments show strong signal in the thalamus.

Projection Data Segmentation

For every Projection image, a grayscale mask is generated that identifies pixels corresponding to labeled axon trajectories. The segmentation algorithm is based on image edge/line detection and morphological filtering.

The segmentation mask image is the same size and pixel resolution as the primary projection image and can be downloaded through the image download service.

Figure: Signal detection for projection data with injection in the primary motor area. Screenshot of a segmentation mask showing detected signal in the ventral posterolateral nucleus of the thalamus (VPL), internal capsule (int), caudoputamen (CP) and supplemental somatosensory area (SSs). In the Web application, the mask is color-coded for display: green indicates a pixel is part of an edge-like object while yellow indicates pixels that are part of a more diffuse region.

Projection Data Gridding

For each dataset, the gridding module creates a low resolution 3-D summary of the labeled axonal trajectories and resamples the data to the common coordinate space of the 3-D reference model. Casting all data into a canonical space allows for easy cross-comparison between datasets. The projection data grids can also be viewed directly as 3-D volumes or used for analysis (e.g. afferent and correlative searches).

Each image in a dataset is divided into a 100 x 100 µm grid. Pixel-based statistics are computed using information from the primary image and the segmentation mask:

  • projection density = sum of detected pixels / sum of all pixels in division
  • projection intensity = sum of detected pixel intensity / sum of detected pixels
  • projection energy = projection intensity * projection density

The resulting 3-D grid is then transformed into the standard reference space.
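The per-division statistics above can be sketched as follows (Python; `pixels` and `mask` stand in for the primary image intensities and the segmentation mask within one 100 x 100 µm division):

```python
def division_stats(pixels, mask):
    """Projection statistics for one grid division.

    pixels: raw intensities of all pixels in the division.
    mask:   1 where the segmentation detected signal, else 0.
    """
    total = len(pixels)
    detected = sum(mask)
    # projection density = detected pixels / all pixels in the division
    density = detected / total if total else 0.0
    # projection intensity = summed detected intensity / detected pixels
    signal = sum(p for p, m in zip(pixels, mask) if m)
    intensity = signal / detected if detected else 0.0
    # projection energy = intensity * density
    energy = intensity * density
    return density, intensity, energy
```

Each statistic is computed per division before the grid is transformed into the reference space.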

Grid data can be downloaded for each SectionDataSet using the 3-D Grid Data Service. The service returns a zip file containing the volumetric data for density, intensity and/or energy in an uncompressed format with a simple text header file in MetaImage format. Structural annotation for each grid voxel can be obtained via the ReferenceSpace gridAnnotation volume file at 100 µm grid resolution.

Voxels with no data are assigned a value of "-1".


Example Matlab code snippet to read in the 100 µm density grid volume:

% Download and unzip the density grid file for VISp SectionDataSet
% -----------

%  grid volume size
sizeGrid = [133, 81, 115];

% DENSITY = 3-D matrix of projection density grid volume
fid = fopen('11_wks_coronal_126862385/density.raw', 'r', 'l' );
DENSITY = fread( fid, prod(sizeGrid), 'float' );
fclose( fid );
DENSITY = reshape(DENSITY,sizeGrid);

% Display one coronal and one sagittal section
figure;imagesc(squeeze(DENSITY(73,:,:)));colormap(hot);caxis([0 1]);
figure;imagesc(squeeze(DENSITY(:,:,78)));colormap(hot);caxis([0 1]);

Comparing Projection Data Grids and Gene Expression Grids

Due to section sampling density, projection data grids are at 100 µm resolution while gene expression grids are at 200 µm resolution. Upsampling the gene expression data with appropriate interpolation is necessary in order to numerically compare the two different types of data. When interpolating, "no data" (-1) voxels need to be handled explicitly.

Example Matlab code snippet to upsample gene expression grid with "no data" handling:

% Download and unzip density volume file for gene Rasd2 coronal SectionDataSet 73636089
urlwrite('', '');

% Download and unzip density volume file for BLAa injection SectionDataSet 113144533
urlwrite('', '');

% Gene expression grids are at 200 micron resolution.
geneGridSize = [67 41 58];
fid = fopen('Rasd2_73636089/density.raw', 'r', 'l' );
Rasd2 = fread( fid, prod(geneGridSize), 'float' );
fclose( fid );
Rasd2 = reshape( Rasd2, geneGridSize );

% Projection grids are at 100 micron resolution
projectionGridSize = [133 81 115];
fid = fopen('BLAa_113144533/density.raw', 'r', 'l' );
BLAa = fread( fid, prod(projectionGridSize), 'float' );
fclose( fid );
BLAa = reshape( BLAa, projectionGridSize );

% Upsample gene expression grid to same dimension as projection grid using linear interpolation
[xi,yi,zi] = meshgrid(1:0.5:41,1:0.5:67,1:0.5:58); %note: matlab transposes x-y
d = Rasd2; d(d<0) = 0; % fill in missing data as zeroes
Rasd2_100 = interp3(d ,xi,yi,zi,'linear');

% Handle "no data" (-1) voxels.
% Create a mask of "data" vs "no data" voxels and apply linear interpolation
m = zeros(size(Rasd2));
m(Rasd2  >= 0) = 1; mi = interp3(m,xi,yi,zi,'linear');

% Normalize data by dividing by interpolated mask. Assign value of "-1" to "no data" voxels.
Rasd2_100 = Rasd2_100 ./ mi;
Rasd2_100( mi <= 0 ) = -1;

% Create a merged image of one coronal plane;
gimg = squeeze(Rasd2_100(52,:,:)); gimg = max(0,gimg); gimg = gimg / 0.025; gimg = min(1,gimg);
pimg = squeeze(BLAa(52,:,:)); pimg = max(0,pimg); pimg = pimg / 0.8; pimg = min(1,pimg);
rgb = zeros([size(gimg),3]); rgb(:,:,1) = gimg; rgb(:,:,2) = pimg;
figure; image(rgb);

Figure: ISH SectionDataSet (id=73636089) for gene Rasd2 showing enriched expression in the striatum (left). Projection SectionDataSet (id=113144533) with injection in the anterior part of the basolateral amygdalar nucleus (BLAa) showing projection to the striatum and other brain areas (center). One coronal slice of the BLAa projection density grid (green) merged with an upsampled and interpolated Rasd2 expression density grid (red; right).

Projection Structure Unionization

Projection signal statistics can be computed for each structure delineated in the reference atlas by combining, or unionizing, grid voxels that share the same 3-D structural label. While the reference atlas is typically annotated at the lowest level of the ontology tree, statistics for upper-level structures can be obtained by combining the measurements of their hierarchical children. The unionization process also separates out the left versus right hemisphere contributions as well as the injection versus non-injection components.
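The hierarchical roll-up can be sketched as follows (Python; the flat voxel label and value lists and the parent-to-children map stand in for the grid annotation, the grid data and the ontology; hemisphere and injection separation are omitted for brevity):

```python
def unionize(labels, values, children, structure_id):
    """Summed grid value and voxel count for a structure,
    including all of its hierarchical descendants.

    labels:   structure id per grid voxel.
    values:   projection statistic (e.g. energy) per grid voxel.
    children: structure id -> list of child structure ids.
    """
    # voxels annotated directly with this structure
    total = sum(v for l, v in zip(labels, values) if l == structure_id)
    count = sum(1 for l in labels if l == structure_id)
    # add the contributions of each hierarchical child
    for child in children.get(structure_id, []):
        child_total, child_count = unionize(labels, values, children, child)
        total += child_total
        count += child_count
    return total, count
```

Dividing the summed value by the voxel count yields a per-structure mean, which is the kind of quantity reported per Structure and Hemisphere in a ProjectionStructureUnionize record.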

Projection statistics are encapsulated as a ProjectionStructureUnionize object associated with one Structure, either left, right or both Hemispheres and one SectionDataSet.


ProjectionStructureUnionize data is used in the web application to display projection summary bar graphs.
