
Allen Brain Atlas API

The primary data of the Allen Mouse Brain Connectivity Atlas consists of high-resolution images of axonal projections targeting different anatomic regions or various cell types using Cre-dependent specimens. Each data set is processed through an informatics data analysis pipeline to obtain spatially mapped quantified projection information.

From the API, you can:

Download images
Download quantified projection values by structure
Download quantified projection values as 3-D grids
Query the source, target, spatial and correlative search services
Query the image synchronization service
Download atlas images, drawings and structure ontology

This document provides a brief overview of the data, database organization and example queries. API database object names are in camel case. See the main API documentation for more information on data models and query syntax.

Experimental Overview and Metadata

Experimental data from the Atlas is associated with the "Mouse Connectivity Projection" Product.

Each Specimen is injected with a viral tracer that labels axons by expressing a fluorescent protein. For each experiment, the injection site is analyzed and assigned a primary injection structure and, if applicable, a list of secondary injection structures.

Labeled axons are visualized using serial two-photon tomography. A typical SectionDataSet consists of 140 coronal images at 100 µm sampling density. Each image has 0.35 µm pixel resolution and raw data is in 16-bit per channel format. Background fluorescence in the red channel illustrates basic anatomy and structures of the brain, and the injection site and projections are shown in the green channel. No data was collected in the blue channel.

From the API, detailed information about SectionDataSets, SectionImages, Injections and TransgenicLines can be obtained using RMA queries.
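As a sketch, such an RMA query can be issued directly from MATLAB (the specific `rma::include` associations shown here are assumptions based on the main API documentation, not values confirmed in this page):

```matlab
% Hypothetical RMA query: fetch metadata for one SectionDataSet, including
% its Injections and the Specimen's TransgenicLines, as XML.
url = ['http://api.brain-map.org/api/v2/data/query.xml?criteria=', ...
       'model::SectionDataSet,rma::criteria,[id$eq126862385],', ...
       'rma::include,injections(structure),specimen(donor(transgenic_lines))'];
xmlText = urlread(url);   % raw XML response; parse as needed
```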

Examples:


Figure: Projection dataset (id=126862385) with injection in the primary visual area (VISp) as visualized in the web application image viewer.

To provide a uniform look across all experiments, default window and level values were computed using intensity histograms. For each experiment, the upper threshold defaults to 2.33 x the 95th percentile value for the red channel and 6.33 x the 95th percentile value for the green channel. The default threshold can be used to download images and/or image regions in 8-bit per channel format.
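As a sketch, the default thresholds can be reproduced from raw channel data (the pixel vectors here are synthetic, and the percentile is computed by sorting to stay in base MATLAB):

```matlab
% Synthetic 16-bit pixel values standing in for one image's red/green channels
red   = randi([0 65535], 100000, 1);
green = randi([0 65535], 100000, 1);

% 95th percentile by sorting (avoids toolbox dependencies)
s = sort(red);   redMax   = 2.33 * s(ceil(0.95 * numel(s)));
s = sort(green); greenMax = 6.33 * s(ceil(0.95 * numel(s)));

% Map a channel to 8-bit using the default window [0, upper threshold]
red8 = uint8( min(double(red) / redMax, 1) * 255 );
```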

In the web application, images from the experiment are visualized in an experimental detail page. All displayed information, images and structural projection values are also available through the API.

Figure: Experiment detail page for an injection into the primary visual area.

See the image download page to learn how to download images at different resolutions and regions of interest.

Examples:

Informatics Data Processing

The informatics data processing pipeline produces results that enable navigation, analysis and visualization of the data. The pipeline consists of the following components:

  • an annotated 3-D reference space,
  • an alignment module,
  • a projection detection module,
  • a projection gridding module, and
  • a structure unionizer module.

The output of the pipeline is quantified projection values at a grid voxel level and at a structure level according to the integrated reference atlas ontology. The grid level data are used downstream to provide a correlative search service and to support visualization of spatial relationships. See the informatics processing white paper for more details.

3-D Reference Models

The backbone of the automated pipeline is an annotated 3-D reference space based on the same Specimen used for the coronal plates of the integrated reference atlas. A brain volume was reconstructed from the SectionImages using a combination of high-frequency section-to-section histology registration and low-frequency histology-to-(ex-cranio) MRI registration. This first-stage reconstructed volume was then aligned with a sagittally sectioned Specimen. Once a straight mid-sagittal plane was achieved, a synthetic symmetric space was created by reflecting one hemisphere to the other side of the volume.

Over 800 Structures were extracted from the 2-D coronal reference atlas plates and interpolated to create symmetric 3-D annotations. Structures in the reference atlas are arranged in a hierarchy: each structure has one parent, and the parent-child link denotes a "part-of" relationship. Structures are assigned a color to visually emphasize their hierarchical positions in the brain.

See the atlas drawings and ontologies page for more information.

To avoid possible bias introduced by using a single specimen as a registration target, the Nissl-based 3-D reference volume was not directly used. Instead, a large number of brains were mapped in advance and averaged to form the registration target. This averaged template may be updated periodically to include more brain specimens and is available for download.

All SectionDataSets are registered to ReferenceSpace id = 9 in PIR orientation (+x = posterior, +y = inferior, +z = right).

Figure: The common reference space is in PIR orientation where x axis = Anterior-to-Posterior, y axis = Superior-to-Inferior and z axis = Left-to-Right.
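For illustration, a point given in reference-space microns can be converted to 1-based voxel subscripts in the 25 µm volumes (the example coordinate is arbitrary):

```matlab
% PIR point in microns: [posterior, inferior, right]
p = [5300 3275 5675];        % hypothetical location
vox = floor(p / 25) + 1;     % 1-based voxel subscripts at 25 micron resolution
% e.g. annotation label at that point: ANO(vox(1), vox(2), vox(3))
```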

3-D annotation volumes were updated in the June 2013 release to reflect changes in the atlas drawings and ontology. Also note that the volumes are now in a 32-bit format to accommodate large structure identifiers.

Five volumetric data files are available for download:

  • atlasVolume: uchar (8bit) grayscale Nissl volume of the reconstructed brain at 25 µm resolution.
  • annotation: uint (32bit) structural annotation volume without fiber tracts at 25 µm resolution. The value represents the ID of the finest level structure annotated for the voxel. Note: the 3-D mask for any structure is composed of all voxels annotated for that structure and all of its descendants in the structure hierarchy.
  • annotationFiber: uint (32bit) fiber tracts annotation volume at 25 µm resolution.
  • averageTemplate: ushort (16bit) average brain template used as registration target at 25 µm resolution.
  • gridAnnotation: uint (32bit) combined structural and fiber tract annotation volume at grid (100 µm) resolution for projection analysis.

All volumetric data is stored in an uncompressed format with a simple text header file in MetaImage format. The raw numerical data is stored as a 1-D array as shown in the figure below.
Figure: Packing of 3-D volumetric data into a 1-D numerical array.
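In this packing the x index varies fastest, which matches MATLAB's column-major reshape. As a quick sanity check (offsets are 0-based into the raw file, subscripts 1-based):

```matlab
% For a volume of size [sx sy sz], voxel (x,y,z) sits at 0-based offset
% (x-1) + sx*(y-1) + sx*sy*(z-1) in the 1-D raw array.
sx = 528; sy = 320; sz = 456;
x = 264; y = 160; z = 228;                    % example voxel
offset = (x-1) + sx*(y-1) + sx*sy*(z-1);
% After VOL = reshape(raw, [sx sy sz]):  VOL(x,y,z) == raw(offset + 1)
```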

Example Matlab code snippet to read in the 25µm atlas and annotation volumes:

% ------------
% Download and unzip the atlasVolume, annotation, annotationFiber and averageTemplate zip files
% ------------

% 25 micron volume size (named sizeVol to avoid shadowing the built-in size function)
sizeVol = [528 320 456];

% VOL = 3-D matrix of atlas Nissl volume
fid = fopen('atlasVolume/atlasVolume.raw', 'r', 'l' );
VOL = fread( fid, prod(sizeVol), 'uint8' );
fclose( fid );
VOL = reshape(VOL,sizeVol);

% ANO = 3-D matrix of structural annotation labels
fid = fopen('P56_Mouse_annotation/annotation.raw', 'r', 'l' );
ANO = fread( fid, prod(sizeVol), 'uint32' );
fclose( fid );
ANO = reshape(ANO,sizeVol);

% FIBT = 3-D matrix of fiber tract annotation labels
fid = fopen('P56_Mouse_annotationFiber/annotationFiber.raw', 'r', 'l' );
FIBT = fread( fid, prod(sizeVol), 'uint32' );
fclose( fid );
FIBT = reshape(FIBT,sizeVol);

% AVGT = 3-D matrix of average template volume
fid = fopen('averageTemplate/averageTemplate.raw', 'r', 'l' );
AVGT = fread( fid, prod(sizeVol), 'uint16' );
fclose( fid );
AVGT = reshape(AVGT,sizeVol);

% Display one coronal section
figure;imagesc(squeeze(VOL(264,:,:)));colormap(gray);
figure;imagesc(squeeze(ANO(264,:,:)));colormap(lines);
figure;imagesc(squeeze(FIBT(264,:,:)));colormap(lines);
figure;imagesc(squeeze(AVGT(264,:,:)));colormap(gray);

% Display one sagittal section
figure;imagesc(squeeze(VOL(:,:,220)));colormap(gray);
figure;imagesc(squeeze(ANO(:,:,220)));colormap(lines);
figure;imagesc(squeeze(FIBT(:,:,220)));colormap(lines);
figure;imagesc(squeeze(AVGT(:,:,220)));colormap(gray);

Example Matlab code snippet to read in the 100µm grid annotation volume:

% -----------
% Download and unzip the 100 micron gridAnnotation zip files
% -----------

%  grid volume size
sizeGrid = [133, 81, 115];

% ANOGD = 3-D matrix of grid-level annotation labels
fid = fopen( 'P56_Mouse_gridAnnotation_100micron/gridAnnotation.raw', 'r', 'l' );
ANOGD = fread( fid, prod(sizeGrid), 'uint32' );
fclose( fid );
ANOGD = reshape(ANOGD,sizeGrid);

% Display one coronal and one sagittal section
figure;imagesc(squeeze(ANOGD(73,:,:)));colormap(lines);caxis([0 3000]);
figure;imagesc(squeeze(ANOGD(:,:,78)));colormap(lines);caxis([0 3000]);

Image Alignment

The aim of image alignment is to establish a mapping from each SectionImage to the 3-D reference space. The module reconstructs a 3-D Specimen volume from its constituent SectionImages and registers the volume to the 3-D reference model by maximizing mutual information between the red channel of the experimental data and the average template.

Once registration is achieved, information from the 3-D reference model can be transferred to the reconstructed Specimen and vice versa. The resulting transform information is stored in the database. Each SectionImage has an Alignment2d object that represents the 2-D affine transform between an image pixel position and a location in the Specimen volume. Each SectionDataSet has an Alignment3d object that represents the 3-D affine transform between a location in the Specimen volume and a point in the 3-D reference model. Spatial correspondence between any two SectionDataSets from different Specimens can be established by composing these transforms.
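A minimal sketch of composing the two transforms, with A2 and A3 as hypothetical stand-ins for the stored Alignment2d and Alignment3d parameters (the actual parameter names and values come from the database):

```matlab
% Alignment2d: 2x3 affine taking an image pixel (px, py) to a 2-D position
% within the specimen volume; the section supplies the third coordinate.
A2 = [0.35 0 0; 0 0.35 0];               % hypothetical values
px = 1000; py = 2000; sectionZ = 5600;   % pixel coords + section depth (microns)
pVol = [A2 * [px; py; 1]; sectionZ];

% Alignment3d: 3x4 affine taking the volume position to the 3-D reference
% space, in microns.
A3 = [1 0 0 0; 0 1 0 0; 0 0 1 0];        % hypothetical (identity) transform
pRef = A3 * [pVol; 1];
```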

For convenience, a set of "Image Sync" API methods is available to find corresponding positions between SectionDataSets, the 3-D reference model and structures. Note that all locations on SectionImages are reported in pixel coordinates and all locations in 3-D ReferenceSpaces are reported in microns. These methods are used by the Web application to provide the image synchronization feature in the multiple image viewer (see Figure).

Examples:

Figure: Point-based image synchronization. Multiple image-series in the Zoom-and-Pan (Zap) viewer can be synchronized to the same approximate location. Before and after synchronization screenshots show projection data with injection in the superior colliculus (SCs), primary visual area (VISp), anterolateral visual area (VISal), and the relevant coronal plates of the Allen Reference Atlas. All experiments show strong signal in the thalamus.

Projection Data Segmentation

For every Projection image, a grayscale mask is generated that identifies pixels corresponding to labeled axon trajectories. The segmentation algorithm is based on image edge/line detection and morphological filtering.

The segmentation mask image is the same size and pixel resolution as the primary projection image and can be downloaded through the image download service.

Figure: Signal detection for projection data with injection in the primary motor area. Screenshot of a segmentation mask showing detected signal in the ventral posterolateral nucleus of the thalamus (VPL), internal capsule (int), caudoputamen (CP) and supplemental somatosensory area (SSs). In the Web application, the mask is color-coded for display: green indicates a pixel is part of an edge-like object while yellow indicates pixels that are part of a more diffuse region.

Projection Data Gridding

For each dataset, the gridding module creates a low resolution 3-D summary of the labeled axonal trajectories and resamples the data to the common coordinate space of the 3-D reference model. Casting all data into a canonical space allows for easy cross-comparison between datasets. The projection data grids can also be viewed directly as 3-D volumes or used for analysis (i.e. target, spatial and correlative searches).

Each image in a dataset is divided into a 100 x 100 µm grid. Pixel-based statistics are computed using information from the primary image and the segmentation mask:

  • projection density = sum of detected pixels / sum of all pixels in division
  • projection intensity = sum of detected pixel intensity / sum of detected pixels
  • projection energy = projection intensity * projection density

The resulting 3-D grid is then transformed into the standard reference space.
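As a sketch, the per-division statistics can be computed directly from the primary image and segmentation mask restricted to one grid division (the 3 x 3 intensities here are synthetic):

```matlab
% Synthetic intensities for one grid division and its segmentation mask
img = [0 200 0; 0 180 160; 0 0 0];
seg = img > 0;                              % detected pixels

projectionDensity   = sum(seg(:)) / numel(seg);        % 3/9
projectionIntensity = sum(img(seg)) / sum(seg(:));     % 540/3 = 180
projectionEnergy    = projectionIntensity * projectionDensity;   % 60
```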

Grid data can be downloaded for each SectionDataSet using the 3-D Grid Data Service. The service returns a zip file containing the volumetric data for density, intensity and/or energy in an uncompressed format with a simple text header file in MetaImage format. Structural annotation for each grid voxel can be obtained via the ReferenceSpace gridAnnotation volume file at 100 µm grid resolution.

Voxels with no data are assigned a value of "-1".

Examples:

Example Matlab code snippet to read in the 100 µm density grid volume:

%------------
% Download and unzip the density grid file for VISp SectionDataSet
% -----------

%  grid volume size
sizeGrid = [133, 81, 115];

% DENSITY = 3-D matrix of projection density grid volume
fid = fopen('11_wks_coronal_126862385/density.raw', 'r', 'l' );
DENSITY = fread( fid, prod(sizeGrid), 'float' );
fclose( fid );
DENSITY = reshape(DENSITY,sizeGrid);

% Display one coronal and one sagittal section
figure;imagesc(squeeze(DENSITY(73,:,:)));colormap(hot);caxis([0 1]);
figure;imagesc(squeeze(DENSITY(:,:,78)));colormap(hot);caxis([0 1]);

Comparing Projection Data Grids and Gene Expression Grids

Due to differences in section sampling density, projection data grids are at 100 µm resolution while gene expression grids are at 200 µm resolution. Upsampling with appropriate interpolation of the gene expression data is necessary in order to numerically compare the two types of data. When interpolating, "no data" (-1) voxels need to be handled explicitly.

Example Matlab code snippet to upsample gene expression grid with "no data" handling:

% Download and unzip density volume file for gene Rasd2 coronal SectionDataSet 73636089
mkdir('Rasd2_73636089');
urlwrite('http://api.brain-map.org/grid_data/download/74819249?include=density', 'temp.zip');
unzip('temp.zip','Rasd2_73636089');

% Download and unzip density volume file for BLAa injection SectionDataSet 113144533
mkdir('BLAa_113144533');
urlwrite('http://api.brain-map.org/grid_data/download/113144533?include=density', 'temp.zip');
unzip('temp.zip','BLAa_113144533');

% Gene expression grids are at 200 micron resolution.
geneGridSize = [67 41 58];
fid = fopen('Rasd2_73636089/density.raw', 'r', 'l'  );
Rasd2 = fread( fid, prod(geneGridSize), 'float' );
fclose(fid);
Rasd2 = reshape( Rasd2, geneGridSize );

% Projection grids are at 100 micron resolution
projectionGridSize = [133 81 115];
fid = fopen('BLAa_113144533/density.raw', 'r', 'l'  );
BLAa = fread( fid, prod(projectionGridSize), 'float' );
fclose(fid);
BLAa = reshape( BLAa, projectionGridSize );

% Upsample gene expression grid to same dimension as projection grid using linear interpolation
[xi,yi,zi] = meshgrid(1:0.5:41,1:0.5:67,1:0.5:58); %note: matlab transposes x-y
d = Rasd2; d(d<0) = 0; % fill in missing data as zeroes
Rasd2_100 = interp3(d ,xi,yi,zi,'linear');

% Handle "no data" (-1) voxels.
% Create a mask of "data" vs "no data" voxels and apply linear interpolation
m = zeros(size(Rasd2));
m(Rasd2  >= 0) = 1; mi = interp3(m,xi,yi,zi,'linear');

% Normalize data by dividing by interpolated mask. Assign value of "-1" to "no data" voxels.
Rasd2_100 = Rasd2_100 ./ mi;
Rasd2_100( mi <= 0 ) = -1;

% Create a merged image of one coronal plane;
gimg = squeeze(Rasd2_100(52,:,:)); gimg = max(0,gimg); gimg = gimg / 0.025; gimg = min(1,gimg);
pimg = squeeze(BLAa(52,:,:)); pimg = max(0,pimg); pimg = pimg / 0.8; pimg = min(1,pimg);
rgb = zeros([size(gimg),3]); rgb(:,:,1) = gimg; rgb(:,:,2) = pimg;
figure; image(rgb);

Figure: ISH SectionDataSet (id=73636089) for gene Rasd2 showing enriched expression in the striatum (left). Projection SectionDataSet (id=113144533) with injection in the anterior part of the basolateral amygdalar nucleus (BLAa) showing projection to the striatum and other brain areas (center). One coronal slice of the BLAa projection density grid (green) merged with the upsampled and interpolated Rasd2 expression density grid (red) (right).

Projection Structure Unionization

Projection signal statistics can be computed for each structure delineated in the reference atlas by combining, or unionizing, grid voxels with the same 3-D structural label. While the reference atlas is typically annotated at the lowest level of the ontology tree, statistics for upper-level structures can be obtained by combining the measurements of their hierarchical children. The unionization process also separates out the left versus right hemisphere contributions as well as the injection versus non-injection components.
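A minimal sketch of combining child measurements into a parent structure, assuming hypothetical per-child voxel counts and mean projection densities (the actual unionized values come from the pipeline):

```matlab
% Hypothetical measurements for three children of a parent structure
childVox     = [120 80 200];        % grid voxels per child
childDensity = [0.05 0.20 0.10];    % mean projection density per child

% Parent statistics: voxel-weighted combination of the children
parentVox     = sum(childVox);                               % 400
parentDensity = sum(childDensity .* childVox) / parentVox;   % 0.105
```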

Projection statistics are encapsulated as a ProjectionStructureUnionize object associated with one Structure, one Hemisphere (left, right or both) and one SectionDataSet. ProjectionStructureUnionize records can be downloaded via RMA.

Example:

ProjectionStructureUnionize data is used in the web application to display projection summary bar graphs.

Projection Grid Search Service

A projection grid search service has been implemented to allow users to search the entire dataset for experiments with specific projection profiles.

  • The Source Search function retrieves experiments by anatomical location of the injection site.
  • The Target Search function returns a ranked list of experiments by signal volume in the user-specified structure(s).
  • The Spatial Search function returns a ranked list of experiments by density of signal at the user-specified voxel location.
  • The Injection Coordinate Search function returns a ranked list of experiments by distance of their injection sites from a user-specified reference space coordinate.
  • The Correlation Search function enables the user to find experiments that have a similar spatial projection profile to a seed experiment when compared over a user-specified domain.

The projection grid search service is available through both the Web application and API.

Source Search

To perform a Source Search, a user specifies a set of source structures. The service returns all experiments for which either the primary injection structure or one of the secondary injection structures corresponds to one of the specified source structures or their descendants in the ontology. The search results can also be filtered by a specified list of transgenic lines.

See the connected service page for definitions of service::mouse_connectivity_injection_structure parameters.

Examples:

  • Source search for experiments with injection in the isocortex
    http://api.brain-map.org/api/v2/data/query.xml?criteria=
    service::mouse_connectivity_injection_structure[injection_domain$eqIsocortex]
    
  • Source search for experiments performed on wild-type specimens and injection in the isocortex
    http://api.brain-map.org/api/v2/data/query.xml?criteria=
    service::mouse_connectivity_injection_structure[injection_domain$eqIsocortex][transgenic_lines$eq0]
    
  • Source search for experiments performed on Syt6-Cre_KI148 cre-line with injection in the isocortex
    http://api.brain-map.org/api/v2/data/query.xml?criteria=
    service::mouse_connectivity_injection_structure[injection_domain$eqIsocortex][transgenic_lines$eq'Syt6-Cre_KI148']
    

The output of the source search is an XML list of objects. Each object represents one experiment and contains the experiment identifier, the primary injection structure, a list of any secondary injection structures, the injection coordinates, the injection volume and the transgenic line name.

Figure: Screenshot of source search results in the web application for experiments with injection in the isocortex. The injection location of each experiment is shown as a sphere on the 3D injection map.

Target Search

To perform a Target Search, the user specifies a set of target structures. The service returns a ranked list of experiments by signal volume in the target structures, subject to a minimum threshold. The target structure specification can be further refined by hemisphere. The search results can also be filtered by a list of source structures and/or a list of transgenic lines.

Example: Target search for experiments with projection signal in the target structure LGd (dorsal part of the lateral geniculate complex) and injection in the isocortex

Figure: Screenshot of target search results in the web application for experiments with projection in target structure LGd (dorsal part of the lateral geniculate complex) and injection in the isocortex. The injection location of each experiment is shown as a sphere on the 3D injection map.

Spatial Search

To perform a Spatial Search, a user selects a target location within the 3D reference space. The service returns a ranked list of experiments by signal density at the target location, returning only experiments with density greater than 0.1. The search results can also be filtered by a list of source structures and/or a list of transgenic lines.

Example: Spatial search for experiments with projection signal in a target location in VM (ventral medial nucleus of the thalamus) and injection in the isocortex

Figure: Screenshot of spatial search results in the web application for experiments with projection in target location within VM (ventral medial nucleus of the thalamus) and injection in the isocortex. Each line in the 3D map is the computationally generated path from the target location to injection of one experiment.

Injection Coordinate Search

To perform an Injection Coordinate Search, a user specifies a source location within the 3D reference space. The service returns a ranked list of experiments by distance of their injection sites from the specified source location. The search results can also be filtered by a list of source structures and/or a list of transgenic lines.

Example: Injection coordinate search for experiments with a source location in VM

Correlation Search

To perform a Correlation Search, the user selects a seed experiment and a domain over which the similarity comparison is to be made. All voxels belonging to any of the domain structures form the domain voxel set. Pearson's correlation coefficient is computed between the domain voxel set of the seed experiment and that of every other experiment in the product. The returned list is sorted by descending correlation coefficient.
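As a sketch, the correlation for one candidate experiment can be computed from two density grids restricted to the domain voxel set (ANOGD is the grid annotation matrix read in earlier; the grids here are synthetic):

```matlab
% Synthetic 100 micron density grids for the seed and one other experiment
seedGrid  = rand(133, 81, 115);
otherGrid = rand(133, 81, 115);

% Hypothetical domain: all voxels carrying a structural annotation
domain = ANOGD > 0;

% Pearson correlation coefficient over the domain voxel set
C = corrcoef(seedGrid(domain), otherGrid(domain));
r = C(1,2);
```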

Example: Correlation search with seed experiment 112670853
