Note

This page is reference documentation: it only describes the class signature, not how to use it. Please refer to the user guide for the big picture.

5.3.6. sammba.registration.TemplateRegistrator

class sammba.registration.TemplateRegistrator(template, brain_volume, template_brain_mask=None, dilated_template_mask=None, output_dir=None, caching=False, verbose=True, use_rats_tool=True, clipping_fraction=0.2, convergence=0.005, registration_kind='nonlinear')

Class for registering anatomical images, and possibly images from other modalities, from one animal to a given head template.

Parameters

template : str

Path to the head template image.

brain_volume : int

Volume of the brain in mm³, used for brain extraction. Typically 400 for a mouse and 1650 for a rat.

template_brain_mask : str or None, optional

Path to the template brain mask image, compliant with the given head template.

dilated_template_mask : str or None, optional

Path to a dilated head mask, compliant with the given head template. If None, the mask is set to the non-background voxels of the head template after one dilation.

output_dir : str, optional

Path to the output directory. If not specified, the current directory is used.

caching : bool, optional

If True, caching is used for all the registration steps.

verbose : int, optional

Verbosity level. Note that caching implies some verbosity in any case.

use_rats_tool : bool, optional

If True, the brain mask is computed using RATS Mathematical Morphology. Otherwise, a histogram-based brain segmentation is used.

clipping_fraction : float or None, optional

Clip level fraction, passed to nipype.interfaces.afni.Unifize to tune the bias correction step performed prior to brain mask segmentation. Only values between 0.1 and 0.9 are accepted. Smaller fractions tend to make the mask larger. If None, no unifization is done for brain mask computation.

convergence : float, optional

Convergence limit, passed to nipype.interfaces.afni.Allineate.

registration_kind : one of {‘rigid’, ‘affine’, ‘nonlinear’}, optional

The allowed transform kind from the anatomical image to the template.

Attributes

template_brain_

(str) Path to the brain-extracted file from the template image.

anat_brain_

(str) Path to the brain-extracted file from the anatomical image.

__init__(template, brain_volume, template_brain_mask=None, dilated_template_mask=None, output_dir=None, caching=False, verbose=True, use_rats_tool=True, clipping_fraction=0.2, convergence=0.005, registration_kind='nonlinear')

Initialize self. See help(type(self)) for accurate signature.
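
A minimal instantiation sketch for orientation; the template path, output directory, and parameter values below are hypothetical placeholders, not defaults shipped with sammba.

    from sammba.registration import TemplateRegistrator

    # Hypothetical paths: substitute your own template and output folder.
    registrator = TemplateRegistrator(
        template='/data/templates/head_template.nii.gz',
        brain_volume=400,                 # mouse brain volume, in mm³
        output_dir='/data/registration',
        caching=True,
        registration_kind='nonlinear')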

fit_anat(anat_file, brain_mask_file=None)

Estimates registration from anatomical to template space.
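
A hedged usage sketch, assuming a registrator instantiated as in the sketch above; the anatomical file path is a placeholder.

    # Estimate the anat-to-template transform; intermediate files are
    # written to output_dir.
    registrator.fit_anat('/data/mouse01/anat.nii.gz')

    # The brain-extracted anatomical image is then exposed as an attribute.
    print(registrator.anat_brain_)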

fit_modality(in_file, modality, slice_timing=True, t_r=None, prior_rigid_body_registration=None, reorient_only=False, voxel_size=None)

Estimates registration from the space of a given modality to the template space.

Parameters

in_file : str

Path to the modality image. An M0 file is expected for perfusion.

modality : one of {‘func’, ‘perf’}

Name of the modality.

slice_timing : bool, optional

If True, slice timing correction is performed.

t_r : float, optional

Repetition time, only needed for slice timing correction.

prior_rigid_body_registration : bool, optional

If True, a rigid-body registration of the anat to the modality is performed prior to the warp. Useful if the image headers have missing or wrong information. NOTE: prior_rigid_body_registration is deprecated since version 0.1 and will be removed in the next release. Use reorient_only instead.

reorient_only : bool, optional

If True, the rigid-body registration of the anat to the func is not performed; only reorientation is done.

voxel_size : 3-tuple or None, optional

The target voxel size of the registered modality image. If None, the voxel size is kept unchanged.
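
A sketch of a typical functional fit, assuming fit_anat has already been called as above; the file path and repetition time are hypothetical.

    # Estimate the func-to-template transform, with slice timing
    # correction using the given repetition time.
    registrator.fit_modality('/data/mouse01/func.nii.gz', 'func',
                             slice_timing=True,
                             t_r=1.0)  # hypothetical repetition time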

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

**fit_params : dict

Additional fit parameters.

Returns

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : mapping of string to any

Parameter names mapped to their values.
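
This method follows the scikit-learn estimator API; a brief sketch, reusing the registrator from the examples above:

    # Inspect the construction-time parameters.
    params = registrator.get_params()
    print(params['brain_volume'])
    print(params['registration_kind'])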

inverse_transform_towards_modality(in_file, modality, interpolation='wsinc5')

Transforms the given file from template space to modality space.

Parameters

in_file : str

Path to the file, in template space.

modality : one of {‘func’, ‘perf’}

Name of the modality.

interpolation : one of {‘nearestneighbour’, ‘trilinear’, ‘tricubic’, ‘triquintic’, ‘wsinc5’}, optional

The interpolation method used for the transformed file.

Returns

transformed_file : str

Path to the transformed file, in modality space.
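
A hedged sketch of a common use: bringing template-space atlas labels into the subject's functional space, assuming fit_modality was called with ‘func’. The atlas path is hypothetical; nearest-neighbour interpolation keeps the labels integer-valued.

    # Inverse-warp an atlas from template space to the subject's
    # functional space.
    labels_func = registrator.inverse_transform_towards_modality(
        '/data/templates/atlas_labels.nii.gz', 'func',
        interpolation='nearestneighbour')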

segment(in_file)

Bias field correction and brain extraction.
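
A minimal sketch with a hypothetical input path; the bias correction and extraction settings are those given at construction time (clipping_fraction, use_rats_tool, brain_volume).

    # Bias field correction followed by brain extraction.
    segmented = registrator.segment('/data/mouse01/anat.nii.gz')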

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params : dict

Estimator parameters.

Returns

self : object

Estimator instance.
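
A brief sketch of a scikit-learn style parameter update on the registrator from the examples above:

    # Switch to an affine registration and disable caching, in place.
    registrator.set_params(registration_kind='affine', caching=False)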

transform_anat_like(in_file, interpolation='wsinc5')

Transforms the given in_file from anatomical space to template space.

Parameters

in_file : str

Path to the file in the same space as the anatomical image.

interpolation : one of {‘nearestneighbour’, ‘trilinear’, ‘tricubic’, ‘triquintic’, ‘wsinc5’}, optional

The interpolation method used for the transformed file.

Returns

transformed_file : str

Path to the transformed file, in template space.
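
A hedged sketch: propagating a region of interest drawn on the subject's anatomical image to template space. The ROI path is hypothetical; nearest-neighbour interpolation preserves integer labels.

    # Warp an anatomical-space ROI to template space.
    roi_template = registrator.transform_anat_like(
        '/data/mouse01/anat_roi.nii.gz', interpolation='nearestneighbour')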

transform_modality_like(in_file, modality, interpolation='wsinc5', voxel_size=None)

Transforms the given file from the space of the given modality to the template space. If the given modality has been corrected for EPI distortions, the same correction is applied.

Parameters

in_file : str

Path to the file in the same space as the modality image.

modality : one of {‘func’, ‘perf’}

Name of the modality.

interpolation : one of {‘nearestneighbour’, ‘trilinear’, ‘tricubic’, ‘triquintic’, ‘wsinc5’}, optional

The interpolation method used for the transformed file.

voxel_size : 3-tuple or None, optional

The target voxel size. If None, the final voxel size will match the template's.

Returns

transformed_file : str

Path to the transformed file, in template space.
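
A final hedged sketch: resampling a functional image into template space at a chosen resolution, assuming fit_modality was called with ‘func’. The path and voxel size are hypothetical.

    # Warp a functional image to template space, resampled to 0.2 mm
    # isotropic voxels.
    func_template = registrator.transform_modality_like(
        '/data/mouse01/func.nii.gz', 'func',
        voxel_size=(0.2, 0.2, 0.2))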