Note

This page is reference documentation. It explains only the class signature, not how to use it. Please refer to the user guide for the big picture.

5.3.7. sammba.registration.Coregistrator

class sammba.registration.Coregistrator(brain_volume=None, output_dir=None, caching=False, verbose=True, use_rats_tool=True, clipping_fraction=0.2)

Class for registering the anatomical image to the perfusion/functional images of one animal, in native space.

Parameters

brain_volume : int or None, optional

Volume of the brain in mm3 used for brain extraction. Typically 400 for mouse and 1650 for rat. Used only if prior rigid body registration is needed.

output_dir : str or None, optional

Path to the output directory. If None, current directory is used.

caching : bool, optional

If True, caching is used for all the registration steps.

verbose : int, optional

Verbosity level. Note that caching implies some verbosity in any case.

use_rats_tool : bool, optional

If True, brain mask is computed using RATS Mathematical Morphology. Otherwise, a histogram-based brain segmentation is used.

clipping_fraction : float or None, optional

Clip level fraction passed to nipype.interfaces.afni.Unifize, to tune the bias field correction step performed prior to brain mask segmentation. Only values between 0.1 and 0.9 are accepted. Smaller fractions tend to make the mask larger. If None, no unifization is done for brain mask computation.

__init__(brain_volume=None, output_dir=None, caching=False, verbose=True, use_rats_tool=True, clipping_fraction=0.2)

Initialize self. See help(type(self)) for accurate signature.
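A minimal construction sketch: the brain volume assumes a mouse dataset and the output directory name is a placeholder, not a prescribed location.

>>> from sammba.registration import Coregistrator
>>> # brain_volume=400 assumes a mouse brain; 'coreg_out' is a placeholder path
>>> coregistrator = Coregistrator(brain_volume=400, output_dir='coreg_out',
...                               caching=True)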

fit_modality(in_file, modality, slice_timing=True, t_r=None, prior_rigid_body_registration=None, reorient_only=False, brain_mask_file=None)

Prepare and perform coregistration.

Parameters

in_file : str

Path to the raw modality image.

modality : one of {‘perf’, ‘func’}

Name of the MRI modality.

slice_timing : bool, optional

If True, slice timing correction is performed.

t_r : float, optional

Repetition time, only needed for slice timing correction.

prior_rigid_body_registration : bool, optional

If True, a rigid-body registration of the anat to the modality is performed prior to the warp. Useful if the image headers have missing or wrong information. NOTE: prior_rigid_body_registration is deprecated since version 0.1 and will be removed in the next release. Use reorient_only instead.

reorient_only : bool, optional

If True, the rigid-body registration of the anat to the func is not performed and only reorientation is done.

Returns

self : Coregistrator

The coregistrator itself.
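A usage sketch for a functional run, assuming the anatomical image has already been fitted as described in the user guide; the file name and repetition time are placeholders.

>>> # 'func.nii.gz' is a placeholder path; t_r=1.0 is an illustrative repetition time
>>> coregistrator.fit_modality('func.nii.gz', 'func',
...                            slice_timing=True, t_r=1.0)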

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

**fit_params : dict

Additional fit parameters.

Returns

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : mapping of string to any

Parameter names mapped to their values.
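Continuing the construction sketch above, the parameters set at initialization can be inspected as a plain dictionary:

>>> params = coregistrator.get_params()
>>> params['brain_volume']
400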

segment(in_file)

Perform bias field correction and brain extraction.
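A sketch of a standalone segmentation call; 'anat.nii.gz' is a placeholder for the raw anatomical image, and intermediate files are expected under the configured output_dir.

>>> # placeholder path for the raw anatomical image
>>> coregistrator.segment('anat.nii.gz')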

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params : dict

Estimator parameters.

Returns

self : object

Estimator instance.
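For example, to switch the same estimator to rat settings (the values are illustrative):

>>> # set_params returns the estimator itself, so the call can be chained
>>> coregistrator = coregistrator.set_params(brain_volume=1650, use_rats_tool=True)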

transform_modality_like(apply_to_file, modality)

Apply the modality coregistration to a file in the modality space.
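A sketch assuming fit_modality has already been run for the 'func' modality; the file name is a placeholder for an image already in functional space.

>>> # placeholder path for an image in the functional space of the same animal
>>> coregistrator.transform_modality_like('mean_func.nii.gz', 'func')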