
next up previous contents [cite] [home]
Next: Classification Up: Diffusion Tensor Imaging Previous: Rigid Registration   Contents


Nonrigid Registration

Nonrigid registration is the general term for an algorithm for the alignment of data sets that are mismatched in a nonlinear or nonuniform manner. The term ``matching'' is used to refer to any process that determines correspondences between data sets [32].

This chapter discusses all the methods that have been used to align two tensor fields with a nonrigid registration. The following series of steps is used:

  1. Extract points with high local structure in one data set.
  2. For each extracted point find the best corresponding point in the second data set.
  3. Check the displacement for the selected points and remove overlapping displacements.
  4. Interpolate the displacement for the selected and matched points to get a displacement field for the whole data set.
  5. Apply the interpolated displacement field on the second data set.
  6. Optionally improve the alignment by using multiple resolutions or looping.
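The steps above can be sketched end to end for small 2D scalar images. The function names and the simplified stand-ins below (plain gradient magnitude instead of the cornerness measure, nearest-neighbour densification instead of Kriging, no crossing removal and no multi-resolution loop) are illustrative assumptions, not the implementation described in the following sections:

```python
import numpy as np

def extract_points(I_S, thresh):
    """Step 1: keep voxels with high local structure; plain gradient
    magnitude stands in for the cornerness measure developed below."""
    g = np.hypot(*np.gradient(I_S.astype(float)))
    return [tuple(p) for p in np.argwhere(g > thresh)]

def find_match(I_S, I_M, p, r=2, hw=1):
    """Step 2: brute-force search; compare the (2*hw+1)^2 window around p
    in I_S with every candidate window in the search region of I_M."""
    i, j = p
    ref = I_S[i - hw:i + hw + 1, j - hw:j + hw + 1]
    best, best_d = (0, 0), np.inf
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            qi, qj = i + di, j + dj
            cand = I_M[max(qi - hw, 0):qi + hw + 1, max(qj - hw, 0):qj + hw + 1]
            if cand.shape != ref.shape:
                continue                      # window would leave the image
            d = ((ref - cand) ** 2).sum()     # sum of squared differences
            if d < best_d:
                best, best_d = (di, dj), d
    return best                               # displacement at p

def interpolate(sparse_disp, shape):
    """Step 4: densify the sparse displacements; nearest neighbour stands
    in for the Kriging interpolation used in the text."""
    pts = np.array(list(sparse_disp))
    U = np.zeros(shape + (2,))
    for idx in np.ndindex(shape):
        nearest = pts[np.argmin(((pts - np.array(idx)) ** 2).sum(axis=1))]
        U[idx] = sparse_disp[tuple(nearest)]
    return U

def apply_displacement(I_M, U):
    """Step 5: backward warping; every output voxel reads I_M at its
    matched source position, so no position stays unassigned."""
    out = np.zeros_like(I_M)
    for idx in np.ndindex(I_M.shape):
        src = np.clip(np.round(np.array(idx) + U[idx]).astype(int),
                      0, np.array(I_M.shape) - 1)
        out[idx] = I_M[tuple(src)]
    return out

def register(I_S, I_M, thresh=0.6):
    """Steps 1-5 chained; step 3 (crossing removal) and step 6
    (multi-resolution, looping) are omitted in this sketch."""
    pts = extract_points(I_S, thresh)
    if not pts:
        return I_M.copy()
    disp = {p: find_match(I_S, I_M, p) for p in pts}
    return apply_displacement(I_M, interpolate(disp, I_S.shape))

# A square shifted down by one row is pulled back exactly:
I_S = np.zeros((12, 12)); I_S[4:8, 4:8] = 1.0
I_M = np.roll(I_S, 1, axis=0)
I_out = register(I_S, I_M)
print(np.array_equal(I_out, I_S))  # True
```

With the shifted square, the four corner points are extracted, each matches with displacement (1, 0), and the backward warp reproduces the stationary image exactly.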

For simplicity, in this section any field is called a three-dimensional image $I$ with values $I(i,j,k)$ at position $(i,j,k)^T$. The values $I(i,j,k)$ can therefore be of any of the types scalar, tensor or eigensystem. The stationary image is $I_S$ and the moving image $I_M$. The transformation from $I_M$ to $I_S$ is denoted $T$:

\begin{displaymath}
I_S = T (I_M)
\end{displaymath} (7.1)

More precisely the goal is to find
\begin{displaymath}
I_M' = \tilde{T} (I_M)
\end{displaymath} (7.2)

which is closest to $I_S$:
\begin{displaymath}
I_S \simeq I_M' = \tilde{T} (I_M)
\end{displaymath} (7.3)

The displacement field is

\begin{displaymath}
U = \left(\begin{array}{c}u\\ v\\ w\end{array}\right)
  = \left(\begin{array}{c}u(i',j',k')\\ v(i',j',k')\\ w(i',j',k')\end{array}\right)
  \;\;\;, \quad \forall \;\;i',j',k'
\end{displaymath} (7.4)

so that
\begin{eqnarray*}
I_S(i,j,k) &\simeq& I_M(i'+u, \;j'+v,\; k'+w)\\
           &\simeq& I_M(i'+ u(i',j',k'),\;j'+ v(i',j',k'),\;k' + w(i',j',k'))
\end{eqnarray*} (7.5)

A point $P$ at position $(i,j,k)^T$ in the stationary image $I_S$ has a neighborhood $\mathcal{N}_S(P)$, a small window of size $\sigma_S \times \sigma_S$. A neighborhood of a point $Q$ at position $(i',j',k')^T$ in $I_M$ of size $\sigma_M \times \sigma_M$ is referred to as $\mathcal{N}_M(Q)$. These notation conventions are summarized in Figure 7.1. Furthermore a collection of $n$ points $\mathcal{L} = \{P_i \vert P_i \in I, i=1 \ldots n\}$ with $\mathcal{L} \subset I$ is denoted $\mathcal{L}_P(I)$.

Figure 7.1: Notation conventions used
\includegraphics[width = 0.7\textwidth]{images/matching1.eps}

Every point $Q$ for which $U(Q)$ is known can be displaced to a new location $P=Q+U(Q)$. If $U(Q)$ is known for every point $Q \in I_M$ the whole image $I_M$ can be displaced. The resulting image $I_M'$, though, will have points $\bar{P}$ that have not been assigned a value from $I_M$. That is, the function $T$ is not directly invertible, since $U$ is not necessarily onto and not necessarily one-to-one.

The expected result, however, is an image $I_M'$ that has values everywhere. Instead of interpolating the points $P$ to obtain the missing values at $\bar{P}$, the matching procedure can be inverted to get a function $T^{-1}$. This second approach is used here: points are selected in the stationary image $I_S$. Once the corresponding points have been found and the displacement field interpolated, this yields a displacement $U^{-1}$, approximated by $-U$, from the stationary to the moving image. Now for every position in the resulting image $I_M'$ the position where its value came from is known and the transform is onto, so that $\mathcal{L}_{\bar{P}}(I_M')$ is empty.
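The asymmetry just described can be seen on a toy 2D example (the image values and the constant displacement are arbitrary): forward warping leaves unassigned positions $\bar{P}$, while backward warping with the approximation $U^{-1} \approx -U$ assigns a value everywhere:

```python
import numpy as np

I_M = np.arange(16.0).reshape(4, 4)   # moving image
U = np.ones((4, 4, 2), int)           # displace every voxel by (+1, +1)

# Forward warping: push each source voxel to its target. Targets that no
# voxel maps to keep the marker value -1 (the set of points \bar{P}).
fwd = np.full_like(I_M, -1.0)
for i, j in np.ndindex(I_M.shape):
    ti, tj = i + U[i, j, 0], j + U[i, j, 1]
    if 0 <= ti < 4 and 0 <= tj < 4:
        fwd[ti, tj] = I_M[i, j]

# Backward warping with U^{-1} approximated by -U: every output voxel
# pulls a value from I_M, so no position stays unassigned.
bwd = np.empty_like(I_M)
for i, j in np.ndindex(I_M.shape):
    si = min(max(i - U[i, j, 0], 0), 3)
    sj = min(max(j - U[i, j, 1], 0), 3)
    bwd[i, j] = I_M[si, sj]

print(np.count_nonzero(fwd == -1), np.count_nonzero(bwd == -1))  # 7 0
```

The forward pass leaves the first row and first column unassigned (7 positions); the backward pass fills the whole grid.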

Point Extraction

To find a correspondence between two points, both points must be uniquely identifiable in a certain neighborhood, so that the correspondence is unambiguous. The term ``certain neighborhood'' means a region that is large enough that the corresponding point certainly lies in it, but small enough that no two equal correspondences are present. This is well known as the ``aperture problem''. This section assumes that this constraint is met.

A point on a line, for example, cannot be unambiguously matched to a point on a line in the second image if no additional information is provided. A corner, in contrast, has a unique counterpart in both images and can therefore be matched uniquely, provided of course that the corner is present in both images. A measure of cornerness quantifies how much structure there is around a certain point.

In the scalar case the derivatives of the image $I$ are computed and the outer product of the resulting gradient vector with itself is built, which is by definition the correlation matrix $H$. With the derivatives

\begin{displaymath}
I_x = \frac{\partial}{\partial x}I;\;\;\; I_y = \frac{\partial}{\partial y}I;\;\;\; I_z = \frac{\partial}{\partial z}I
\end{displaymath} (7.6)

$H$ becomes
\begin{displaymath}
H = \left(\begin{array}{c} I_x\\ I_y\\ I_z\end{array}\right)
    \left(\begin{array}{ccc} I_x & I_y & I_z\end{array}\right)
  = \left(\begin{array}{ccc}
I_x^2 & I_xI_y & I_xI_z \\
I_yI_x & I_y^2 & I_yI_z \\
I_zI_x & I_zI_y & I_z^2 \\
\end{array}\right)
\end{displaymath} (7.7)

which is a symmetric matrix whose determinant always equals zero. A classical measure of local structure is to use the local expectation of the values in $H$, i.e. averaging the components of $H$ over a window of size $\sigma_N \times \sigma_N$:
\begin{displaymath}
\widehat{H} = \left(\begin{array}{ccc}
\widehat{I_x^2} & \widehat{I_xI_y} & \widehat{I_xI_z} \\
\widehat{I_yI_x} & \widehat{I_y^2} & \widehat{I_yI_z} \\
\widehat{I_zI_x} & \widehat{I_zI_y} & \widehat{I_z^2} \\
\end{array}\right)
\end{displaymath} (7.8)

This matrix may have a nonzero determinant, since in general $\widehat{I_x^2}\,\widehat{I_y^2} \neq \widehat{I_yI_x}\,\widehat{I_xI_y}$, as long as $\widehat{I_x}, \widehat{I_y}, \widehat{I_z} \neq 0$. An eigensystem decomposition of $\widehat{H}$ allows a classification of the point under inspection as edge, corner or flat region [1]. For the two dimensional case this is illustrated in Figure 7.2.

Figure 7.2: Eigensystem decomposition of the correlation matrix
\includegraphics[width = 0.3\textwidth]{images/lambda.eps}

$\widehat{H}$ can be seen as a tensor of order 2. The eigenvalues can be illustrated by an ellipsoid: the rounder the ellipsoid, the greater the cornerness, i.e. the measure close-sphere in Equation 4.15 is a measure for cornerness in this case.

For any symmetric matrix the trace is the sum of the eigenvalues

\begin{displaymath}
trace(H) = \sum_i \lambda_i(H)
\end{displaymath} (7.9)

as can be seen by using Equation 4.8. The determinant is the product of the eigenvalues
\begin{displaymath}
det(H) = \prod_i \lambda_i(H)
\end{displaymath} (7.10)

Using only these measurements to obtain a value for the cornerness avoids doing an eigensystem decomposition.

Ruiz and co-workers [15] propose to use points above a threshold

\begin{displaymath}
t_1 = \frac{det(\widehat{H})}{trace(\widehat{H})}
\end{displaymath} (7.11)

and remove false detections by thresholding the result with
\begin{displaymath}
t_2 = \frac{det(\widehat{H})}{trace(\frac{1}{N}\widehat{H})^N}
\end{displaymath} (7.12)

with $N$ the dimension of the image, i.e. usually 2 or 3. The value of $t_2$ varies between zero and one depending on the shape of the ellipsoid, one corresponding to a perfect sphere.
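Both thresholds can be computed from the averaged matrix $\widehat{H}$ with plain array operations and no eigensystem decomposition. A sketch for the 2D case; the box-average radius and the test image are illustrative choices, not the parameters used in nonrigidreg:

```python
import numpy as np

def box_average(X, r=1):
    """Local expectation: mean over a (2r+1)x(2r+1) window."""
    P = np.pad(X, r, mode='edge')
    out = np.zeros(X.shape)
    for di in range(2 * r + 1):
        for dj in range(2 * r + 1):
            out += P[di:di + X.shape[0], dj:dj + X.shape[1]]
    return out / (2 * r + 1) ** 2

def shape_measure(I, r=2, eps=1e-12):
    """t_2 of Equation 7.12 from det and trace of H-hat only (N = 2)."""
    Ix, Iy = np.gradient(I.astype(float))
    # Averaged entries of the correlation matrix H-hat (Equation 7.8).
    Hxx = box_average(Ix * Ix, r)
    Hyy = box_average(Iy * Iy, r)
    Hxy = box_average(Ix * Iy, r)
    det = Hxx * Hyy - Hxy * Hxy
    trace = Hxx + Hyy
    return det / ((trace / 2.0) ** 2 + eps)  # 1 for a sphere, 0 for edges

# A bright square: t_2 is large only near the four corners and stays
# near zero along the edges and in flat regions.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
t2 = shape_measure(img)
print(bool(t2.max() <= 1.0))  # True
```

By the arithmetic-geometric mean inequality $det(\widehat{H}) \le (trace(\widehat{H})/N)^N$ for a positive semidefinite matrix, so the measure never exceeds one.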

Here a modified version is used in which the two thresholds are incorporated into one formula. The structure $S$ is then defined as

\begin{displaymath}
S = \frac{N\cdot \sqrt[N]{det(\widehat{H})}}{trace(\widehat{H}) + \sigma}
\end{displaymath} (7.13)

where $\sigma$ can usually be set to 1%, i.e. 0.01. $S$ then takes values between zero and close to one.

Using the expectation causes blurring of the point-selection, since high values are diffused. The trace of $H$ has a nonzero value without averaging and is an edge detector [1], as

\begin{displaymath}
trace(H) = I_x^2 + I_y^2 + I_z^2
\end{displaymath} (7.14)

There are now several options to improve the point-selection. Before the process is started, an anisotropic filter can be applied to enhance edges. After a point-selection as described above, the result can be masked with the trace of $H$ (not $\widehat{H}$!) to undo the blurring effect of the expectation computation. Finally, the local maxima of the function in Equation 7.13 can be selected within a region to obtain single voxel positions.

In the case of tensor fields the points should be extracted from a structure with six independent values. For each independent component of the tensor a matrix $\widehat{H_i}$ can be computed, and the sum of these matrices used as the final matrix $\widehat{H}$:

\begin{displaymath}
\widehat{H} = \sum_{i=1}^6 \widehat{H_i} =
\left(\begin{array}{ccc}
\sum\widehat{h_{ixx}} & \sum\widehat{h_{ixy}} & \sum\widehat{h_{ixz}}\\
\sum\widehat{h_{iyx}} & \sum\widehat{h_{iyy}} & \sum\widehat{h_{iyz}}\\
\sum\widehat{h_{izx}} & \sum\widehat{h_{izy}} & \sum\widehat{h_{izz}}\\
\end{array}\right)
\end{displaymath} (7.15)

Expectation and summation commute, i.e.

\begin{displaymath}
\widehat{H} = \sum_{i=1}^6 \widehat{H_i} = \widehat{\sum_{i=1}^6 H_i}
\end{displaymath} (7.16)

The collection of points that have enough structure, and have therefore been selected, is referred to as $\mathcal{M}(I_S)$.

The parameters for the point-selection in the program nonrigidreg are described in Table A.3 on page [*].


Matching

The matching process involves two steps: first finding the best corresponding points in the second image, and second locally optimizing the displacements found.

For each selected point $P$ in $I_S$ a neighborhood $\mathcal{N}_S(P)$ is selected. At the same position in the moving image $I_M$ a search window $\mathcal{N}_M(P)$ is selected, i.e. an area where the corresponding point is assumed to lie. For each point $Q \in \mathcal{N}_M(P)$ in this search window a neighborhood $\mathcal{N}_S(Q)$ is compared to the neighborhood $\mathcal{N}_S(P)$, and a value measuring the result of the comparison is assigned to $Q$. Two different measurements of similarity for scalar data are implemented.

Maximal normalized cross correlation

The normalized cross correlation (NCC) between two windows $\mathcal{N}_S(P)$ and $\mathcal{N}_S(Q)$ of the same size $\sigma_S \times \sigma_S$ is defined as

\begin{displaymath}
\mbox{NCC}(P, Q) = \frac{\sum_{k\in\mathcal{N}_S} I_S(k)\cdot I_M(k)}
{\sqrt{\sum_{k\in\mathcal{N}_S} I_S^2(k)}\;\sqrt{\sum_{k\in\mathcal{N}_S} I_M^2(k)}}
\end{displaymath} (7.17)

This is computed for every position $Q \in \mathcal{N}_M$. The position where the NCC is maximal, i.e. closest to one, is the best match and is therefore selected as the correspondence.
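A minimal sketch of this search for 2D scalar windows; the window and search-window sizes are arbitrary, and the eps guard against empty windows is an implementation detail:

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Normalized cross correlation of two equally sized windows (Eq. 7.17)."""
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum()) * np.sqrt((b * b).sum()) + eps))

def best_match(I_S, I_M, p, half_s=2, half_m=4):
    """Compare the window around p in I_S with every candidate window in
    the search region of I_M and keep the position of maximal NCC."""
    i, j = p
    ref = I_S[i - half_s:i + half_s + 1, j - half_s:j + half_s + 1]
    best_q, best_v = p, -np.inf
    for qi in range(i - half_m, i + half_m + 1):
        for qj in range(j - half_m, j + half_m + 1):
            cand = I_M[qi - half_s:qi + half_s + 1,
                       qj - half_s:qj + half_s + 1]
            if cand.shape != ref.shape:
                continue                  # window would leave the image
            v = ncc(ref, cand)
            if v > best_v:
                best_q, best_v = (qi, qj), v
    return best_q, best_v

# I_M is I_S shifted by (2, 1); the maximal NCC recovers exactly that.
rng = np.random.default_rng(0)
I_S = rng.random((20, 20))
I_M = np.roll(np.roll(I_S, 2, axis=0), 1, axis=1)
q, v = best_match(I_S, I_M, (10, 10))
print(q)  # (12, 11)
```

At the true correspondence the two windows are identical, so the NCC reaches its maximum of one there (Cauchy-Schwarz bounds it by one everywhere else).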

Least square error (LSE)

Here the distance between the image values is used as the measure

\begin{displaymath}
\mbox{LSE}(P,Q) = \sum_{k\in\mathcal{N}_S} \Vert I_S(k) - I_M(k)\Vert
\end{displaymath} (7.18)

and the minimal value over all $Q \in \mathcal{N}_M(P)$ determines the location of the matching point.

Search Strategy

In both cases the search strategy is brute force, that is, no local optimization is done to find the best match. The windows that are compared, $\mathcal{N}_S(P)$ and $\mathcal{N}_S(Q)$, can be weighted with a Gaussian function prior to comparison. This makes the matching less sensitive to the size of the window and increases the ``importance'' of the center points $P$ and $Q$.

Instead of simply selecting the best matching value, the resulting vectors NCC($P,Q$) and LSE($P,Q$) are sorted so that NCC($P,0$) and LSE($P,0$) are the best and NCC($P, \sigma_S^2$) and LSE($P, \sigma_S^2$) the worst matches. Before accepting a match the sorted vectors can be analyzed. If the best and worst match are too close to each other, i.e. NCC($P,0$) $< 2\cdot$NCC($P, \sigma_S^2$), there is not enough structure in the search window $\mathcal{N}_M(P)$ and the match should be ignored. Furthermore, if there are several equally or almost equally good matches, the displacement field in the neighborhood should be considered and the matching should additionally be based on the smoothness of the overall displacement field. Only the first approach has been implemented.

In a second step, when a correspondence has been found for all points $P \in \mathcal{M}(I_S)$, the displacements are checked. Overlapping displacements, i.e. displacement vectors that cross each other, are eliminated, since the tissue of a subject should not be folded. Whenever two displacements cross each other the shorter displacement is kept and the longer one removed.
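For 2D displacement vectors this check can be sketched with a standard proper-intersection test (the sample matches are made up):

```python
from math import dist

def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses(p1, q1, p2, q2):
    """True if segment p1-q1 properly intersects segment p2-q2."""
    return (orient(p1, q1, p2) * orient(p1, q1, q2) < 0 and
            orient(p2, q2, p1) * orient(p2, q2, q1) < 0)

def remove_crossings(matches):
    """matches: list of (P, Q) pairs, i.e. displacements P -> Q in 2D.
    Whenever two displacements cross, the shorter one is kept and the
    longer one removed, so the tissue is never folded."""
    kept = []
    for m in sorted(matches, key=lambda m: dist(*m)):  # shortest first
        if not any(crosses(*m, *k) for k in kept):
            kept.append(m)
    return kept

matches = [((0, 0), (3, 3)),   # long displacement
           ((0, 2), (2, 0)),   # shorter, crosses the first one
           ((5, 5), (6, 5))]   # independent, kept
print(len(remove_crossings(matches)))  # 2
```

Processing shortest-first guarantees that in any crossing pair the shorter displacement survives.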


Interpolation

In order to obtain a displacement field for the whole volume, the sparse displacements obtained at single locations have to be interpolated. This section describes the selected interpolation method, Kriging. It is assumed that the sparse displacement field stems from an unknown underlying random field. References [35] and [38] give a good introduction and a simple example of how to use Kriging.

Kriging interpolation originates from geostatistics and is known to be the best linear unbiased estimator because it is theoretically capable of minimizing the estimation error variance while being a completely unbiased estimation procedure [34].

Kriging is a modified linear regression technique that estimates a value at a point by assuming that the value is spatially related to the known values in a neighborhood near that point. Kriging computes the value for the unknown data point as a weighted linear sum of known data values. The weights are chosen to minimize the estimation error variance and to maintain unbiasedness in the sampling. Unlike other techniques for scalar values, Kriging bases its estimates upon a dynamic, not static, neighborhood point configuration and treats those points as regionalized variables instead of random variables. Regionalized variables assume the existence of regions of influence in the data. In Kriging each region is analyzed to determine the correlation or interdependence among the data in the region, and this is encoded through a function called a variogram. For the unknown value $\hat{Z}$ at the position $P$ within the neighborhood of known points $P_i$ with known values $Z_i(P_i)$ the basic Kriging equation is:

\begin{displaymath}
\hat{Z}(P) = \sum_{i=1}^n w_i Z_i(P_i)
\end{displaymath} (7.19)

$Z$ is the actual value at the point $P$, and $n$ is the number of known points used to compute $\hat{Z}$. The $Z_i$'s are the regionalized variables and the $w_i$'s the weights. Unlike other techniques that also use a weighted sum, in Kriging the weights are not selected based solely upon the distance between sampled and unsampled points. Kriging does not assume that the variability of the data is linear [33].

Optimal weights are determined by enforcing that the error expectation of the estimate be zero

\begin{displaymath}
E(\hat{Z}-Z) = 0
\end{displaymath} (7.20)

and that the error variance be minimal

\begin{displaymath}
V\!\!AR(\hat{Z}-Z)^2 = \mbox{minimal}
\end{displaymath} (7.21)

where $E$ is the expected value or mean and $V\!\!AR(\hat{Z}-Z)^2$ the mean-square-error of the dissimilarity between the two variables $\hat{Z}$ and $Z$. These two conditions make $\hat{Z}$ the best linear unbiased estimator and are the base equations from which the Kriging system of equations is derived. Furthermore, the unbiasedness implies that the weights must sum to one:

\begin{displaymath}
\sum_{i=1}^n w_i = 1
\end{displaymath} (7.22)

As an exact interpolator, Kriging predicts known values with zero error. Using the method of Lagrange multipliers it is possible to obtain a linear system of equations for the weights $w_i$ of the estimator, where $\gamma(\parallel P_i - P_j\parallel)$ is the evaluation of the variogram between the points $P_i$ and $P_j$ and $P_0$ is the position to be estimated:

\begin{displaymath}
\left( \begin{array}{ccccc}
\gamma(0) & \gamma(\parallel P_1 - P_2 \parallel) & \cdots & \gamma(\parallel P_1 - P_n \parallel) & 1\\
\gamma(\parallel P_2 - P_1 \parallel) & \gamma(0) & \cdots & \gamma(\parallel P_2 - P_n \parallel) & 1\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
\gamma(\parallel P_n - P_1 \parallel) & \gamma(\parallel P_n - P_2 \parallel) & \cdots & \gamma(0) & 1\\
1 & 1 & \cdots & 1 & 0
\end{array} \right)
\left( \begin{array}{c} w_1\\ w_2\\ \vdots\\ w_n\\ \lambda \end{array} \right)
=
\left( \begin{array}{c}
\gamma(\parallel P_0 - P_1 \parallel)\\
\gamma(\parallel P_0 - P_2 \parallel)\\
\vdots\\
\gamma(\parallel P_0 - P_n \parallel)\\
1
\end{array} \right)
\end{displaymath} (7.23)
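The system of Equation 7.23 is small and can be solved directly. A sketch for the linear variogram $\gamma(d) = d$; the point set and values are made up, and no nugget or sill is modelled:

```python
import numpy as np

def kriging_weights(points, p0, gamma):
    """Set up and solve the Kriging system (Equation 7.23); the last
    row/column carries the Lagrange multiplier enforcing sum(w_i) = 1."""
    n = len(points)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(np.linalg.norm(points[i] - points[j]))
        b[i] = gamma(np.linalg.norm(p0 - points[i]))
    A[n, :n] = 1.0                       # unbiasedness row ...
    A[:n, n] = 1.0                       # ... and column
    b[n] = 1.0
    return np.linalg.solve(A, b)[:n]     # drop the multiplier

def krige(points, values, p0, gamma=lambda d: d):   # linear variogram
    """Equation 7.19: weighted linear sum of the known values."""
    return float(kriging_weights(points, p0, gamma) @ values)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
# Kriging is an exact interpolator: a known point reproduces its value.
print(round(krige(pts, vals, pts[1]), 6))  # 2.0
```

At a known point the weight vector degenerates to the corresponding unit vector, which is the exact-interpolation property noted above.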

Variogram Models

The variogram expresses the variability of a spatial process as a function of distance and direction. Suppose the data is collected according to a random spatial process in $\mathbb{R}^n$, i.e. we observe $Y(P_i)$, $P_i \in \mathbb{R}^n$. It is assumed that $V\!\!AR(Y(P_i) - Y(P_j))$ is only a function of the distance vector $P_i - P_j$. The process is said to be isotropic if it only depends upon $\parallel P_i - P_j \parallel$, the Euclidean distance between sites. Then the function $2\gamma(\parallel P_i - P_j \parallel) = V\!\!AR(Y(P_i) - Y(P_j))$ is called the variogram, while $\gamma(\parallel P_i - P_j\parallel)$ is the semivariogram. Typically the variogram is assumed to increase with the distance $d = \parallel P_i - P_j \parallel$, reflecting the assumption that pairs of observations closer in space tend to exhibit less variability than pairs further apart. It is often the case that after a certain distance, called the range $a$, $\gamma$ levels out at a value called the sill. The range is therefore the distance beyond which the deviation in the values does not depend on distance, and hence beyond which values are no longer correlated.

In addition, there may be a discontinuity at the origin, the so-called nugget effect. This means that the fitted model does not pass through the origin but intersects the y-axis at a positive value $\gamma(0) = \tau^2$. This quantity is an estimate of the residual, the spatially uncorrelated noise associated with any value of a random variable $Z$ at $P$. This terminology is illustrated in Figure 7.3.

The relationship between the variogram and the covariance is given by [36]

\begin{displaymath}
2\gamma(d, \tau^2, \sigma^2, a) = 2(\tau^2 + \sigma^2(1- \rho(d,a)))
\end{displaymath} (7.24)

where the sill is $\tau^2 + \sigma^2$, $a$ the range and $\rho(d, a)$ a parametric correlation function. The term semivariogram is also used for the graphical display of $\gamma$, i.e. semivariance, versus distance.

Figure 7.3: Variogram parameters
\includegraphics[width = 0.6\textwidth]{images/varnomen.eps}

Table 7.1 is a collection of different parametric correlation functions. Figure 7.4 shows the semivariogram (Equation 7.24) for the functions in Table 7.1 with $\tau^2 = 0$ and $\sigma^2 = 1$.

Table 7.1: Common Parametric Correlation Forms ([36] and [35]).
Name $1-\rho(d, a)$  
Linear $\frac{d}{a}$  
Exponential $1-e^{-\frac{d}{a}}$  
Gaussian $1-e^{-(\frac{d}{a})^2}$  

In most cases the variogram is unknown and is approximated by a process called structural analysis. As there is no prior information on the resulting displacement field this approach is not applicable. Instead different variogram models are implemented so that the best model can be empirically determined by comparing the results of the registration process.

Figure 7.4: Plot of common parametric correlation forms $1-\rho(d, a)$ used for Kriging interpolation
\includegraphics[width = 0.65\textwidth]{images/varmodel.eps}

An example interpolation is shown where a synthetic example is randomly sampled at 10% and interpolated using the linear variogram model and two different neighborhoods (see Figure 7.5).

Figure 7.5:     Example of using Kriging interpolation. (a) Original image; (b) Randomly selected 10% of the points; (c) Interpolation using linear Kriging and neighborhood 5; (d) Interpolation using linear Kriging and neighborhood 10. It can be seen that in (d) a value reaches further so that the transition from black to white is smoother.
\includegraphics[width = 0.25\textwidth]{images/}           \fbox{\includegraphics[width = 0.25\textwidth]{images/}}
(a)         (b)
\includegraphics[width = 0.25\textwidth]{images/}           \includegraphics[width = 0.25\textwidth]{images/}
(c)          (d)

Local Warping of Tensors

After the displacement field is known for every position in the image $I_S$, the image $I_M' = T(I_M)$ can easily be computed, since every location in $I_M'$ ``knows'' where it comes from. This is done for each component separately. Again, as with the rigid registration, the problem arises that tensors are structured quantities and have to be locally transformed according to the displacement.

Three different local transformations are presented and discussed here. The first is proposed in reference [15]. Let $D'$ be the tensor in image $I_M$ that is displaced by $U$ to a new position $P$ under the transformation $T$. The local transformation is then applied to $D'$ to obtain the final tensor $D$ at the position $P$.

Local Warp with Scaling

The deformation gradient $A$ is computed, which is the differential of the transformation, or the Jacobian matrix of the mapping:

\begin{displaymath}
A = \left(\begin{array}{ccc}
\frac{\partial}{\partial x}T_x & \frac{\partial}{\partial y}T_x & \frac{\partial}{\partial z}T_x\\
\frac{\partial}{\partial x}T_y & \frac{\partial}{\partial y}T_y & \frac{\partial}{\partial z}T_y\\
\frac{\partial}{\partial x}T_z & \frac{\partial}{\partial y}T_z & \frac{\partial}{\partial z}T_z
\end{array}\right)
\end{displaymath} (7.25)

where $T_i$ is the transformation in the $i$th dimension. Combining Equations 7.1 and 7.5,

\begin{displaymath}
T(I_M) = I_M(i'+ u(i',j',k'),\;j'+ v(i',j',k'),\;k' + w(i',j',k'))
\end{displaymath} (7.26)

$A$ can be expressed in terms of the displacement $U = (u,v,w)^T$:

\begin{eqnarray*}
A &=& \left(\begin{array}{ccc}
\frac{\partial (x + u)}{\partial x} & \frac{\partial (x + u)}{\partial y} & \frac{\partial (x + u)}{\partial z}\\
\frac{\partial (y + v)}{\partial x} & \frac{\partial (y + v)}{\partial y} & \frac{\partial (y + v)}{\partial z}\\
\frac{\partial (z + w)}{\partial x} & \frac{\partial (z + w)}{\partial y} & \frac{\partial (z + w)}{\partial z}
\end{array}\right)\\
  &=& \left(\begin{array}{ccc}
1 + \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z}\\
\frac{\partial v}{\partial x} & 1 + \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z}\\
\frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & 1 + \frac{\partial w}{\partial z}
\end{array}\right)
\end{eqnarray*} (7.27)

The local relation between $D$ and $D'$ is then

\begin{displaymath}
D = A^T D' A
\end{displaymath} (7.28)

This local mapping of a tensor includes rotation, scaling and distortion of the tensor. Figure 7.7 illustrates the result of this method on a synthetic square (see Figure 6.1 on page [*]). The applied displacement field is shown in Figure 7.6.

As can be seen, the shape of some tensors has been changed quite a bit.

Local Warp without Scaling

In a second approach the scaling of the tensors is removed while still allowing the tensor to change shape. This is done by rescaling the result of Equation 7.28 so that the determinant of $D$ is the same as that of $D'$:

\begin{displaymath}
D = \frac{1}{det(A)^{2/3}} A^T D' A
\end{displaymath} (7.29)

The result is displayed in Figure 7.8.

Local Rotation

Using the Singular Value Decomposition (SVD) the matrix $A$ can be decomposed into a pure rotation component $W$ and a strain component $R$. The SVD of any non-singular square matrix is given by

\begin{displaymath}
A = U\Sigma V^T
\end{displaymath} (7.30)

where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. The matrix $A$ can now be written in the form

\begin{displaymath}
A = WR
\end{displaymath} (7.31)

where $W = UV^T$ is a matrix with orthogonal columns and $R = V\Sigma V^T$ is a symmetric, positive semidefinite matrix$^{7.1}$. $A$ is said to be a pure strain if $W=I$, with $I$ the identity matrix, while if $R=I$, $A$ is called a rigid rotation at this position [15]. Therefore any deformation of the tensor can be avoided by applying $W$ instead of $A$ in Equation 7.28. For the case of the synthetic square this can be seen in Figure 7.9. This form of local rotation is the transformation mentioned in Section 4.2 and shown in Figure 4.6.

Again, as was done for the rigid transformation, it is argued here that this transformation should be used when nonrigidly aligning tensor images. When aligning two different subjects the structures should not be changed. In the scalar case gray values are displaced to a new position but their value is not changed. Equivalently, in the case of tensors a local rotation is part of the transformation, but changing the shape is not obviously meaningful in all cases. The meaning of any local change would have to be studied for each application separately, and even separately for each tissue, so that it is safest not to apply any strain. Also, if the point-extraction and matching parts of the nonrigid registration are based on measurements of the tensors that depend on their shape, changing the shape after the transformation changes the similarity. This can make the chosen displacement appear wrong after the local transformation.

Figure 7.6: Synthetic displacement applied, used to visualize local warping
\includegraphics[width = 0.8\textwidth]{images/test_6_random_cut.eps}
Figure 7.7: Distorted synthetic square with full local warping
\fbox{\includegraphics[width = 0.8\textwidth]{images/}}

Figure 7.8: Distorted synthetic square with local warping but without scaling
\fbox{\includegraphics[width = 0.8\textwidth]{images/}}
Figure 7.9: Distorted synthetic square only with local rotation of the tensors
\fbox{\includegraphics[width = 0.8\textwidth]{images/}}

The implementation allows choosing between any of the described local transformations (see Table A.3).

Multi-scale Matching

The overall process can be improved in two ways: looping and matching using multiple resolutions. Looping is simple, since the second loop does not need to know anything about the previous processing and can be seen as a completely independent matching process. If the overall displacement field needs to be known, i.e. not only the final match is of interest, the problem of combining the displacement fields is the same as for multiple resolution matching and will be discussed there. Figure 7.10 illustrates the principle of multi-scale matching as it has been implemented. The individual steps are:
  1. Optionally smooth the given images $I_M$ and $I_S$ with a Gaussian filter. This step is not useful when working with binary or segmented data, but it is for all data types that have continuous values.
  2. Downsample the images to get $I_M', I_M'' \ldots I_M^n$ and $I_S', I_S'' \ldots I_S^n$
  3. a) Select points in the lowest level $I_S^n$
    b) Threshold the selection
  4. Find the matches for the selected points, i.e. the displacements at these positions.
  5. Upsample the displacements to all the higher levels.
  6. Interpolate the displacements
  7. Apply the interpolated displacements to the corresponding image $I_M^i$
  8. Copy the resulting image back
  9. While $n > 0 $ set $n = n-1$, else stop
  10. Go to step 3
This form of multi-scale matching can certainly be improved. For example, in step 7 the displacement would not need to be applied at the higher levels; instead, the resulting displacements could be added and, in a final step, the overall displacement applied to $I_M$. Combining two displacement fields, though, is not a trivial problem. For at least one of the transformations the inverse has to be known, but $T^{-1}$ cannot be computed directly, as explained at the beginning of this chapter.

The size of the search window is adapted to the current scale. For the lowest scale it is the specified search window divided by the number of scales times the scale factor. When moving towards higher levels the search window is slightly larger than twice the scale factor, since any larger displacement should already have been found at the lower levels.
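The scale loop can be sketched as follows; `match_level` stands for any single-scale matching routine returning a dense displacement field, and the naive addition of displacement fields is exactly the approximation discussed above:

```python
import numpy as np

def downsample(I):
    """Step 2: halve the resolution by averaging 2x2 blocks."""
    return 0.25 * (I[::2, ::2] + I[1::2, ::2] + I[::2, 1::2] + I[1::2, 1::2])

def warp(I, U):
    """Step 7: backward warping with nearest-neighbour lookup."""
    out = np.empty_like(I)
    for i, j in np.ndindex(I.shape):
        si = int(np.clip(round(i + U[i, j, 0]), 0, I.shape[0] - 1))
        sj = int(np.clip(round(j + U[i, j, 1]), 0, I.shape[1] - 1))
        out[i, j] = I[si, sj]
    return out

def multiscale_register(I_S, I_M, match_level, n_levels=2):
    """Match at the coarsest level first, upsample the displacement field
    (doubling its vectors, step 5), and refine at each finer level."""
    pyr_S, pyr_M = [I_S], [I_M]
    for _ in range(n_levels):
        pyr_S.append(downsample(pyr_S[-1]))
        pyr_M.append(downsample(pyr_M[-1]))
    U = np.zeros(pyr_S[-1].shape + (2,))
    for lvl in range(n_levels, -1, -1):       # coarse to fine
        U = U + match_level(pyr_S[lvl], warp(pyr_M[lvl], U))
        if lvl > 0:                           # upsample and double vectors
            U = 2.0 * U.repeat(2, axis=0).repeat(2, axis=1)
    return U

# A zero-displacement matcher makes the control flow easy to verify:
zero_match = lambda I_S, I_M: np.zeros(I_S.shape + (2,))
I = np.random.default_rng(1).random((16, 16))
U = multiscale_register(I, I, zero_match)
print(U.shape, float(np.abs(U).max()))  # (16, 16, 2) 0.0
```

Doubling the displacement vectors on each upsampling step converts them from coarse-grid to fine-grid units.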

Figure 7.10: Multi-scale matching. Explanation see Section 7.5.
\includegraphics[width = 0.7\textwidth]{images/matching2.eps}

Measuring Results

To test the nonrigid registration, synthetic displacement fields were generated and applied to different test images. The synthetic displacement $U_{synth}$ is generated separately and independently for each component $u,v,w$. First a maximal displacement $d_{max}$ is fixed. Then a regular grid with spacing $d_{grid} \ge d_{max}$ is built and each grid position is assigned a random value $x$ with $-\frac{d_{max}}{2} \le x \le \frac{d_{max}}{2}$. The sparse grid is interpolated using Kriging with a linear variogram model, so that smooth random displacement fields are generated. $U_{synth}$ is then applied to $I_S$ to obtain a test image $I_M$. Then the nonrigid registration technique described in the previous sections is used to align $I_M$ back to $I_S$.

Measuring the results, i.e. the improvement from

\begin{displaymath}
\int_{\Omega} similarity(I_S, I_M)\, d\Omega
\end{displaymath} (7.32)

to

\begin{displaymath}
\int_{\Omega} similarity(I_S, T(I_M))\, d\Omega
\end{displaymath} (7.33)

where $\Omega$ is the data set, clearly depends on the similarity measurement. A visual inspection of the image $I_S - T(I_M)$ gives a very good impression of the results.

For the binary test data, measuring the improvement is trivial and boils down to counting the number of voxel positions that have different values in the two images. The same principle can be used when matching segmented data; here the number of positions with different classes in the two images is a meaningful measure of dissimilarity. For grayscale data, counting positions where the graylevels differ is not very meaningful, since this will be the case almost everywhere in the body. Of course the number should decrease, as the border of the body should be better aligned after the registration, but this will be a small percentage. The total distance

\begin{displaymath}
\sum_{i,j,k} \Vert I_S(i,j,k) - T(I_M(i,j,k)) \Vert
\end{displaymath} (7.34)

is much more suitable, since it linearly weights the difference at each position. To be able to use the last formula for tensor data sets, a similarity between tensors needs to be defined. Reference [16] proposes the inner product of two tensors

\begin{displaymath}
\sum_{i,j} D_1(i,j)D_2(i,j)
\end{displaymath} (7.35)

as similarity measure, being the tensor equivalent of the vector dot product.
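Both measures reduce to one-liners over the voxel grid; the sample arrays below are made up:

```python
import numpy as np

def total_distance(I_S, I_T):
    """Equation 7.34: summed value differences over all voxel positions."""
    return float(np.abs(I_S - I_T).sum())

def tensor_inner(D1, D2):
    """Equation 7.35: componentwise inner product of two tensors."""
    return float((D1 * D2).sum())

D = np.diag([3.0, 2.0, 1.0])
print(tensor_inner(D, D))                                 # 14.0
print(total_distance(np.zeros((2, 2)), np.ones((2, 2))))  # 4.0
```

For tensor images the total distance of Equation 7.34 can then be accumulated from `tensor_inner`-based dissimilarities per voxel.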

When registering medical data the goal is to align the structures of one data set to the other data set. Ideally the structures in both data sets $I_S$ and $I_M$ are known, e.g. by segmenting the data. The displaced structures can then be compared.

Different tests are documented in Table 7.2. The parameters used are explained in Table A.3 in Appendix A.6. The process for the case of the chessboard is illustrated in Figure 7.11 and for the segmented baby brain in Figure 7.12.

Table 7.2: Comparison of results when testing with synthetic deformation fields

Test images                Parameters used                   Results
Chessboard and synthetic   ./nonrigidreg 0 -m=0 -sw=21       1938 points different before
displacement               -mw=9 -d=15 -r=20 -pm=2 -km=2     matching, 50 after matching
MRI image and synthetic    ./nonrigidreg 1                   133016 total graylevel distance
displacement               -s=pointmatcherdata/cm.001        before matching, 42167 after
                           -o=pointmatcherdata/cm            matching
                           -mw=9 -sw=15 -m=1 -r=15

Figure 7.11:     Chessboard example for the nonrigid matching process. (a) Original image; (b) Synthetically displaced version; (c) Difference between (a) and (b); (d) Displacement field for nonrigid registration from (b) to (a), zoom into the center region; (e) Resulting match; (f) Difference between (a) and (e).
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/}
(a)         (b)
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/2chessdisp.eps}
(c)          (d)
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/}
(e)          (f)

Figure 7.12:     Alignment of a synthetically displaced MRI scan to the original scan as an example of the nonrigid matching process. (a) Original image; (b) Synthetically displaced version; (c) Difference between (a) and (b); (d) Displacement field for nonrigid registration from (b) to (a); (e) Resulting match; (f) Difference between (a) and (e).
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/cm_randomapplied.eps}
(a)         (b)
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/cm_backdisp.eps}
(c)          (d)
\includegraphics[width = 0.35\textwidth]{images/}           \includegraphics[width = 0.35\textwidth]{images/cm_diffafter.eps}
(e)          (f)


Footnote 7.1: $\sim$toomas_l/linalg/lin2/node26.html

Raimundo Sierra 2001-07-19