

Tensors

In this chapter the data structure of a tensor is studied. First a mathematical description of tensor data is given. Then some measurements related to tensor characteristics are presented, followed by different ways of applying transformations to tensors. Finally the visualization of tensors is discussed.

Tensor Characteristics

A scalar is a quantity whose specification (in any coordinate system) requires just one number. A tensor of order $n$, on the other hand, is an object that requires $3^n$ numbers in any given coordinate system. With this definition, scalars and vectors are special cases of tensors: scalars are tensors of order 0 with $3^0 = 1$ component, and vectors are tensors of order 1 with $3^1 = 3$ components. Diffusion tensors are general tensors of order 2 with $3^2 = 9$ components. The components of a second order tensor are often written as a $3 \times 3$ matrix, as will be done here. Moreover, the diffusion tensor is a symmetric second order tensor, so that the matrix is of the form


\begin{displaymath}
D = \left( \begin{array}{ccc}
D_{11} & D_{12} & D_{13} \\
D_{12} & D_{22} & D_{23} \\
D_{13} & D_{23} & D_{33} \\
\end{array}\right)
\end{displaymath} (4.1)

A tensor can be reduced to principal axes (eigenvalue and eigenvector decomposition) if the equation


\begin{displaymath}
D \mathbf{e} = \lambda \mathbf{e}
\end{displaymath} (4.2)

or
\begin{displaymath}
(D - \lambda I)\mathbf{e} = 0
\end{displaymath} (4.3)

where $I$ is the identity matrix, has a nontrivial solution.

Let $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge 0$ be the eigenvalues of the symmetric tensor $D$ and let $\mathbf{\hat e}_i$ be the normalized eigenvector corresponding to $\lambda_i$. As the diffusion tensor is symmetric the eigenvalues $\lambda_i$ will always be real. Moreover the corresponding eigenvectors are perpendicular.

The values $\lambda_i$ can be found by solving the characteristic equation

\begin{displaymath}
\left\vert\begin{array}{ccc}
D_{11}-\lambda & D_{12} & D_{13} \\
D_{21} & D_{22}-\lambda & D_{23} \\
D_{31} & D_{32} & D_{33}-\lambda \\
\end{array}\right\vert = 0
\end{displaymath} (4.4)

or
\begin{displaymath}
\begin{array}{l}
\lambda^3 - \lambda^2(D_{11} + D_{22} + D_{33}) \\[1ex]
\quad + \lambda\left(
\left\vert\begin{array}{cc} D_{22} & D_{32}\\ D_{23} & D_{33}\end{array}\right\vert
+ \left\vert\begin{array}{cc} D_{11} & D_{21}\\ D_{12} & D_{22}\end{array}\right\vert
+ \left\vert\begin{array}{cc} D_{11} & D_{31}\\ D_{13} & D_{33}\end{array}\right\vert
\right)
- \left\vert\begin{array}{ccc}
D_{11} & D_{12} & D_{13} \\
D_{21} & D_{22} & D_{23} \\
D_{31} & D_{32} & D_{33} \\
\end{array}\right\vert = 0
\end{array}
\end{displaymath} (4.5)

after expanding the determinant. The numbers $\lambda$ (scalars) are independent of the choice of the coordinate system and hence so are the coefficients in Equation 4.5.

Therefore the quantities

\begin{displaymath}
\begin{array}{rcl}
I_1 & = & D_{11} + D_{22} + D_{33} \\[1ex]
I_2 & = & \left\vert\begin{array}{cc} D_{22} & D_{32}\\ D_{23} & D_{33}\end{array}\right\vert
+ \left\vert\begin{array}{cc} D_{11} & D_{21}\\ D_{12} & D_{22}\end{array}\right\vert
+ \left\vert\begin{array}{cc} D_{11} & D_{31}\\ D_{13} & D_{33}\end{array}\right\vert \\[1ex]
I_3 & = & \left\vert\begin{array}{ccc}
D_{11} & D_{12} & D_{13} \\
D_{21} & D_{22} & D_{23} \\
D_{31} & D_{32} & D_{33} \\
\end{array}\right\vert
\end{array}
\end{displaymath} (4.6)

are all invariants of the tensor $D$. From these invariants infinitely many other invariants can be formed. $I_1$ is known as the trace of $D$ and $I_3$ is its determinant.
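
The following sketch illustrates these relations in NumPy. It is not the implementation used in this thesis but only an illustration; the example tensor values are arbitrary.

\begin{verbatim}
# Sketch: eigendecomposition of a symmetric diffusion tensor and the
# invariants I1, I2, I3 of Equation 4.6 (illustrative only, not the
# thesis implementation).
import numpy as np

# Arbitrary example tensor with the six unique components of Equation 4.1.
D = np.array([[1.7, 0.2, 0.1],
              [0.2, 0.5, 0.0],
              [0.1, 0.0, 0.4]])

# eigh is appropriate for symmetric matrices; it returns real eigenvalues
# in ascending order, so they are reversed to obtain
# lambda_1 >= lambda_2 >= lambda_3.
eigvals, eigvecs = np.linalg.eigh(D)
lam = eigvals[::-1]
e = eigvecs[:, ::-1]          # column i is the unit eigenvector for lam[i]

# Invariants of Equation 4.6: trace, sum of principal 2x2 minors, determinant.
I1 = np.trace(D)
I2 = sum(np.linalg.det(D[np.ix_(idx, idx)]) for idx in ((1, 2), (0, 2), (0, 1)))
I3 = np.linalg.det(D)

# The same invariants expressed through the eigenvalues; they agree because
# the invariants do not depend on the choice of coordinate system.
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0]*lam[1] + lam[1]*lam[2] + lam[0]*lam[2])
assert np.isclose(I3, lam.prod())
\end{verbatim}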

The inverse transformation from an eigensystem to a tensor is given by

\begin{displaymath}
D = \left[\mathbf{\hat e}_1\: \mathbf{\hat e}_2 \: \mathbf{\hat e}_3\right]
\left( \begin{array}{ccc}
\lambda_1 & 0 & 0 \\
0 & \lambda_2 & 0 \\
0 & 0 & \lambda_3 \\
\end{array}\right)
\left[\mathbf{\hat e}_1\: \mathbf{\hat e}_2 \: \mathbf{\hat e}_3\right]^{-1}
\end{displaymath} (4.7)

In the case of a symmetric tensor this equation simplifies to
\begin{displaymath}
D = \lambda_1\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
\lambda_2\mathbf{\hat e}_2\mathbf{\hat e}_2^T +
\lambda_3\mathbf{\hat e}_3\mathbf{\hat e}_3^T
\end{displaymath} (4.8)

where the eigenvectors $\mathbf{\hat e}_i$ form an orthonormal basis.
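
Equation 4.8 can be checked with a small helper along the same lines. This is again only an illustrative sketch; lam and e denote the descending eigenvalue array and the matrix whose columns are the corresponding unit eigenvectors, as in the sketch above.

\begin{verbatim}
# Sketch: rebuilding a symmetric tensor from its eigensystem, Equation 4.8.
import numpy as np

def from_eigensystem(lam, e):
    """D = lambda_1 e1 e1^T + lambda_2 e2 e2^T + lambda_3 e3 e3^T."""
    return sum(lam[i] * np.outer(e[:, i], e[:, i]) for i in range(3))

# For lam, e obtained from np.linalg.eigh of a symmetric D (see above),
# from_eigensystem(lam, e) reproduces D up to floating-point error.
\end{verbatim}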


Measurements

Using this decomposition, diffusion can be divided into three basic cases depending on the rank of the tensor [16]:
  1. Linear case ( $\lambda_1 \gg \lambda_2 \simeq \lambda_3$): Diffusion is mainly in the direction of the eigenvector of the largest eigenvalue:
    \begin{displaymath}
D \simeq \lambda_1 D_l = \lambda_1\mathbf{\hat e}_1\mathbf{\hat e}_1^T
\end{displaymath} (4.9)

  2. Planar case ( $\lambda_1 \simeq \lambda_2 \gg \lambda_3$): Diffusion is mainly in the plane spanned by the two eigenvectors corresponding to the two largest eigenvalues:
    \begin{displaymath}
D \simeq 2\lambda_1 D_p = \lambda_1\left(\mathbf{\hat e}_1\mathbf{\hat
e}_1^T + \mathbf{\hat e}_2\mathbf{\hat e}_2^T\right)
\end{displaymath} (4.10)

  3. Spherical case ( $\lambda_1 \simeq \lambda_2 \simeq \lambda_3$): Diffusion is isotropic in all directions:
    \begin{displaymath}
D \simeq 3\lambda_1 D_s = \lambda_1\left(\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
\mathbf{\hat e}_2\mathbf{\hat e}_2^T + \mathbf{\hat e}_3\mathbf{\hat e}_3^T\right)
\end{displaymath} (4.11)

A general diffusion tensor $D$ will be a combination of these cases. Expanding the diffusion tensor using these base cases gives:

\begin{displaymath}
\begin{array}{rcl}
D & = & \lambda_1\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
\lambda_2\mathbf{\hat e}_2\mathbf{\hat e}_2^T +
\lambda_3\mathbf{\hat e}_3\mathbf{\hat e}_3^T \\[1ex]
& = & (\lambda_1 -\lambda_2 )\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
(\lambda_2 -\lambda_3 )\left(\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
\mathbf{\hat e}_2\mathbf{\hat e}_2^T\right) +
\lambda_3\left(\mathbf{\hat e}_1\mathbf{\hat e}_1^T +
\mathbf{\hat e}_2\mathbf{\hat e}_2^T + \mathbf{\hat e}_3\mathbf{\hat e}_3^T\right) \\[1ex]
& = & (\lambda_1 -\lambda_2 )D_l + (\lambda_2 -\lambda_3 )D_p + \lambda_3 D_s
\end{array}
\end{displaymath} (4.12)

where $(\lambda_1 -\lambda_2 )$, $(\lambda_2 -\lambda_3 )$ and $\lambda_3$ are the coordinates of $D$ in the tensor basis $\{D_l, D_p, D_s\}$. This relation between the eigenvalues of the diffusion tensor can be used to classify the diffusion tensor according to a geometrically meaningful criterion. The new basis yields measures of how close the diffusion tensor is to the generic cases of line, plane and sphere. The generic shape of a tensor is obtained by normalizing with a magnitude measure of the diffusion. A useful measure in this context is the magnitude of the largest eigenvalue of the tensor, which normalizes the sum of the measurements to 1:
\begin{displaymath}
c_l = \frac{\lambda_1 -\lambda_2}{\lambda_1}
\end{displaymath} (4.13)


\begin{displaymath}
c_p = \frac{\lambda_2 -\lambda_3}{\lambda_1}
\end{displaymath} (4.14)


\begin{displaymath}
c_s = \frac{\lambda_3}{\lambda_1}
\end{displaymath} (4.15)


\begin{displaymath}
c_l + c_p + c_s = 1
\end{displaymath} (4.16)

An anisotropy measure describing the deviation from the spherical case is:
\begin{displaymath}
c_a = c_l + c_p = 1 - c_s = 1 - \frac{\lambda_3}{\lambda_1}
\end{displaymath} (4.17)
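
As an illustrative sketch, the measurements of Equations 4.13 to 4.17 can be computed from the eigenvalues as follows. The helper name westin_measures and the use of NumPy are assumptions for the example, not part of the thesis implementation.

\begin{verbatim}
# Sketch: shape measures of Equations 4.13-4.17 from the sorted eigenvalues.
import numpy as np

def westin_measures(D):
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]   # lambda_1 >= lambda_2 >= lambda_3
    c_l = (lam[0] - lam[1]) / lam[0]             # closeness to a line   (4.13)
    c_p = (lam[1] - lam[2]) / lam[0]             # closeness to a plane  (4.14)
    c_s = lam[2] / lam[0]                        # closeness to a sphere (4.15)
    c_a = c_l + c_p                              # anisotropy            (4.17)
    return c_l, c_p, c_s, c_a                    # c_l + c_p + c_s == 1  (4.16)
\end{verbatim}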

Smoothing

In image processing a common operation is smoothing of the data to reduce the noise level. For diffusion data, independent smoothing of the tensor components has proven to be a robust method. Figure 4.2 shows the effect of applying a Gauss filter to each component of the field in Figure 4.1. Figure 4.3 shows a field where the tensors have a clear bias in one direction (the maximum angle between the eigenvectors corresponding to the largest eigenvalue was set to 10$^\circ$). When smoothing this field, the bias is clearly preserved (see Figure 4.4). This form of smoothing also allows the computation to be performed in the tensor domain.
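
A minimal sketch of this component-wise smoothing is given below. It assumes the tensorfield is stored as a NumPy array of shape (X, Y, Z, 3, 3) and uses scipy.ndimage.gaussian_filter with a sigma parameter in place of the fixed-length Gauss filter used for the figures; both the storage layout and the filter choice are assumptions for the example.

\begin{verbatim}
# Sketch: independent smoothing of the nine tensor components.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_tensor_field(field, sigma=1.0):
    smoothed = np.empty_like(field)
    for i in range(3):
        for j in range(3):
            # Each component is filtered independently over the three
            # spatial dimensions; the symmetry of the tensors is preserved
            # because identical components are filtered identically.
            smoothed[..., i, j] = gaussian_filter(field[..., i, j], sigma)
    return smoothed
\end{verbatim}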

Figure 4.1: Random tensorfield
\fbox{\includegraphics[width = 0.8\textwidth]{images/randomfield.ps}}
Figure 4.2: Smoothing with Gauss filter of length 5
\fbox{\includegraphics[width = 0.8\textwidth]{images/randomfieldg5.ps}}

Figure 4.3: Random tensorfield with bias in one direction
\fbox{\includegraphics[width = 0.8\textwidth]{images/randomfieldb.ps}}
Figure 4.4: Smoothing with Gauss filter of length 3
\fbox{\includegraphics[width = 0.8\textwidth]{images/randomfieldbg3.ps}}



Interpolation

Similarly, the transition from one tensor to another, i.e. the interpolation of tensors, is also performed on the single components of the tensor, which leads to the result shown in Figure 4.5. This interpolation is certainly the safest if no information about the structure between the tensors is available, but it is not the only possible transition. Figure 4.6 shows a transition between two tensors where the shape of the tensor is preserved during the transformation. This form will be discussed further in Section 7.4 on page [*], when the displacement of a tensorfield is discussed.
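
A minimal sketch of the component-wise interpolation follows; the helper name and the default number of steps are illustrative only.

\begin{verbatim}
# Sketch: component-wise linear interpolation between two tensors D0 and D1,
# in the spirit of Figure 4.5.
import numpy as np

def interpolate_components(D0, D1, steps=9):
    # Each intermediate tensor is a component-wise blend of D0 and D1.
    return [(1.0 - t) * D0 + t * D1 for t in np.linspace(0.0, 1.0, steps)]
\end{verbatim}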

Figure 4.5: Interpolation of single components of two tensors in 9 steps
\fbox{\includegraphics[width = 0.9\textwidth]{images/tensortransformation1.ps}}

Figure 4.6: Interpolation of two tensors in 9 steps by keeping the shape
\fbox{\includegraphics[width = 0.9\textwidth]{images/tensortransformation2.ps}}


Visualization

Unlike scalar data, the tensor is a three-dimensional structure at each voxel position. Simple grayscale images are therefore not suitable for the representation of tensor data. Some ways of displaying tensor fields are presented and discussed here.

The tensor can be represented as an ellipsoid whose main axes lengths correspond to the eigenvalues and whose axes point along the respective eigenvectors. This method of display has already been used to illustrate the tensor characteristics in the previous section.

However, when displaying tensors as ellipsoids, there is no difference between an edge-on, flat ellipsoid and an oblong one, or between a face-on, flat ellipsoid and a sphere. By assigning a specular intensity and power to the ellipsoids, the reflection of the light source gives an insight into the third dimension of the ellipsoid. This allows a distinction between the ambiguous cases mentioned. The field can also be rotated in all three dimensions so that the ellipsoids can be inspected from any direction.

In the implementation, the tensors are classified into three different classes depending on their shape, and color-encoded according to the class they belong to. The tensors are assigned to a class depending on how close to a line, plane or sphere they are:


\begin{displaymath}
class = \left\{ \begin{array}{r@{\quad:\quad}l}
0 & c_l \ge c_p, c_s\\
1 & c_p \ge c_l, c_s\\
2 & c_s \ge c_l, c_p
\end{array} \right.
\end{displaymath} (4.18)

Figure 4.7 illustrates this display technique, where class 0 tensors are displayed in blue, class 1 tensors in yellow and class 2 tensors in yellow with the transparency set to 0.4.
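
For illustration, the classification of Equation 4.18 could be expressed as follows; westin_measures is the illustrative helper sketched earlier, not a function of the thesis implementation.

\begin{verbatim}
# Sketch: class assignment of Equation 4.18 used for the color encoding.
def tensor_class(D):
    c_l, c_p, c_s, _ = westin_measures(D)
    if c_l >= c_p and c_l >= c_s:
        return 0        # linear case, drawn in blue
    if c_p >= c_l and c_p >= c_s:
        return 1        # planar case, drawn in yellow
    return 2            # spherical case, drawn in transparent yellow
\end{verbatim}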

Figure 4.7: Visualization of tensors using ellipsoids. Example showing the corpus callosum of an adult human brain
\fbox{\includegraphics[width = 0.8\textwidth]{images/westinslice11_5.ps}}

More example images using this representation of tensors can be found in Appendix B.

The representation of a tensorfield with ellipsoids is certainly limited in the size of the field it can handle, since the overall amount of information quickly becomes too large for visual inspection. Two-dimensional representations of the tensors are therefore useful when observing larger objects.

One way of visualizing the tensors as two-dimensional objects is to use blue headless arrows that represent the in-plane components of $c_l\hat{\mathbf{e}}_1$ [17]. The out-of-plane components of $c_l\hat{\mathbf{e}}_1$ are shown in colors ranging from green through yellow to red, with red indicating the highest value for this component. Figure 4.8 shows the example slice using this visualization technique.

Figure 4.9 shows another way of color-encoding the eigenvector corresponding to the largest eigenvalue. Here the components of the first eigenvector $\hat{\mathbf{e}}_{1x}$, $\hat{\mathbf{e}}_{1y}$ and $\hat{\mathbf{e}}_{1z}$ are multiplied by the eigenvalue $\lambda_1$. The red, green and blue (RGB) values for the color at a position $(i,j,k)$ are then set to

\begin{displaymath}
\begin{array}{rcl}
R & = & k\lambda_1\hat{\mathbf{e}}_{1x} \\
G & = & l\lambda_1\hat{\mathbf{e}}_{1y} \\
B & = & m\lambda_1\hat{\mathbf{e}}_{1z}
\end{array}
\end{displaymath} (4.19)

with $k$, $l$ and $m$ parameters that scale each component into the range from 0 to 255.
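
For illustration only, this color mapping could be written as sketched below. The parameter lam_max and the handling of the eigenvector sign are assumptions for the example and not part of the thesis.

\begin{verbatim}
# Sketch: color encoding of Equation 4.19. k, l, m and lam_max are
# illustrative; in practice they are chosen so that the products fall
# into the range 0..255 over the whole data set.
import numpy as np

def principal_direction_rgb(D, k=255.0, l=255.0, m=255.0, lam_max=1.0):
    lam, e = np.linalg.eigh(D)                 # ascending eigenvalues
    v = lam[-1] * e[:, -1] / lam_max           # lambda_1 * e_1, normalized
    # The sign of an eigenvector is arbitrary, so absolute values are used.
    rgb = np.clip(np.array([k, l, m]) * np.abs(v), 0.0, 255.0)
    return rgb.astype(np.uint8)
\end{verbatim}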

The disadvantage of this form of visualization is that fibertracts change color when changing direction, even though these color changes will be smooth. Nevertheless, it is a useful visualization to get an impression of the quality and the content of the data set.

Figure 4.8: Visualizing tensors with headless arrows and dots representing $c_l\hat{\mathbf{e}}_1$
\includegraphics[width = 0.8\textwidth]{images/matlabtensor2.eps}
Figure 4.9: Visualizing tensors by color-encoding the largest eigenvalue and its eigenvector
\includegraphics[width = 0.8\textwidth]{images/mycolor.ps}

Finally, the different measurements presented in Section 4.2 can be represented as grayscale images. Figures 4.10, 4.11 and 4.12 show these measurements for the same subject.

Figure 4.10: Grayscale representation of the measurement close-line $c_l$
\includegraphics[width = 0.8\textwidth]{images/closeline.ps}
Figure 4.11: Grayscale representation of the measurement close-plane $c_p$
\includegraphics[width = 0.8\textwidth]{images/closeplane.ps}

Figure 4.12: Grayscale representation of the measurement close-sphere $c_s$
\includegraphics[width = 0.4\textwidth]{images/closesphere.ps}


The program main allows the display of any tensorfield that has been preprocessed as described in Chapter 5. Displaying tensor data sets with ellipsoids is a very computation-intensive process. It is therefore recommended to begin with a very small data window when displaying an image, to see whether the result is presented in a reasonable time.

