Medical Electronics and Applications: Imaging Fundamentals and Mathematics



1. Purpose of Imaging

Medical images are used to obtain information about internal body organs or the skeleton in order to determine a patient's physical state. An image may show damage to organs that cannot be visualized externally. The dangers or stress to a patient from a non-invasive imaging technique may be far less than those of an invasive operation; the latter may also be slow, and may disturb the very physical state it is intended to evaluate.

A variety of imaging modalities is in use, variously employing photon, ultrasonic, thermal and electrical impedance methods to measure characteristics of the body under examination. These characteristics are evaluated and transformed into a picture which indirectly yields information about the patient's pathology.

In principle an image is a representation of some object. Its nature is indirect, and its form differs from that of the original. Generally the images we use in medical applications are represented in two dimensions. From examining this representation we may evaluate certain of the characteristics shown in the image, to see how they arise or how they deviate from the normal.

Images are thus used to examine unusual growth, which may be cancerous, to examine bone structures which may have been physically damaged, or to look for damage to blood vessels or brain tissue. The visual representation which the image affords may enable a diagnosis of the patient's condition and enable its correction.

This section provides a brief overview of imaging technology which is treated more thoroughly in Section 6. Our main purpose for now is to introduce the common mathematical background used. This includes a basic formulation of image acquisition which leads on to the methods used to restore and enhance images. For a more thorough treatment of the mathematical background to the signal analysis and image processing covered in this section, the reader is referred to specialist texts such as Bracewell (1986) and Gonzalez (1992).

1.1. Overview of Imaging Modalities

The various imaging technologies have very different physical characteristics. Sound waves are transmitted reasonably well through tissue, with a velocity of around 1500 m s^-1. At this velocity it is practical to separate structures within the body by their differing depths: echoes return in times of up to about 0.1 ms. The hazard from ultrasound energy is believed to be very low at the energies normally used in clinical diagnosis. The resolution of ultrasound is primarily limited by the aperture of the ultrasonic probe and by diffraction, and thus by the wavelength of the sound employed.

On the other hand various high energy photon beams may be used, such as X radiation. These are of an energy which does not permit their focusing, and the velocities are such that pulse echo techniques are not feasible. Unlike ultrasound, however, X rays are transmitted through gaseous spaces, such as in the lungs, and may be used for general thoracic examination. The image formed by X rays is a result of the differential absorption of the radiation in different tissues. The effects of diffraction do not directly limit the resolution of X ray imaging.

As an alternative to using ionizing radiation in this fashion, a patient may be given a small amount of radioactive material bonded to a biochemical reagent. 'Nuclear Medicine' studies observe the external radiation produced to help examine, amongst other things, the manner in which the reagent is transferred about the body. This can provide useful diagnostic information of a rather different nature.

X radiation is in principle dangerous: it is generally accepted that there is a risk associated with any exposure to radiation. At the dose levels used in imaging the risk is normally considered a stochastic one: such doses are statistically expected to cause a small number of deaths in a large population of individuals each of whom receives the exposure. Therefore, before any image is obtained, it should be established that the benefit to the patient outweighs the risk of damage due to the examination. We looked at the absorption of radiation in Section 3; its safety is considered further in Section 8.

Finally, we can also look at the use of NMR scanners. Used in a medical environment, they are normally known as Magnetic Resonance Imaging (MRI) scanners, and obtain a map of the distribution of hydrogen nuclei to provide yet another viewpoint. Other physical parameters are examined in various other imaging systems which are outside the scope of this guide.

Please refer to the Bibliography for details of literature which may help you.

2. Mathematical Background

The mathematical treatment in the following sections is largely based on two dimensional models. This should not be surprising as the form of image we are dealing with is a two dimensional entity. The objects of concern of course are solid (three dimensional) structures, but in the main, the forms of imaging used in medicine obtain two dimensional views.

2.1. Linearity

The imaging systems we will discuss will, to a first approximation, be assumed to be linear.

A linear system is one which obeys the superposition principle:

S{a I1(x,y) + b I2(x,y)} = a S{I1(x,y)} + b S{I2(x,y)}   (1)

where S is the system function, and I1 and I2 are the input functions, written here in two dimensional form.

The response of this sort of system may be analyzed in terms of its impulse response. The impulse or Dirac Delta function is defined to have a volume of unity:

∫∫_{-∞}^{∞} δ(x,y) dx dy = 1   (2)

This function is defined to have a vanishingly small width, so that its value is zero at all points except the origin. The function is used in the analysis of systems owing to two important properties: firstly, it may be used for selection (the sifting property), and secondly, as it contains equal power at all frequencies, it may be used to characterize a system's frequency response. For mathematical convenience, its shape may be taken as the limit of a Gaussian probability distribution function of vanishing width, whose form is shown in FIG. 1. For a further discussion of this function, please refer to Bracewell (1986).


FIG. 1 Shape of the two dimensional Gaussian Probability function.

The output at a point from a two dimensional system is given by

g2(x,y) = S{g1(x,y)}   (3)

in which g1 is the input, and S is the system transfer function which will be considered in more detail later. We use the delta function's selection property to express the input as a superposition of impulses, which reorganizes equation 3 to

g2(x,y) = ∫∫_{-∞}^{∞} g1(ξ,η) S{δ(x-ξ, y-η)} dξ dη   (4)

where ξ and η are dummy variables of integration. Within the integral in (4) is the system response to a two dimensional delta function. This may be more simply expressed in a different notational form as

h(x-ξ, y-η) = S{δ(x-ξ, y-η)}   (5)

which is known as the point spread function. This assumes that the system response is space invariant, or in other words that S is independent of the values of (x,y). We may now rewrite the system equation as

g2(x,y) = ∫∫_{-∞}^{∞} g1(ξ,η) h(x-ξ, y-η) dξ dη   (6)

which is a two dimensional convolution function.

g2 = g1 * h   (7)

We consider convolution in section 2.5 below. Physically, this equation tells us that the input function is blurred by the system impulse response h. This suggests that we may be able to remove the distortions introduced by the imaging system if we can characterize the system's impulse response to facilitate the generation of better quality images. Unfortunately, the problem will not turn out to be quite so straightforward when we look at image restoration in section 5.3.2 below. Before doing that we must revise the mathematics to be used and clarify its terminology.
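This blurring model is easy to demonstrate numerically. The following sketch (in Python with NumPy, a choice of ours since the text gives no code) convolves a point object with a small hypothetical unit-volume point spread function; as equation 7 predicts, the image of a point source is the PSF itself:

```python
import numpy as np

def convolve2d(image, psf):
    """Direct two dimensional convolution (g2 = g1 * h), same-size output."""
    kh, kw = psf.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # convolution flips the PSF before the multiply-and-sum
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * psf[::-1, ::-1])
    return out

obj = np.zeros((7, 7))
obj[3, 3] = 1.0                    # a point object (a discrete delta function)

psf = np.array([[1, 2, 1],
                [2, 4, 2],
                [1, 2, 1]], dtype=float) / 16.0   # hypothetical PSF, unit volume

img = convolve2d(obj, psf)
# the image of the point source is simply the point spread function itself
```

Because the PSF has unit volume, the blurring also conserves the total "brightness" of the object.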


FIG. 2 Fourier Transform Pair.

2.2. Fourier Transform

The Fourier Transform enables a function which is defined in terms of time or space to be specified as a function of frequency. In simpler language, for an image this means that the picture which we see may be expressed instead in terms of a spatial frequency function. As is the case with processing time varying electronic signals, the transformed frequency function is the spectrum of the image in space. The high frequency components therefore represent the fine detail, whereas the lower frequency components contribute the overall structure. The Fourier Transform is given by

G(u) = 𝔉{g(x)} = ∫_{-∞}^{∞} g(x) e^(-2πiux) dx   (8)

Note that we use the symbol i to represent √-1 throughout this guide.

For example, consider the function rect(x), which has the value A for 0 ≤ x ≤ X and 0 otherwise.

Its Fourier Transform is:

G(u) = ∫_0^X A e^(-2πiux) dx = A X e^(-πiuX) sinc(uX),  where sinc(t) = sin(πt)/(πt)

A transformed function may revert to its original spatial form by means of the inverse transform

g(x) = 𝔉^-1{G(u)} = ∫_{-∞}^{∞} G(u) e^(2πiux) du   (9)

The symbols 𝔉 and 𝔉^-1 are defined as the Fourier Transform and Inverse Transform operators respectively.

Fourier Transforms of functions exist only in certain circumstances. Fortunately the restrictions are not too severe in the cases in which we are interested. The function g(x) may be transformed if it is absolutely integrable over its whole domain, if it is not infinitely discontinuous, and if it has a finite number of discontinuities. This statement of the conditions for the existence of the transform is rather formal: it is developed better in specialist texts, such as Bracewell, to which the interested reader is referred. If a function can be transformed into the frequency domain, its inverse transform exists.

The definition of the Fourier Transform pair may be extended to encompass functions of two dimensions:

G(u,v) = 𝔉{g(x,y)} = ∫∫_{-∞}^{∞} g(x,y) e^(-2πi(ux+vy)) dx dy   (10)

and the inverse transform is

g(x,y) = 𝔉^-1{G(u,v)} = ∫∫_{-∞}^{∞} G(u,v) e^(2πi(ux+vy)) du dv   (11)

These are unsurprisingly the forms which we will most frequently encounter when processing images represented in two dimensions.
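As an illustration of the two dimensional transform, the following NumPy sketch (our own, with arbitrarily chosen frequencies) builds a synthetic 64 x 64 "image" from a coarse and a fine cosine component, and shows that each appears in the spectrum at its own spatial frequency:

```python
import numpy as np

# A synthetic 64 x 64 "image": a constant brightness term, a coarse stripe
# pattern (2 cycles across the image) and a fine one (16 cycles across it)
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = 0.5 + 0.5 * np.cos(2 * np.pi * x / 32) + 0.2 * np.cos(2 * np.pi * x / 4)

G = np.fft.fft2(img)      # the two dimensional spectrum G(u, v)
mag = np.abs(G)
# G[0, 0] is the sum of all pixel values (the overall brightness), and the
# stripe patterns concentrate their energy in the bins at v = 2 and v = 16
```

The fine (v = 16) component is the "detail" of this toy image; suppressing high-frequency bins before inverting would blur it away.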

2.3. Discrete Fourier Transform

When we wish to process images or other signals by digital computer, we encounter problems relating to the nature of their storage, which is discrete rather than continuous. This leads to two problems. The first is that we have to sample the image data; the second is that the sampled values must be held at finite precision, as a discrete rather than continuous variable. This second problem leads to quantization noise.

We sample a continuous function f(x) N times at discrete intervals Δx to form the sequence

{f(x0), f(x0 + Δx), f(x0 + 2Δx), ..., f(x0 + [N-1]Δx)}   (12)

which may be considered as the function

f(x) = f(x0 + xΔx)   (13)

in which x takes the values 0, 1, 2, ..., N-1 and Δx is the sampling interval.

The sampling process is often described by a sampling function, denoted by the shah symbol III, which is defined as

III(x) = Σ_{n=-∞}^{∞} δ(x - n)   (14)

Using this notation the function f(x) is sampled by multiplication:

f(x) III(x/Δx)   (15)

When sampling, the sampling frequency must be at least twice the maximum frequency present in the signal: this is the condition of Nyquist's Theorem. If it is not observed, the periodicity of the discrete Fourier Transform (equations 16 and 17 below) leads to an incorrect distribution of power in the calculated spectrum, causing aliasing.
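Aliasing is easy to reproduce numerically. In this one dimensional NumPy sketch (our own illustration, with arbitrary tone frequencies), a tone below the Nyquist limit appears at its true frequency, while one above the limit folds back to the same apparent frequency:

```python
import numpy as np

fs = 100.0                   # sampling frequency: the Nyquist limit is 50 Hz
t = np.arange(100) / fs      # 100 samples covering one second

# A 30 Hz tone satisfies the sampling theorem: its spectral peak is at 30 Hz
ok = np.cos(2 * np.pi * 30 * t)
peak_ok = int(np.argmax(np.abs(np.fft.rfft(ok))))    # bin spacing is 1 Hz here

# A 70 Hz tone violates it, and aliases down to 100 - 70 = 30 Hz
bad = np.cos(2 * np.pi * 70 * t)
peak_bad = int(np.argmax(np.abs(np.fft.rfft(bad))))
# peak_ok == peak_bad == 30: once sampled, the two tones are indistinguishable
```

No processing after sampling can separate the two tones; the anti-aliasing filtering must happen before the sampler.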

The discrete Fourier Transform is then

F(u) = (1/N) Σ_{x=0}^{N-1} f(x) e^(-2πiux/N)   (16)

and the inverse transform is

f(x) = Σ_{u=0}^{N-1} F(u) e^(2πiux/N)   (17)

In these expressions, u/N is analogous to frequency measured in cycles per sampling interval (see for example Bracewell).
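The forward transform can be evaluated directly from its definition, which also makes the computational cost explicit. A NumPy sketch (the 1/N factor follows the convention used here; library FFTs usually omit it on the forward transform):

```python
import numpy as np

def dft(f):
    """Direct evaluation of the forward transform,
    F(u) = (1/N) sum_x f(x) exp(-2 pi i u x / N): O(N^2) operations."""
    N = len(f)
    n = np.arange(N)
    u = n.reshape(-1, 1)                 # one row of the sum per output bin
    return (f * np.exp(-2j * np.pi * u * n / N)).sum(axis=1) / N

f = np.array([1.0, 2.0, 0.0, -1.0])
F = dft(f)
# np.fft.fft places no 1/N on the forward transform, so the two results
# agree up to that constant, whose placement is arbitrary
```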

Again, this transform pair may be expressed in two dimensions:

F(u,v) = (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) e^(-2πi(ux/M + vy/N))   (18)

for values of u = 0, 1, 2, ..., M-1 and v = 0, 1, 2, ..., N-1, and

f(x,y) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v) e^(2πi(ux/M + vy/N))   (19)

for values of x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1.

In the case of images, these transforms are often quoted as pairs in a square form, so that N = M and they become symmetric. Equations 18 and 19 may then be rewritten:

F(u,v) = N^-1 Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) e^(-2πi(ux+vy)/N)   (20)

f(x,y) = N^-1 Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} F(u,v) e^(2πi(ux+vy)/N)   (21)

The placement of the constant multiplicative factors N^-1 is arbitrary.

2.4. Computation

The single dimension formulae, equations 16 and 17 above, require of the order of N^2 complex multiplications and additions for their evaluation. This number may be significantly reduced by avoiding the repetition of calculations. A method of reducing the number of calculations is shown in very brief outline here, chiefly to standardize the terminology used in the description of the imaging process. For a derivation of the method, known either as the Fast Fourier Transform or the Cooley-Tukey algorithm, see for instance Bracewell. The Fourier Transform of f(n), given in equation 16, may now be rewritten as

F(u) = Σ_{n=0}^{N-1} f(n) W_N^{un}   (22)

where W_N = e^(-2πi/N)

and a similar expression may be written for the inverse transform. Equation 22 may now be split into even and odd parts:

F(u) = Σ_{m=0}^{N/2-1} f(2m) W_{N/2}^{um} + W_N^u Σ_{m=0}^{N/2-1} f(2m+1) W_{N/2}^{um}   (23)

Note that this approach is applicable only if N is a power of 2, in which case the algorithm provides a significant speed improvement. In practice it may be necessary to extend (pad) the transformed matrix to meet this condition.

When we split the transform into these two components we may also reorganize it into two functions f1 and f2, representing the even and odd samples respectively. Since the factors W_N^u represent phase, W_N^{u+N/2} = -W_N^u, so

F(u) = F1(u) + W_N^u F2(u),  for u = 0, 1, ..., N/2 - 1   (24)

and:

F(u + N/2) = F1(u) - W_N^u F2(u),  for u = 0, 1, ..., N/2 - 1   (25)

Here F1 and F2 are the N/2 point discrete transforms of the f1 and f2 sequences respectively.

These last two calculations require (N/2)2 complex multiplications each. The second terms require a further N/2 complex multiplications. This method thus reduces the number of required operations by a factor of 2 for large values of N.

This same process may now be continued. If the total number of points is a power of two, then it may be carried on iteratively until one point transforms are calculated. Thus for N = 2^ν points, the splitting may be carried out ν = log2 N times, and the total number of complex multiplications is reduced to (N/2) log2 N.
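The decimation of equations 24 and 25 translates directly into a short recursive routine. This sketch (illustrative only, using the unnormalised convention of common FFT libraries) halves the problem at each stage:

```python
import numpy as np

def fft_radix2(f):
    """Recursive Cooley-Tukey FFT for N a power of two (unnormalised
    convention). Applies F(u) = F1(u) + W^u F2(u) and
    F(u + N/2) = F1(u) - W^u F2(u) at every stage."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f
    F1 = fft_radix2(f[0::2])                          # even-indexed samples
    F2 = fft_radix2(f[1::2])                          # odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^u
    return np.concatenate([F1 + W * F2, F1 - W * F2])

f = np.random.default_rng(0).standard_normal(16)
# agrees with the library FFT, using (N/2) log2 N twiddle multiplications
```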

This process may be extended to cover transformations in two dimensions.

2.5. Convolution

We will encounter Convolution frequently in the processing of images. The convolution operation provides the basis for the description of the formation of an image. A full description of the mathematical background is beyond the scope of this guide: for a clear introduction, the reader should look elsewhere, such as Bracewell.

The convolution of two functions of x is defined as

h(x) = ∫_{-∞}^{∞} f(u) g(x - u) du   (26)

which is more simply expressed by the notation

h(x) = f(x) * g(x)   (27)

Here u is a dummy variable of integration. An example of convolution is shown in FIG. 3.
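In discrete form the integral of equation 26 becomes a sum, which the following NumPy sketch evaluates directly and checks against a library routine (the data are arbitrary illustrative values):

```python
import numpy as np

def convolve(f, g):
    """Discrete form of equation 26: h(x) = sum_u f(u) g(x - u)."""
    h = np.zeros(len(f) + len(g) - 1)
    for x in range(len(h)):
        for u in range(len(f)):
            if 0 <= x - u < len(g):     # only terms where g is defined
                h[x] += f[u] * g[x - u]
    return h

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])        # a simple two-point smoothing function
h = convolve(f, g)              # -> [0.5, 1.5, 2.5, 1.5]
```

Note the smoothing: each output value averages two neighbouring inputs, which is exactly the blurring role h plays in the imaging equations above.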

3. Imaging Theory

In this section we look at the basis of the composition of an image. The relationship between an object and an image was previously briefly outlined. This may now be explored in more detail to see how to present the image in such a way as to reduce noise, highlight desired aspects and to remove any artifacts introduced by the imaging system. This is again presented in theoretical terms without direct reference to any specific imaging modality: the technology is described in Section 6.


FIG. 3 Example of convolution

3.1. System Transfer Function

An image provides us with a representation of an object. Let the object in space be f and its image g. The image is created by the imaging system as a representation of the object: if the transformation were perfect, then f = g. Real imaging systems never achieve perfection, however, so we are concerned with the changes they introduce. In the simplest cases these images are two dimensional representations of solid objects, in the manner of a photograph.

The simplified general relationship between the object and image is

g(x,y) = h(x,y) * f(x,y)   (28)

This is the convolution of the object f and a function of the imaging system, h. This function in this case is taken to be space invariant, or independent of the values of x and y. This may be expressed in the frequency domain as

G(u,v) = H(u,v) F(u,v)   (29)

where H is called the transfer function. In optical terminology H is called the optical transfer function, and its magnitude the modulation transfer function. Rather in the manner of an amplifier of an electronic signal, the transfer function defines the frequency response of the system. For the system to transfer the finest details of the object to the image, the transfer function should have sufficient bandwidth not to cause deterioration.
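The equivalence of equations 28 and 29, the convolution theorem, can be checked numerically. This sketch uses circular convolution, for which the discrete relationship is exact; the arrays are arbitrary illustrative data rather than a real object and PSF:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
f = rng.standard_normal((N, N))     # "object"
h = rng.standard_normal((N, N))     # "point spread function"

# Circular convolution evaluated directly in the spatial domain (equation 28)
g = np.zeros((N, N))
for x in range(N):
    for y in range(N):
        for m in range(N):
            for n in range(N):
                g[x, y] += f[m, n] * h[(x - m) % N, (y - n) % N]

# Equation 29: the spectrum of the image is the product of the two spectra
```

The direct sum costs N^4 operations here, while the frequency-domain product needs only the FFTs: this is why large convolutions are normally done in the frequency domain.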

When we are going to use computer enhancement of images, it is necessary to sample and digitize the image. For a discrete (or sampled) image, this relationship may be written as

g(x,y) = Σ_m Σ_n h(x-m, y-n) f(m,n)   (30)

If we also assume that the object was itself digital, then we may write

g(x,y) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} h(x-m, y-n) f(m,n)   (31)

which indicates the potential amount of calculation required to restore an image in the real space domain.

Now consider equation 29 in a discrete form, and rearrange it as the inverse transform:

f(x,y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} [G(u,v)/H(u,v)] e^(2πi(ux+vy)/N)   (32)

We have attempted in this equation to recover the object from the image G and the system transfer function H. It should be clear that equation 32 indicates the danger of attempting directly to recover the best interpretation of the object from the image by simple deconvolution.

We can clarify this point after introducing a noise term in the next equation.

The system noise may be represented by introducing a further term n into the description of the image and object relationship:

g(x,y) = h(x,y) * f(x,y) + n(x,y)   (33)

The noise term describes the variability between different images obtained from the same object using the same imaging system.

This equation does not have a unique solution. Rather, we may only achieve a best fit relationship between image and object. If we apply the same technique as in equation 32 to obtain a solution, there would probably be regions of H which are close to zero. These could easily coincide with regions in which the numerator was non-zero, allowing the 'improved' image to become dominated by noise.
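This danger can be made concrete. In the following one dimensional NumPy sketch (our own illustration, with an arbitrary smoothing PSF), direct deconvolution G/H is exact in the absence of noise, but a small noise term is amplified enormously wherever |H| is close to zero:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 63
f = np.zeros(N)
f[N // 2] = 1.0                        # ideal object: a single point source

h = np.array([0.25, 0.5, 0.25])        # smoothing PSF (arbitrary choice)
H = np.fft.fft(h, N)                   # transfer function: |H| ~ 0 at high frequency

# Without noise, direct deconvolution G/H recovers the object exactly
G_clean = H * np.fft.fft(f)
f_exact = np.real(np.fft.ifft(G_clean / H))

# With a little noise (equation 33), the term n/H explodes wherever |H| is
# tiny, and the "restored" object is dominated by amplified noise
G = G_clean + np.fft.fft(0.01 * rng.standard_normal(N))
f_naive = np.real(np.fft.ifft(G / H))
```

Here the restoration error of f_naive exceeds the entire signal, even though the added noise was only one per cent of the object's peak value.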

We now express equation 33 in rather different terms. Let us consider instead that the term f is an ideal image. Then the degraded image is

g(x,y) = H[f(x,y)] + n(x,y)   (34)

This equation rather more clearly describes the operation of obtaining an image as a two stage process, whereby the ideal image f is processed by the degradation operator H and subsequently by the additive noise term n to produce a degraded image g.

In our following analysis, we are going to assume that the degradation process is linear, homogeneous and space invariant. If we simplify by assuming that the noise is zero, then, recalling equation 1, linearity requires

H[a f1(x,y) + b f2(x,y)] = a H[f1(x,y)] + b H[f2(x,y)]   (35)

The noise free solution to the system described by equation 34 is space invariant if

H[f(x - α, y - β)] = g(x - α, y - β)   (36)

for any α and β. These expressions form the basis of the description of the image recovery process which we describe in the following section.

3.2. Image Restoration

Image Restoration is the process of recovering or improving an image that is in some sense degraded. The physical system which obtained the image we have already observed will have a limited bandwidth and thus will reduce the image's fine detail. It will also introduce noise to the image. It is our purpose in the first place to attempt to obtain an image which is the best estimate of the object from which it was obtained. Expressed in slightly different terms, recall the degradation model outlined in the previous section: we will be seeking to obtain from the degraded image a best estimate of the ideal image.

Returning to equation 4, the non-degraded image may be described in terms of its convolution with the two dimensional impulse function:

f(x,y) = ∫∫_{-∞}^{∞} f(α,β) δ(x-α, y-β) dα dβ   (37)

Using the noiseless description of the system given in equation 34 above, we may substitute for f:

g(x,y) = H[ ∫∫_{-∞}^{∞} f(α,β) δ(x-α, y-β) dα dβ ]   (38)

As the system is linear, it is additive and homogeneous, so H may be taken inside the integral:

g(x,y) = ∫∫_{-∞}^{∞} f(α,β) H[δ(x-α, y-β)] dα dβ   (39)

We may define the term

h(x, α, y, β) = H[δ(x-α, y-β)]   (40)

which is called the system impulse response. This may be substituted in the integral above to form

g(x,y) = ∫∫_{-∞}^{∞} f(α,β) h(x, α, y, β) dα dβ   (41)

Equation 41 is known as the Fredholm Integral, which states that if the system impulse response is known, then the response to any input f(α,β) may be calculated. Alternatively, this says that the system impulse response completely characterizes a system.

If the system is space invariant, then equation 41 reduces to the convolution integral:

g(x,y) = ∫∫_{-∞}^{∞} f(α,β) h(x-α, y-β) dα dβ   (42)

In a discrete form, equation 42 may be written as

g(x,y) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m,n) h(x-m, y-n)   (43)

This may be expressed more compactly in a matrix representation, in which H is the degradation operator introduced in equation 34:

g = Hf   (44)

where g and f are column vectors formed by stacking the rows of the M x N functions g(x,y) and f(x,y), and H is a matrix of dimension MN x MN. We now reintroduce the noise term n from equation 33, again in a vector form of the same dimension as g:

g = Hf + n

and the best estimate image f̂ is defined by multiplying by the inverse operator H^-1:

f̂ = H^-1 g = H^-1 H f + H^-1 n   (45)

or more simply, the best estimate image depends on the idealized image f, and on the noise term n, which may have been multiplied by components of the inverse system transfer function H^-1:

f̂ = f + H^-1 n   (46)

We should also note here that the matrix H^-1 may not exist.

Clearly another way has to be found to recover the image other than by direct deconvolution.

In the first place it is rational to try to minimize the effect of noise by using the standard method of least squares. Then we minimize

‖g - Hf̂‖²   (47)

which is defined as

‖g - Hf̂‖² = (g - Hf̂)ᵀ(g - Hf̂)   (48)

where the operator ᵀ transposes the matrix.

Now minimize equation 47 by setting its differential with respect to f̂ to zero:

0 = -2Hᵀ(g - Hf̂)   (49)

so that

HᵀH f̂ = Hᵀ g   (50)

f̂ = (HᵀH)^-1 Hᵀ g = H^-1 g   (51)

This is unfortunately the result previously obtained in equation 32, which caused noise amplification and was therefore of no use.

This problem may be resolved by using a modified technique, which instead of attempting to recover the image itself, seeks to find a best modified form of the image.

We will seek to minimize a function of the form ‖Qf̂‖² under the constraints imposed in equation 47, where Q is a linear operator which operates on f̂. Applying the constraint ‖g - Hf̂‖² = ‖n‖², we may write the function

J(f̂) = ‖Qf̂‖² + α(‖g - Hf̂‖² - ‖n‖²)   (52)

In this equation, α is known as a Lagrange Multiplier. Differentiating this function with respect to f̂ and then minimizing by setting the differential to 0 yields

f̂ = (HᵀH + γ QᵀQ)^-1 Hᵀ g   (53)

Here we have substituted γ = 1/α for clarity. If γ is set to zero, this equation reverts to the direct deconvolution we examined earlier.

The operator Q may be selected now to enable values of f to be obtained which are not dominated by noise in parts of the spectrum. This technique is explored in more detail by Gonzalez (1992).
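A hedged sketch of this idea, for a one dimensional circulant system in which the constrained solution reduces to a pointwise division in the frequency domain; the choice of Q as a discrete Laplacian (a smoothness constraint) and the value of the multiplier are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 63
f = np.zeros(N)
f[10:20] = 1.0                            # ideal image: a bright strip

h = np.array([0.25, 0.5, 0.25])           # smoothing PSF
H = np.fft.fft(h, N)
G = H * np.fft.fft(f) + np.fft.fft(0.02 * rng.standard_normal(N))  # degraded image

# Constrained least squares restoration in the frequency domain, with Q a
# discrete Laplacian and gamma the reciprocal of the Lagrange multiplier
Q = np.fft.fft(np.array([1.0, -2.0, 1.0]), N)
gamma = 0.05
F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(Q) ** 2)
f_hat = np.real(np.fft.ifft(F_hat))

# Setting gamma to zero gives direct deconvolution, which noise dominates
f_naive = np.real(np.fft.ifft(G / H))
```

Where |H| is small, the γ|Q|² term dominates the denominator and the noisy bins are suppressed rather than amplified; the price is some loss of fine detail, controlled by the size of γ.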

4. Image Processing

The following sections outline several forms of 'improvement' which are undertaken to clarify aspects of images. This treatment is far from exhaustive: the interested reader is referred to specialized texts on image processing, such as Gonzalez (1992).

4.1. Enhancement

An image is enhanced to draw out certain characteristics that would otherwise be unclear. It is therefore different in principle from restoration, and is involved with the image itself rather than the manner in which it was produced. The sorts of enhancement discussed here involve filtering and contrast enhancement which are used to alter the emphasis on particular image features. In a medical context these improvements are undertaken to improve the diagnostic features of the image.

The methods are therefore not general: the particular form of enhancement that is indicated for a form of problem is likely to be specific to that problem and imaging modality. The major groups of methods employed are those which operate in the Spatial and Frequency domains respectively.

The first forms of spatial enhancement methods are those which operate on points in the image. These transform the grey level of points. This may be desirable for several reasons.

The first is that the dynamic range of the image information may exceed that of the display medium. For example, there may be an image with components distributed over a range of 10^6, to be shown on an 8 bit display. This form of image may be displayed with greater clarity by logarithmic compression of its dynamic range. On the other hand, an image may show its features better if much of the grey scale information is hidden, by altering the display characteristics so that only a limited dynamic range, perhaps only two bits, is displayed. The threshold level for this transition needs to be selected to give the best view of the desired features.
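A minimal sketch of the logarithmic compression just described (the image values are hypothetical):

```python
import numpy as np

# Hypothetical image values spanning six decades, for an 8-bit display
img = np.array([1.0, 1e2, 1e4, 1e6])

# Logarithmic compression, rescaled so the full range maps onto 0-255
c = 255.0 / np.log10(img.max())
display = np.round(c * np.log10(img)).astype(np.uint8)
# display -> [0, 85, 170, 255]: each decade occupies an equal band of grey levels
```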

A more general form of point image manipulation is to use a histogram method. Here the starting point is to obtain the distribution densities in the image of each discrete grey level.

Transformations may be undertaken either to shift the overall distribution or to modify the image brightness. Alternatively the density distribution function may be expanded or contracted in order to modify the resulting image contrast.

Techniques are also employed to modify the whole image in order to equalize the histogram of its grey scale, to make it fit a defined pattern, or to modify selectively different areas of an image according to different recipes. These all have characteristics which lend themselves to particular situations.
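Histogram equalization, the first of the techniques just mentioned, can be sketched in a few lines of NumPy (the low-contrast input image here is synthetic):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalisation: map each grey level through the image's own
    cumulative distribution, spreading the levels over the available range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)   # grey-level look-up table
    return lut[img]

# Synthetic low-contrast image: values bunched into a narrow band (100-119)
rng = np.random.default_rng(4)
img = rng.integers(100, 120, size=(32, 32)).astype(np.uint8)

out = equalize(img)
# the equalised image now spans (nearly) the full 0-255 grey scale
```

The same look-up table mechanism serves the other point operations mentioned above; only the rule used to build the table changes.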

The other aspect of spatial processing of concern involves undertaking transforms based on the properties of an area rather than single points individually. These processing techniques use filters which may be used either to smooth out transitions (low pass filters) or to enhance contrast (high pass filters). If the filters are matrices populated by numbers used to multiply the points in the image's matrix, and they are defined symmetrically about their centre, they in effect carry out the convolution operation directly. These filters are linear in operation.

Other spatial filters may be defined which are not linear. An example is the median filter. This examines the points in a neighborhood and replaces the central pixel's value with the median of the set. For example, a 3 x 3 filter takes a group of nine points and sorts their values into ascending order; the middle value found, the fifth, is used to replace the value of the central pixel. This technique quickly removes high noise peaks in an image, as their values are excluded from being the median. It has the distinct advantage over smoothing filters, which carry out a low-pass filtering operation, that it does not directly affect edge sharpness and thus lose detail.
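A direct sketch of the 3 x 3 median filter described above, showing both the removal of an isolated noise spike and the preservation of a sharp edge (the test image is synthetic):

```python
import numpy as np

def median_filter(img):
    """3 x 3 median filter: each interior pixel is replaced by the median
    (the fifth of the nine sorted neighbourhood values)."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

# A flat image with a sharp vertical edge and one isolated noise spike
img = np.full((7, 7), 10.0)
img[:, 4:] = 200.0          # edge between columns 3 and 4
img[2, 1] = 255.0           # noise spike

out = median_filter(img)
# the spike is removed entirely, while the edge stays exactly in place
```

A 3 x 3 mean filter applied to the same image would smear the spike into its neighbours and soften the edge, which is precisely the behaviour the median filter avoids.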

Filtering may also be carried out in the frequency domain. These may be defined either as ideal filters, or to have a particular cut off shape. Frequency domain filtering is attractive either when it is desired to match a spectrum exactly, or when the image data are already held in the frequency domain for other reasons.

4.2. Resolution

The resolution of an image is defined as its ability to enable two closely spaced points to be distinguished. It is therefore specified as the maximum spatial frequency in the object which may be recognized in the image. This is clearly a function of both frequency and contrast.

The primary factor which influences the resolution of X ray images is therefore the sampling frequency. For tomographic scanners this is due to the number of samples taken when rotating the imaging system around the patient. In gamma camera or other scintillation counter systems, the size and number of sampling points defines the resolution. For a formal definition of the term 'resolution', see for example Longhurst (1967). Thus for a tomographic image, the resolution is traded against the total X ray dose given to the patient. The more samples taken, the greater the overall dose given, but the better the spatial frequency information.

4.3. Blur Removal

It is possible to remove some of the movement artifacts from digitally stored images which are likely when the image sampling time extends beyond a period of about a tenth of a second.

It is likely that many medical images suffer from such artifacts; some of the more modern imaging equipment is able to assist in their removal.

4.4. Image Analysis

A further topic to be considered is image analysis. The term relates to the examination of the image to identify its structural components. The image may be segmented, by examining it to locate edges and thus other geometric shapes. These are the building blocks used subsequently for recognition of the structures in images. This morphological analysis is able to detect structures such as text characters in images from an assemblage of pictorial regions.

The analysis of regions requires the application of Knowledge Based System techniques.

Image structures must be compared with known structural forms. In a medical application, we should know which structure we are looking for, but it may be desirable to use an automatic system to attempt to quantify some aspect of the information stored in an image. In many areas of image analysis the objective is to make a ready and quick assessment of a pattern. In medical analysis, rather the reverse is the case, as it may be desirable to isolate the unrecognized or abnormal so that it may be enhanced and more clearly displayed.

A serious limitation remains: the recognition of features is difficult in noisy images, and improving the signal to noise ratio of medical images is hard. Increased exposure is required to obtain an improvement, potentially leading to greater doses than would otherwise be required; this raises the ethical question of when to undertake radiological examination (see Section 8). Lengthening the duration of exposure is likely to increase movement artifacts, potentially obliterating the improvement obtained in the physical image signal to noise ratio.

Some recent imaging equipment uses these techniques to recognize structures, such as the chambers of the heart so that the stroke volume may be measured by imaging technology rather than using an invasive technique.
