RESEARCH BACKGROUND
2.1 Overview
It is important to begin by explaining the difference between digital image processing and digital image analysis. Image processing can be thought of as a transformation that takes an image into an image: starting from an image, a modified (enhanced [65], [66]) image is obtained. Digital image analysis, on the other hand, transforms an image into something other than an image: it produces information representing a description or a decision.
The purpose of digital image processing is threefold: to improve the appearance of an image to a human observer, to extract from an image quantitative information that is not readily apparent to the eye, and to calibrate an image in photometric or geometric terms. Image processing is an art as well as a science; it is a multidisciplinary field that contains elements of photography, computer technology, optics, electronics, and mathematics. This dissertation proposes segmentation as an effective way to achieve a variety of low-level image processing tasks, one of which is classification. The focus of research into segmentation is to determine logic rules or strategies that accomplish acceptably accurate classification with as little interactive analysis as possible.
2.2 Digital Images
A digital image is composed of pixels, which can be thought of as small dots on the screen, and it specifies how each pixel is to be colored. In general, an image is of size m-by-n if it is composed of m pixels in the vertical direction and n pixels in the horizontal direction. In the RGB color system, a color image consists of three individual component images (red, green, and blue). For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually, as sketched below.
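As a minimal illustration of channel-wise processing (a sketch assuming NumPy, which the text does not prescribe; the contrast-stretch helper is a hypothetical example operation):

    import numpy as np

    def process_channels(rgb, mono_op):
        """Apply a monochrome operation to each channel of an m-by-n-by-3 RGB image."""
        # Process the R, G, and B component images individually, then restack them.
        return np.stack([mono_op(rgb[:, :, c]) for c in range(3)], axis=2)

    def stretch(channel):
        """Example monochrome operation: stretch contrast to the full [0, 255] range."""
        channel = channel.astype(np.float64)
        lo, hi = channel.min(), channel.max()
        return ((channel - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

    rgb = (np.random.rand(64, 64, 3) * 128).astype(np.uint8)  # synthetic test image
    enhanced = process_channels(rgb, stretch)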
2.3 Grayscale Images
A grayscale image contains no red, green, or blue color information; each pixel holds a single intensity value lying on the range of shades between black and white. To represent this single range, only one color channel is needed, so an m-by-n grayscale image can be stored as a two-dimensional m-by-n matrix. A common way to obtain a grayscale image from an RGB one is sketched below.
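A minimal sketch of such a conversion, assuming NumPy and the common ITU-R BT.601 luminance weights (the text itself does not fix a weighting):

    import numpy as np

    def rgb_to_gray(rgb):
        """Collapse an m-by-n-by-3 RGB image to a single m-by-n intensity channel."""
        # Weighted sum of the three channels; an unweighted mean would also
        # serve as a rough conversion.
        weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luminance weights
        return (rgb[..., :3].astype(np.float64) @ weights).astype(np.uint8)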
2.4 Image Segmentation
There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another.
Image segmentation plays an important role in medical image processing. The goal of segmentation is to extract one or several regions of interest in an image. Depending on the context, a region of interest can be characterized by a variety of attributes, such as gray level, contrast, texture, shape, and size. Selection of good features is the key to successful segmentation. There is a variety of techniques for segmentation, ranging from simple ones such as thresholding to more complex strategies including region growing, edge detection, morphological methods, and artificial neural networks. Image segmentation can be considered a clustering process in which pixels are classified into specific regions based on their gray-level values and spatial connectivity; a minimal thresholding example of this view follows.
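The sketch below illustrates this clustering view under stated assumptions: pixels are classified by a hypothetical global threshold, and SciPy's ndimage.label then groups them by spatial connectivity. The threshold value and the random test image are placeholders only.

    import numpy as np
    from scipy import ndimage

    def threshold_segment(gray, thresh):
        """Segment by gray level, then group pixels by spatial connectivity."""
        mask = gray > thresh                     # classify pixels by intensity
        labels, n_regions = ndimage.label(mask)  # connected-component regions
        return labels, n_regions

    gray = (np.random.rand(128, 128) * 255).astype(np.uint8)  # placeholder image
    labels, n = threshold_segment(gray, 200)
    print(f"{n} connected regions above the threshold")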
Ideally, a good segmenter should produce regions that are uniform and homogeneous with respect to some characteristic such as gray tone or texture, yet simple and without many small holes. Further, the boundaries of each segment should be spatially accurate yet smooth, not ragged. Finally, adjacent regions should have significantly different values with respect to the characteristic on which region uniformity is based. There are two kinds of segmentation:
• Complete segmentation: results in a set of disjoint regions corresponding uniquely with objects in the input image.
o For complex scenes, cooperation with higher processing levels that use specific knowledge of the problem domain is necessary.
• Partial segmentation: regions do not correspond directly with image objects.
o The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, or texture.
o In a complex scene, a set of possibly overlapping homogeneous regions may result. The partially segmented image must then be subjected to further processing, and the final image segmentation may be found with the help of higher-level information.
Image segmentation involves three principal concepts: detection of discontinuities (e.g., edge-based methods), thresholding (e.g., based on pixel intensities), and region processing (e.g., grouping similar pixels).
Segmentation methods can be divided into three groups according to the dominant features they employ.
• The first group uses global knowledge about an image or its part; the knowledge is usually represented by a histogram of image features.
• Edge-based segmentations form the second group.
• Region-based segmentations form the third.
It is important to mention that:
• There is no universally applicable segmentation technique that will work for all images.
• No segmentation technique is perfect.
2.5 Edge-Based Segmentation
Edge-based segmentation schemes take local information into account, but do so relative to the contents of the image rather than an arbitrary grid. Each of the methods in this category involves finding the edges in an image and then using that information to separate the regions. In the edge detection technique, local discontinuities are detected first and then connected to form complete boundaries.
Edge detection is usually done with local linear gradient operators, such as the Prewitt [PM66], Sobel [Sob70], and Laplacian [GW92] filters. These operators work well for images with sharp edges and low amounts of noise; for noisy, busy images they may produce false and missing edges. The detected boundaries may not form a set of closed connected curves, so an edge-linking step may be required [Can86]. A minimal gradient-based detector is sketched below.
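A minimal gradient-operator sketch, assuming SciPy's ndimage module (the text names the operators but no implementation); the threshold is a hypothetical parameter:

    import numpy as np
    from scipy import ndimage

    def sobel_edges(gray, thresh):
        """Binary edge map from the Sobel gradient magnitude."""
        gray = gray.astype(np.float64)
        gx = ndimage.sobel(gray, axis=1)  # horizontal derivative estimate
        gy = ndimage.sobel(gray, axis=0)  # vertical derivative estimate
        magnitude = np.hypot(gx, gy)      # gradient magnitude at each pixel
        # Thresholding yields edge pixels; the result may still contain the
        # false or broken edges discussed above, so edge linking may follow.
        return magnitude > thresh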
The result of applying an edge detector to an image is a set of connected curves that indicate the boundaries of objects and of surface markings, as well as curves that correspond to discontinuities in surface orientation. Applying an edge detector may thus significantly reduce the amount of data to be processed, filtering out information that may be regarded as less relevant while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may be substantially simplified.
2.6 Edge: What Is It?
• Edge detectors are a collection of very important local image pre-processing methods used to locate (sharp) changes in the intensity function.
• Edges are pixels where the brightness function changes abruptly.
• Calculus describes changes of continuous functions using derivatives.
• An image function depends on two variables — co-ordinates in the image plane — so operators describing edges are expressed using partial derivatives.
• A change of the image function can be described by a gradient that points in the direction of the largest growth of the image function.
• An edge is a (local) property attached to an individual pixel and is calculated from the image function in a neighborhood of the pixel.
• An edge is a vector variable with two components:
o the magnitude of the gradient;
o the direction φ, rotated with respect to the gradient direction ψ by -90° (see the formulas following this list).
• The gradient direction gives the direction of maximal growth of the image function, e.g., from black to white.
• This is illustrated below; closed contour lines are lines of the same brightness; the orientation 0° points East.
• Edges are often used in image analysis for finding region boundaries.
• A boundary occurs at pixels where the image function varies, and consists of pixels with high edge magnitude.
• Boundary and its parts (edges) are perpendicular to the direction of the gradient.
• The following figure shows several typical standard edge profiles; roof edges are typical for objects corresponding to thin lines in the image.
• Edge detectors are usually tuned for some type of edge profile.
• Sometimes we are interested only in the magnitude of the change, without regard to its orientation.
• A linear differential operator called the Laplacian may be used.
• The Laplacian has the same properties in all directions and is therefore invariant to rotation in the image.
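For reference, the quantities named in this list can be written explicitly for a continuous image function f(x, y); these are the standard textbook definitions rather than anything specific to this dissertation:

    % Gradient magnitude and gradient direction of the image function f(x, y)
    \left|\nabla f\right| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2}
        + \left(\frac{\partial f}{\partial y}\right)^{2}},
    \qquad
    \psi = \arctan\!\left(\frac{\partial f/\partial y}{\partial f/\partial x}\right)

    % Edge direction, rotated by -90 degrees with respect to the gradient direction
    \varphi = \psi - 90^{\circ}

    % Rotation-invariant Laplacian, used when only the magnitude of change matters
    \nabla^{2} f = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}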
2.7 Approaches to Edge Detection
There are many methods for edge detection, but most can be grouped into two categories: search-based and zero-crossing based. Search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local edge orientation, usually the gradient direction. Zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image, usually the zero crossings of the Laplacian or of a non-linear differential expression, as shown in Figure 2.1:
Figure 2.1: Edge finding based on the zero crossing as determined by the second derivative, the Laplacian. The curves are not to scale.
As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied. Published edge detection methods differ mainly in the types of smoothing filters applied and in the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions. A zero-crossing detector can be sketched in a few lines, as shown below.
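A minimal zero-crossing sketch, assuming SciPy's Laplacian-of-Gaussian filter; the value of sigma and the simple sign-flip test between neighbors are illustrative choices, not the method of any particular published detector:

    import numpy as np
    from scipy import ndimage

    def log_zero_crossings(gray, sigma=2.0):
        """Mark zero crossings of the Laplacian-of-Gaussian response."""
        log = ndimage.gaussian_laplace(gray.astype(np.float64), sigma=sigma)
        # A zero crossing lies where the sign of the second-derivative
        # response flips between a pixel and its right or lower neighbor.
        edges = np.zeros(log.shape, dtype=bool)
        edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
        edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
        return edges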
2.8 Scale-Space Theory
Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing, and signal processing communities, with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales: an image is represented as a one-parameter family of smoothed images, the scale-space representation, parameterized by the size of the smoothing kernel used to suppress fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away at scale t.
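A minimal sketch of such a family, assuming SciPy's Gaussian filter and the usual convention that t equals sigma squared, from which the interpretation involving √t follows:

    import numpy as np
    from scipy import ndimage

    def scale_space(gray, scales):
        """One-parameter family of smoothed images L(x, y; t)."""
        gray = gray.astype(np.float64)
        # Convolving with a Gaussian of standard deviation sqrt(t) suppresses
        # structures of spatial size smaller than about sqrt(t) at level t.
        return {t: ndimage.gaussian_filter(gray, sigma=np.sqrt(t)) for t in scales}

    levels = scale_space(np.random.rand(64, 64) * 255, scales=[1, 4, 16, 64])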
2.9 Scale-Space Edge Detector
The primary goal of this edge detector is to delineate paths that correspond to the physical boundaries and other features of the image's subject. The detector implements an edge definition developed in Lindeberg (1998) that provides automatic scale selection:
1) The gradient magnitude is a local maximum in the direction of the gradient.
2) A normalized measure of the strength of the edge response (for example, the gradient weighted by the scale) is a local maximum over scales.
The first condition is a well-established technique (Canny, 1986). Accounting for edge information over a range of scales (multi-scale edge detection) can be approached in two ways: the appropriate scale(s) at each point can be evaluated, or edge information over a range of scales can be combined. The above definition takes the first approach, where the appropriate scale(s) means the scale(s) at which maximal information about the image is present. In this sense it is an adaptive filtering technique: the edge operator is chosen based on the local image structure, as in the sketch below.
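The sketch below illustrates per-pixel scale selection in this spirit, assuming SciPy; the gamma-normalized gradient magnitude t^(gamma/2)|∇L| with gamma = 1/2 follows Lindeberg's proposal for edges, while the scale list and the Sobel derivative estimates are illustrative choices:

    import numpy as np
    from scipy import ndimage

    def select_edge_scales(gray, scales, gamma=0.5):
        """Pick, per pixel, the scale maximizing the normalized edge response."""
        gray = gray.astype(np.float64)
        responses = []
        for t in scales:
            smoothed = ndimage.gaussian_filter(gray, sigma=np.sqrt(t))
            gx = ndimage.sobel(smoothed, axis=1)
            gy = ndimage.sobel(smoothed, axis=0)
            # Gamma-normalization makes responses comparable across scales.
            responses.append(t ** (gamma / 2) * np.hypot(gx, gy))
        best = np.argmax(np.stack(responses), axis=0)  # index of maximizing scale
        return np.take(np.asarray(scales, dtype=np.float64), best)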
The ss-edges implementation iteratively climbs the scale-space gradient until an edge point is located, then iteratively steps perpendicular to the gradient along a segment, repeating the gradient climb, to extract a fully ordered edge segment. The advantages of the ss-edge detector over a global search method are its economical use of computational resources, the flexibility of choosing the search space, and the flexibility of specifying the details of its edge finding and following. Fig. 2.2 shows the edge detection result for the "Third Degree burn" image.
Figure 2.2: Edge detection result for the "Third Degree burn" image.