Gradient-threshold edge detection based on the human visual system
Fuzheng Yang, Yilin Chang, Shuai Wan
Abstract
We present an improved method suitable for gradient-threshold edge detectors. The method takes the basic characteristics of the human visual system into account and masks the gradient image with the local luminance and local activity before edge labeling. An implementation of this method on a Canny detector is described as an example. The results show that the edge images obtained by our algorithm are more consistent with the perceived edge images.

Introduction

Edge detection plays an important role in various areas of image analysis and computer vision. Gradient-threshold-based edge detectors, such as the Sobel, Prewitt, and Canny detectors,1,2 are widely used. Usually, a global threshold for edge labeling is used in gradient-threshold edge detection. Although improvement has been achieved by local threshold techniques,3 the characteristics of the human visual system (HVS) are not well considered in existing gradient-threshold edge detectors.

This paper proposes an improved method based on the HVS that is suitable for gradient-threshold edge detectors. Taking HVS characteristics into account, we mask the gradient image with the local luminance and local activity before edge labeling. The edges detected by this method are much more consistent with the perceived edges than those detected by traditional methods.

Improved Method

Like other gradient edge detectors, the proposed algorithm first smoothes and differentiates the image to obtain the gradient image grad(x,y). The gradient image is then masked with the local luminance and local activity, and the masked gradient image is finally used for labeling edges. Because the HVS is extremely complex and not yet fully understood,4 this paper focuses on the basic HVS characteristics of luminance masking and activity masking. For example, let the gradient at (x*,y*) be grad(x*,y*). Then grad(x*,y*) is masked as follows.

First, if the absolute value of grad(x*,y*) is smaller than a given threshold T, the final masked gradient gradm(x*,y*) is set to zero and (x*,y*) is directly determined to be a nonedge pixel. Otherwise, grad(x*,y*) is masked as described below.
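As a minimal sketch of this pre-check in Python/NumPy (the function name and the array-based formulation are ours, not from the paper):

```python
import numpy as np

def prethreshold(grad, T):
    """Pre-check before masking: pixels whose gradient magnitude falls below
    the threshold T are declared non-edge pixels by zeroing their gradient."""
    grad_m = grad.copy()
    grad_m[np.abs(grad) < T] = 0.0
    return grad_m
```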

The response of the HVS depends much less on the absolute luminance than on the relation of its local variations to the surrounding luminance.4 The contrast sensitivity of the HVS remains nearly constant over a wide range of intensities. Hence the luminance masking of the gradient is modeled by

Eq. (1)

$$\mathrm{grad}_b(x^*,y^*)=\frac{\mathrm{grad}(x^*,y^*)}{a_1+\left(\dfrac{|b(x^*,y^*)-b_0|}{b_0}\right)^{r_1}},$$
where gradb(x*,y*) denotes the gradient masked with luminance, b0 is the average luminance of the entire image, and b(x*,y*) is the local average luminance of the N×N region centered on (x*,y*), as shown in Fig. 1(a). The parameters a1 and r1 are estimated experimentally. Equation (1) captures the contrast sensitivity characteristic and the luminance adaptation capability of the HVS.
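A minimal sketch of Eq. (1) in Python/NumPy, assuming the parameter values reported later in the paper (a1=1.5, r1=2, N=8) and using scipy.ndimage.uniform_filter for the N×N local mean; the helper name luminance_mask is ours, not the authors':

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_mask(grad, image, a1=1.5, r1=2, N=8):
    """Luminance masking of Eq. (1): attenuate the gradient where the local
    mean luminance b(x, y) departs from the global mean luminance b0."""
    image = image.astype(float)
    b0 = image.mean()                      # average luminance of the entire image
    b = uniform_filter(image, size=N)      # local average luminance over an N x N window
    return grad / (a1 + (np.abs(b - b0) / b0) ** r1)
```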

Fig. 1

Regions for calculating local luminance and local activity: (a) for local luminance; (b) for local activity.


The visibility of a stimulus is affected by the presence of other stimuli in a surrounding spatial region of limited extent.4 Therefore the perception of edges is also strongly influenced by the activity of the local background. The gradient masked with luminance is further masked with activity by

Eq. (2)

$$\mathrm{grad}_m(x^*,y^*)=\frac{\mathrm{grad}_b(x^*,y^*)}{a_2+\left(\dfrac{m(x^*,y^*)}{m_0}\right)^{r_2}},$$
where m0 is the average activity of the image, and the parameters a2 and r2 are selected to match the variation of human visual sensitivity with the background activity. To compute the local activity m(x*,y*), we suppose that the pixel at (x*,y*) lies on an edge that is perpendicular to the direction of grad(x*,y*) and is (E1+E2) pixels wide, as shown in Fig. 1(b), where the direction of grad(x*,y*) is denoted by the arrowhead. Let (x1,y1) and (x2,y2) be the first pixels whose gradient values are smaller than λ⋅gradb(x*,y*) along and against the direction of grad(x*,y*), respectively. If the distances from (x*,y*) to (x1,y1) and (x2,y2) are denoted by di (i=1,2), then Ei is defined by

Eq. (3)

$$E_i=\min(d_i,4)+\delta,\qquad i=1,2,$$
where δ is a parameter related to the smoothing filter. The surrounding spatial regions R1 and R2 (of size W×S) adjacent to the edge are used for calculating the local activity, as shown in Fig. 1(b). The mean gradient values of R1 and R2 are computed to obtain m1(x*,y*) and m2(x*,y*), respectively, and the smaller of the two is taken as the local activity m(x*,y*).
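The following sketch applies the activity masking of Eq. (2). It does not reproduce the edge-width estimation (Ei) or the flanking W×S regions R1 and R2 of Fig. 1(b); when no activity map is supplied it falls back to a crude stand-in, the local mean gradient magnitude over an N×N window, so treat it only as an approximation of the region-based definition of m(x*,y*) given above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def activity_mask(grad_b, grad, m=None, a2=0.5, r2=2, N=8):
    """Activity masking of Eq. (2): attenuate the luminance-masked gradient
    where the local background activity m(x, y) is high relative to the
    image-wide average activity m0."""
    if m is None:
        # Crude stand-in for the region-based activity of Fig. 1(b):
        # local mean gradient magnitude over an N x N neighbourhood.
        m = uniform_filter(np.abs(grad), size=N)
    m0 = m.mean()                          # average activity of the whole image
    return grad_b / (a2 + (m / m0) ** r2)
```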

The edges can then be labeled on the masked gradient image gradm(x,y) using the threshold method.

Experimental Results

We give a simple implementation of the improved algorithm on the Canny edge detector.2

Our improved Canny algorithm is based on the MATLAB function edge(image, 'canny'), in which the high threshold Th is chosen to exceed the gradient values of 80% of all pixels, the low threshold Tl is set to 0.5Th, and the standard deviation of the Gaussian filter is σ=1. We mask the gradient images with luminance and activity to obtain gradm(x,y), and then apply nonmaximum suppression and edge labeling on gradm(x,y) to obtain the edge images. In our implementation, the parameters are set as follows: N=8, W=2, S=4, λ=0.618, r1=2, a1=1.5, r2=2, a2=0.5, T=0.8Tl, and δ=2. These values were determined by threshold experiments and are of general use.
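Putting the pieces together, a rough end-to-end sketch of the modified pipeline, reusing luminance_mask and activity_mask from the sketches above. For brevity it omits the nonmaximum suppression step of the Canny detector, computes all thresholds as percentiles of the masked gradient (the exact ordering used by the authors is not spelled out here), and uses scikit-image's apply_hysteresis_threshold for the final edge labeling:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from skimage.filters import apply_hysteresis_threshold

def hvs_masked_canny(image, sigma=1.0):
    """Simplified sketch: smoothed gradient -> luminance and activity masking
    -> pre-check with T = 0.8*Tl -> double-threshold (hysteresis) labeling."""
    image = image.astype(float)
    grad = gaussian_gradient_magnitude(image, sigma)    # smoothed gradient magnitude
    grad_m = activity_mask(luminance_mask(grad, image), grad)
    Th = np.percentile(grad_m, 80)                      # high threshold: 80th percentile
    Tl = 0.5 * Th                                       # low threshold
    grad_m = np.where(grad_m < 0.8 * Tl, 0.0, grad_m)   # pre-check threshold T = 0.8*Tl
    return apply_hysteresis_threshold(grad_m, Tl, Th)   # boolean edge map
```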

Figure 2 shows the original image and the edge images of "Basket." From Fig. 2(b), we can observe that the edges detected by the Canny detector are not consistent with our perceived edges. For example, the obtained edge map preserves most of the lawn texture but loses some details of the grids on the basket. In Fig. 2(c), however, most of the potentially undesirable texture is eliminated and the details of interest are preserved well, without significant clutter on the lawn. The results for the "Elephant" image, shown in Fig. 3, exhibit similar behavior.

Fig. 2

Experimental results of “Basket” image: (a) original image; (b) edge image obtained by Canny detector; (c) edge image obtained by the improved Canny detector.


Fig. 3

Experimental results of “Elephant” image: (a) original image; (b) edge image obtained by Canny detector; (c) edge image obtained by the improved Canny detector.


Conclusions

The edges detected by traditional edge detectors may be inconsistent with the perceived edges. We propose an improved method based on the HVS that is suitable for gradient-threshold edge detectors. The results show that the edge images obtained by our algorithm are more consistent with the perceived edge images. The performance of gradient-threshold edge detectors is much improved by this method, which is helpful for image segmentation, image analysis, and other applications.

REFERENCES

1. D. Ziou and S. Tabbone, "Edge detection techniques: an overview," Int. J. Pattern Recognit. Image Anal. 8(4), 537–559 (1998).

2. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).

3. A. Elmabrouk and A. Aggoun, "Edge detection using local histogram analysis," Electron. Lett. 34(12), 1216–1217 (1998).

4. S. Winkler, "Vision models and quality metrics for image processing applications," Ph.D. thesis, EPFL, Lausanne, Switzerland (2000).
© 2005 Society of Photo-Optical Instrumentation Engineers (SPIE)
Fuzheng Yang, Yilin Chang, and Shuai Wan "Gradient-threshold edge detection based on the human visual system," Optical Engineering 44(2), 020505 (1 February 2005). https://doi.org/10.1117/1.1850747