Edge Detection of an Image Based on Extended Difference of Gaussian

Abstract: Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction. It is also one of the most important parts of image processing, especially in determining image quality. There are many different techniques for evaluating the quality of an image. The most commonly used are pixel-based difference measures, which include peak signal to noise ratio (PSNR), signal to noise ratio (SNR), mean square error (MSE), structural similarity index measure (SSIM), normalized absolute error (NAE), etc. This paper studies edge detection using an extended difference of Gaussians filter applied to many different images of different sizes, and then measures image quality using PSNR, MSE, NAE and execution time in seconds.


Introduction
Machine vision has developed into a critical field embracing a wide range of applications, including robot assembly [1], traffic monitoring and control [2], biometric measurement [3], surveillance [4], analysis of remotely sensed images [5], automated inspection [6], vehicle guidance [7] and signature verification [8]. Work in this field continues to be documented by researchers and specialists. The early stages of vision processing identify features in images that are relevant for estimating the properties and structure of elements in the scene [9]. Edges are one such feature. An edge is a significant local change in the image [10] and an active component for analyzing images. It usually occurs at the boundary between two different regions in the image.
Edge detection is frequently the first stage in recovering information from images. Because of its importance, edge detection continues to be an active research area [11]. In the ideal case, applying an edge detector to an image produces a set of continuous curves that indicate the boundaries of objects [12], as well as curves that correspond to discontinuities in surface orientation. Applying an edge detection algorithm to an image may significantly reduce the amount of data, because it retains important information and discards information that is less important, while preserving the structural characteristics of the image. If the edge detection step is successful, interpreting the information content of the original image may therefore be substantially simplified [13]. However, real-life images of average complexity do not always yield perfect edges. Edges extracted from non-trivial images are often fragmented [14]; as a result, we get disconnected edges, missing edge segments, and false edges that do not correspond to interesting phenomena in the image. The edges extracted from a two-dimensional image of a three-dimensional scene are of two types: viewpoint dependent or viewpoint independent. A viewpoint-independent edge typically reflects inherent properties of the three-dimensional objects, such as surface shape and surface markings [15]. A viewpoint-dependent edge may change as the viewpoint changes, and typically reflects the geometry of the scene, such as objects occluding one another [16]. Edges appear in the areas of feature extraction and feature selection in computer vision. An edge detection algorithm takes a digital image as input and produces an edge map as output. Many detectors have been introduced in previous works, such as canny, log, zerocross, Sobel, approxcanny, Roberts and Prewitt [17][18][19][20][21][22][23].
The purpose of this research is to detect and improve the edge information present in the image, since this is important for understanding the image and the information it contains. This paper is organized as follows: section 2 reviews some of the classical methods of edge detection. Section 3 introduces the Gaussian filter, its theory and its function. The difference of Gaussians and its extension are presented in sections 4 and 5. Section 6 evaluates the effectiveness of our method when applied to many images of different sizes. Finally, section 7 concludes the paper.

Canny Edge Detector
Canny edge detection is a technique used to significantly reduce the amount of information processed while retaining only the important information. It has been widely applied in different computer vision systems. Canny found that the requirements for applying edge detectors are relatively similar across different vision systems.

Process of Canny Edge Detection Algorithm
The canny edge detection algorithm can be divided into five steps [24]:
Step 1. Apply a Gaussian filter to remove noise from the image and smooth it.
Step 2. Compute the intensity gradients of the image.
Step 3. Apply non-maxima suppression so that each edge produces a response on one side only.
Step 4. Use the double threshold method to discover edges and keep them connected.
Step 5. Finally, discard all weak edges that are not connected to strong edges.
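The five steps can be sketched in a minimal NumPy implementation. This is an illustrative simplification, not the paper's code: the kernel size, sigma and threshold fractions are assumed values, non-maximum suppression is quantized to only two directions, and hysteresis is a single pass.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Step 1 helper: 2D Gaussian kernel for noise removal."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same' convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def canny_sketch(img, low=0.1, high=0.3):
    # Step 1: Gaussian smoothing.
    smoothed = convolve2d(img, gaussian_kernel(5, 1.0))
    # Step 2: intensity gradients (Sobel operators).
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = convolve2d(smoothed, sx)
    gy = convolve2d(smoothed, sx.T)
    mag = np.hypot(gx, gy)
    # Step 3: non-maximum suppression, quantized to horizontal/vertical.
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            if abs(gx[i, j]) >= abs(gy[i, j]):      # mostly horizontal gradient
                nbrs = (mag[i, j - 1], mag[i, j + 1])
            else:                                    # mostly vertical gradient
                nbrs = (mag[i - 1, j], mag[i + 1, j])
            if mag[i, j] >= max(nbrs):
                nms[i, j] = mag[i, j]
    # Steps 4-5: double threshold, then keep weak pixels touching strong ones.
    t_high = nms.max() * high
    t_low = nms.max() * low
    strong = nms >= t_high
    weak = (nms >= t_low) & ~strong
    edges = strong.copy()
    for i in range(1, nms.shape[0] - 1):
        for j in range(1, nms.shape[1] - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                edges[i, j] = True
    return edges
```

Applied to a synthetic vertical step image, the sketch marks edge pixels along the intensity discontinuity.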

Log Filter
Edge detection based on the log filter consists of three steps. The first step is Gaussian filtering, the second is edge enhancement, and the third is edge detection. The Laplacian operator is used to enhance and detect the edges [25], but it is sensitive to noise. To reduce noise effects, the image is first smoothed, and then edges are detected with the Laplacian operator. The optimal smoothing filter should have good characteristics in both the spatial domain and the frequency domain.

Linear Filters
Let f(i, j), for i, j = 1, …, n, be the values of the pixels in the image, and assume the linear filter has size (2m + 1) × (2m + 1) with weights w(k, l), where k, l = −m, …, m. The filter output is given by

g(i, j) = Σ_{k=−m}^{m} Σ_{l=−m}^{m} w(k, l) f(i + k, j + l)

An image can also be represented, rather than as a matrix, as a synthesis of sine waves of different frequencies [26]. This is the Fourier representation: the coefficients that determine the sine waves are the Fourier coefficients. For engineers and some researchers this is obvious, but for others it may take some time to get used to.
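The linear filter equation can be illustrated with a small NumPy sketch. This is a hypothetical example (the 3×3 moving-average weights are an assumed choice, not from the paper), computed only on interior pixels where the window fits:

```python
import numpy as np

def linear_filter(f, w):
    """g(i,j) = sum_{k,l} w(k,l) * f(i+k, j+l), evaluated on the
    interior pixels where the (2m+1)x(2m+1) window fits entirely."""
    m = w.shape[0] // 2
    n_rows, n_cols = f.shape
    g = np.zeros((n_rows - 2 * m, n_cols - 2 * m))
    for i in range(m, n_rows - m):
        for j in range(m, n_cols - m):
            window = f[i - m:i + m + 1, j - m:j + m + 1]
            g[i - m, j - m] = np.sum(w * window)
    return g

# 3x3 moving-average weights: every pixel is replaced by the
# mean of its neighbourhood.
w_mean = np.full((3, 3), 1.0 / 9.0)
```

For a linear intensity ramp, the 3×3 mean of a neighbourhood equals the value of its center pixel, which gives a quick sanity check.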

Non Linear Edge Detection Filters
There are some simple, non-linear filters for detecting edges in all directions [27], such as:
1) The range filter, whose output for each window is the difference between the largest and smallest pixel values. This filter is simple and fast, and can be obtained by adapting the moving-average algorithm. Alternatively, it can be computed separably, using the property that the maximum and the minimum of a rectangular window can each be taken first along rows and then along columns.
2) The Roberts detector, defined as the sum of the absolute differences of diagonally opposite pixel values:

g(i, j) = |f(i, j) − f(i + 1, j + 1)| + |f(i + 1, j) − f(i, j + 1)|

3) The Kirsch filter, one of the template-matching filters, in which the filter output is the maximum response from a group of linear filters that are sensitive to edges in different directions. The first weight matrix is commonly taken as

W_1 =  -3  -3   5
       -3   0   5
       -3  -3   5

and each subsequent weight matrix W_2, …, W_8 has elements successively rotated through multiples of 45°. This is a much slower algorithm than the two previous methods. To minimize the impact of noise, gradient filters are among the best choices.
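The first two non-linear detectors above can be sketched directly in NumPy. This is an illustrative implementation (window sizes and boundary handling are assumed choices):

```python
import numpy as np

def range_filter(f):
    """Non-linear edge filter: output is max - min over each 3x3 window,
    evaluated on interior pixels only."""
    g = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            window = f[i - 1:i + 2, j - 1:j + 2]
            g[i - 1, j - 1] = window.max() - window.min()
    return g

def roberts(f):
    """Roberts detector: sum of the absolute differences of the two
    diagonally opposite pixel pairs in each 2x2 block."""
    return (np.abs(f[:-1, :-1] - f[1:, 1:]) +
            np.abs(f[1:, :-1] - f[:-1, 1:]))
```

On a vertical step image, both filters respond only where the window straddles the discontinuity and return zero in the flat regions.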

Zero -Crossing Edge Detector
The zero-crossing detector looks for places in the Laplacian of an image where the value of the Laplacian passes through zero. These are points where the intensity of the image changes rapidly; they may occur at the edges of objects, but they also occur at places that are not easy to associate with edges. We can therefore say that the zero-crossing detector is a feature detector and not only an edge detector. Zero crossings always lie on closed contours, so the output is a single-pixel binary image with lines showing the positions of the zero-crossing points. The zero-crossing edge detector starts with the image filtered by the Laplacian of Gaussian [28]. The results are strongly influenced by the size of the Gaussian used in the smoothing phase: as the standard deviation increases, fewer zero-crossing contours are found, and those that remain correspond to larger-scale structures in the image. In the LoG output, edges in the image give rise to zero crossings.
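A minimal zero-crossing test can be written by comparing the sign of each pixel of the LoG response with its right and lower neighbours. This is one simple convention among several (some implementations also check diagonals or require a minimum amplitude):

```python
import numpy as np

def zero_crossings(log_response):
    """Mark pixels where the LoG response changes sign against the
    pixel to the right or the pixel below."""
    z = np.zeros(log_response.shape, dtype=bool)
    sign = log_response > 0
    z[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
    z[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
    return z
```

For a response that flips sign once along a row, exactly the pixel before the flip is marked.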

Laplacian of Gaussian
The Laplacian is a two-dimensional isotropic measure of the 2nd spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. The image should first be smoothed with something like a Gaussian filter to reduce its sensitivity to noise, and then the Laplacian is applied. The operator normally takes a single gray-level image as input and produces another gray-level image as output.
The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by

L(x, y) = ∂²I/∂x² + ∂²I/∂y²

Because the image is represented as a set of discrete pixels, we must find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian. Two commonly used small kernels are shown in figure 2:

 0   1   0          1   1   1
 1  -4   1          1  -8   1
 0   1   0          1   1   1

Because these kernels approximate a second-derivative measurement of the image, they are very sensitive to noise. To counter this, the image is often Gaussian smoothed before applying the Laplacian filter. This pre-processing step is performed before the differentiation step because it reduces the high-frequency noise components.
Because the convolution operation is associative, we can convolve the Gaussian smoothing filter with the Laplacian filter first, and then convolve the resulting hybrid filter with the image to get the desired result. Working this way has two advantages: i. The Gaussian kernel and the Laplacian kernel are usually much smaller than the image, so this method requires far fewer calculations. ii. The LoG kernel can be calculated in advance, so only one convolution needs to be performed on the image at run time.

Gaussian Filter
The Gaussian filter is a windowed filter of the linear class [29]; in other words, it computes a weighted mean. Because the weights in the filter are calculated according to a Gaussian distribution, it is named after the famous scientist Carl Gauss. This filter is also known as Gaussian blur.

Gaussian Blur Theory
Blurring can be understood as replacing each pixel with the average value of its surrounding pixels. If we assume the center point has value 2 and the surrounding points have value 1, then after taking the average of its surroundings, the center point becomes 1. If every point is to take the average value of its surrounding points, how should we allocate the weights? Using a simple average is unreasonable: images are continuous, and the closer two points are in distance, the closer the relationship between them. Hence a weighted average is more logical than a simple average; the closer the points, the larger the weight.

Gaussian Function
The Gaussian function is a probability density function. Its 1-D form is

G(x) = (1 / (√(2π) σ)) · exp(−x² / (2σ²))

Thus the function is a negative exponential of a squared argument. The divisor σ in the argument plays the role of a scale factor. The parameter σ has a special name, the standard deviation, and its square σ² is the variance. The pre-multiplier in front of the exponent is chosen so that the area below the plot equals 1. The function is defined everywhere on the real axis, x ∈ (−∞, ∞), which means it spreads endlessly to the left and to the right. Two points follow. First, we are working in a discrete world, so the Gaussian distribution must be reduced to a set of values at discrete points. Second, we cannot work with something that has no end to the left and to the right, so the Gaussian must be truncated. The usual practice is the 3σ rule: the part of the Gaussian distribution that is used is x ∈ [−3σ, 3σ], as we see in figure 6.
Since G(x, y) = G(x) · G(y), the 2D distribution factors into a pair of 1D ones, and so the 2D filter window separates into a pair of 1D windows. In this case the filter is called separable. Practically, this means that to apply the filter to an image it is enough to filter it in the horizontal direction with the 1D filter and then filter the result in the vertical direction with the same filter. Which direction comes first makes no difference.
2D Gaussian separable algorithm:
i. Calculate the 1D window weights.
ii. Filter every image line as a 1D signal.
iii. Filter every column of the filtered image as a 1D signal.
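The separable algorithm can be sketched in NumPy as follows: a 1D window is built once using the 3σ rule, then applied along rows and then along columns. The truncation radius is the assumed ceil(3σ) choice:

```python
import numpy as np

def gaussian_window(sigma):
    """1D Gaussian weights truncated by the 3-sigma rule, normalized to sum 1."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    return w / w.sum()

def gaussian_blur_separable(img, sigma):
    """Separable 2D Gaussian: filter every row as a 1D signal,
    then every column of the result."""
    w = gaussian_window(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, w, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, w, mode="same"), 0, tmp)
```

Because the weights are normalized, a constant image is left unchanged wherever the window fits entirely inside the image.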

Difference of Gaussian (DoG)
In imaging science, difference of Gaussians (DoG) is a feature-enhancement algorithm that subtracts one blurred version of an original image from another, less blurred version of the original [30]. In the simple case of grayscale images, the blurred images are obtained by convolving the original image with Gaussian kernels having different standard deviations. Blurring an image with a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one blurred image from the other preserves the spatial information that lies between the frequency ranges retained in the two images. Thus the difference of Gaussians is a band-pass filter: some of the spatial frequencies present in the original grayscale image are kept, while the rest are discarded.
Mathematically, the difference of Gaussians is expressed as follows. Given an image I, the difference of Gaussians (DoG) response is the image convolved with the difference of two Gaussians:

Γ_{σ1,σ2}(x, y) = I * G_{σ1} − I * G_{σ2} = I * (G_{σ1} − G_{σ2}), with σ1 < σ2,

where G_σ(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)).
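A DoG sketch in NumPy, using a separable Gaussian blur; the default sigmas (1.0 and 1.6) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur truncated at 3 sigma."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    w /= w.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, w, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, w, mode="same"), 0, tmp)

def dog(img, sigma1=1.0, sigma2=1.6):
    """Difference of Gaussians: subtract the more-blurred version
    (sigma2 > sigma1) from the less-blurred one, giving a band-pass
    response."""
    return gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)
```

Consistent with the band-pass interpretation, the response is near zero in flat regions and non-zero around an intensity step.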

Extended Difference of Gaussian (XDOG)
The extended difference of Gaussians filter is defined as follows [31]. To generate an edge image, the DoG response can be thresholded:

T_ε(u) = 1 if u ≥ ε, 0 otherwise,

applied to the DoG response of the image I, where ε controls the noise sensitivity. The biological models introduced by Winnemöller et al. and the models presented by Young and others create edge images using a DoG variant in which the strength of the inhibitory effect of the larger Gaussian is allowed to vary, which leads to the following equation:

D_{σ,k,τ}(x) = G_σ(x) − τ · G_{kσ}(x)

After exchanging the binary thresholding function T_ε with a continuous ramp,

T_{ε,φ}(u) = 1 if u ≥ ε, 1 + tanh(φ · (u − ε)) otherwise,

the expression T_{ε,φ}(D_{σ,k,τ} * I) refers to the XDOG filter for the image I. The XDOG is hard to control: increasing the sensitivity of the filter to edges requires adjusting τ, φ and ε carefully. To simplify the difficulties in controlling the XDOG, the response can be reparameterized as

S_{σ,k,p}(x) = (1 + p) · G_σ(x) − p · G_{kσ}(x)

where p controls the strength of the edge emphasis. The parameter φ controls the sharpness of the black-and-white transitions in the image.
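The XDOG pipeline can be sketched as the variable-inhibition DoG followed by the soft tanh threshold. The parameter values (σ, k, τ, ε, φ) below are illustrative assumptions, not the tuned values used in the paper's experiments:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur truncated at 3 sigma."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    w /= w.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, w, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, w, mode="same"), 0, tmp)

def xdog(img, sigma=1.0, k=1.6, tau=0.98, eps=0.1, phi=10.0):
    """XDOG sketch: D = G_sigma - tau * G_{k*sigma}, then the soft
    threshold T(u) = 1 if u >= eps, else 1 + tanh(phi * (u - eps))."""
    d = gaussian_blur(img, sigma) - tau * gaussian_blur(img, k * sigma)
    return np.where(d >= eps, 1.0, 1.0 + np.tanh(phi * (d - eps)))
```

The output is bounded above by 1; flat regions land on the tanh ramp, while strong positive responses saturate to white.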

Results and Discussion
In order to test our method and compare it with other classical methods, many different test images with different sizes and resolutions were processed with the canny, log, zerocross, sobel, prewitt and Roberts detectors and with the XDOG method.
All experiments were executed on a Windows machine with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz, 16 GB RAM, and MATLAB R2017a installed. The method was tested on 160 images of different sizes, compared with most of the known edge detectors, and run 10 times for each image. As shown in figures 8-18 and tables 1-4, the XDOG method works effectively on different images, as measured by Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Normalized Absolute Error (NAE) and, finally, time in seconds. From tables 1, 2 and 4 we can observe that the PSNR, MSE and NAE obtained with the proposed method are better than those of the classical methods, especially canny, log and zero-cross. From table 3, however, we observe that the time for the XDOG method is relatively large. Overall, our method outperforms all tested methods. As seen in table 1 and figure 8, the XDOG method produces robust PSNR results compared with the other classical methods, as it attains the largest values; for greater confidence we applied the method to different images with different sizes and obtained the best quality. From table 2 and figure 9 we observe that the XDOG approach is the best compared with the other classical methods, as it has the lowest values. Figure 10 shows the execution times: the time for our method is relatively large, but not for all images. Figure 11 shows the NAE for the XDOG method and the tested methods.
As shown in table 4 and figure 11, the best results were obtained with the XDOG method when compared with canny, log, zero-cross, sobel, prewitt and roberts, as the lowest NAE values were produced by the proposed method.
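The three quality measures used in this comparison have standard definitions and can be sketched as follows. The paper's experiments were run in MATLAB, so this NumPy version is an illustrative reimplementation; the 8-bit peak value of 255 is an assumption:

```python
import numpy as np

def mse(a, b):
    """Mean square error between reference image a and test image b."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal to noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak**2 / m)

def nae(a, b):
    """Normalized absolute error: sum of absolute differences
    divided by the total absolute intensity of the reference."""
    a = a.astype(float)
    b = b.astype(float)
    return np.sum(np.abs(a - b)) / np.sum(np.abs(a))
```

As the conclusion notes, a higher PSNR and lower MSE and NAE indicate better agreement between the reference and test images.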

Conclusion
This paper proposed an extended difference of Gaussians technique for edge detection, which produced satisfactory results when compared with the classical methods canny, log, sobel, prewitt, Roberts and zerocross. The XDOG method works well on different images of different sizes. Assessing image quality using quality measurements (PSNR, MSE, NAE and time in seconds) is an essential task in digital image processing, as it provides a sound basis for image quality assessment and improvement. Higher values of PSNR and lower values of MSE are the desired results, and the XDOG method obtained these, together with lower NAE values, at the cost of relatively large execution times. From the above we conclude that the XDOG method is the best of all the methods tested.