Edge detection
Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges.
In this lecture we first look at discontinuities in the image. To find an edge, we compare each pixel with the next pixel along the scan line and decide whether the next pixel's value differs from the current one. If the difference is above a set threshold, we label the pixel as an edge; if it is not, we leave it as background.
In this practice you should try to implement the search in all four principal directions (top to bottom, left to right, and vice versa), as sketched below.
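A minimal sketch of the neighbour-difference scan, assuming the image is already a 2-D NumPy array of grayscale values (the function name and array layout are illustrative, not part of the original exercise). Only the left-to-right direction is shown; the remaining three directions work the same way with the axes and scan order swapped.

```python
import numpy as np

def edges_left_to_right(image: np.ndarray, threshold: float) -> np.ndarray:
    """Label a pixel as edge (1) when it differs from its right-hand
    neighbour by more than `threshold`; everything else stays background (0)."""
    edges = np.zeros(image.shape, dtype=np.uint8)
    # Absolute difference between each pixel and the next pixel in the row.
    diff = np.abs(image[:, 1:].astype(float) - image[:, :-1].astype(float))
    edges[:, :-1][diff > threshold] = 1  # last column has no right neighbour
    return edges
```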
A different and more modern approach to labelling edge pixels is to calculate the gradient over the image: the larger the gradient, the stronger the edge. To calculate the gradient we use the Sobel filter, which gives us the differences in the x and y directions. The final gradient value is then the geometric (Euclidean) sum of the x and y differences, G = √(Gx² + Gy²).
Convolution matrices used to find the differences in x and y
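A minimal sketch of the Sobel gradient under the same assumptions as above (grayscale 2-D NumPy array; names are illustrative). The two 3×3 convolution matrices for the x and y differences are written out explicitly; because they are antisymmetric, applying them as a plain correlation only flips the sign of Gx and Gy, which cancels in the magnitude.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # differences in x
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)  # differences in y

def sobel_gradient(image: np.ndarray) -> np.ndarray:
    """Return the gradient magnitude G = sqrt(Gx^2 + Gy^2)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Slide the 3x3 kernels over the padded image.
    for ky in range(3):
        for kx in range(3):
            window = padded[ky:ky + h, kx:kx + w]
            gx += SOBEL_X[ky, kx] * window
            gy += SOBEL_Y[ky, kx] * window
    return np.sqrt(gx ** 2 + gy ** 2)
```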
As the final part of edge detection, try to implement double thresholding, as sketched below. For this you will need two sliders. The higher value is the certain-edge threshold: every pixel in the gradient image whose value is above it is marked as an edge (1). If a pixel is not above the first threshold, we look at the second, lower threshold. Every pixel below the second threshold is immediately marked as background (0). If a pixel lies above the second threshold but below the first, we check whether any of the surrounding pixels (all 8 neighbours are considered) is already marked as an edge; if so, we mark the pixel as an edge too.
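A minimal sketch of the double-thresholding step (the names `low` and `high` stand for the two slider values described above and are not taken from the NI-Vision solution). Certain edges are marked first, then the in-between pixels are promoted whenever one of their 8 neighbours is already an edge, repeating until nothing changes; these passes correspond to the iterations mentioned in the text.

```python
import numpy as np

def double_threshold(gradient: np.ndarray, low: float, high: float) -> np.ndarray:
    """Hysteresis labelling of the gradient image: 1 = edge, 0 = background."""
    edges = (gradient > high).astype(np.uint8)        # certain edges
    weak = (gradient > low) & (gradient <= high)      # undecided pixels
    changed = True
    while changed:                                    # iterate until stable
        changed = False
        for y, x in zip(*np.nonzero(weak)):
            y0, y1 = max(y - 1, 0), min(y + 2, edges.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, edges.shape[1])
            if edges[y0:y1, x0:x1].any():             # any of the 8 neighbours an edge?
                edges[y, x] = 1
                weak[y, x] = False
                changed = True
    return edges                                      # remaining weak pixels stay background
```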
Example of iterations in double thresholding
Example of edges
Solution in LabVIEW using NI-Vision
| Data | Value |
| --- | --- |
| Source | https://en.wikipedia.org/wiki/Edge_detection |
| Code | |