How to get contour of metallic, shiny objects using OpenCV

I'm trying to find the contour of metallic, shiny objects such as the image below:

[image: the shiny metallic object]

I have used Canny from OpenCV to get the contour of the image; however, the result (below) does not draw a full contour of the original image. It has a big break at the bottom right.

[image: Canny output with a large gap at the bottom right]

I would appreciate any resources that could help me refine the contour so that it is continuous and closely matches the shape of the original object.

3 answers

  • answered 2020-02-12 23:56 nathancy

    A simple approach is to apply a large Gaussian blur to smooth out the image, then adaptive threshold. With the assumption that the object is the largest thing in the image, we can find contours, sort them by contour area, and keep the largest one.

    Binary image

    [image]

    Result

    [image]

    Code

    import cv2
    import numpy as np
    
    # Load image, convert to grayscale, Gaussian Blur, adaptive threshold
    image = cv2.imread('1.jpg')
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (13,13), 0)
    thresh = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,51,7)
    
    # Morph close
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
    close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
    
    # Find contours, sort for largest contour, draw contour
    cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
    for c in cnts:
        cv2.drawContours(image, [c], -1, (36,255,12), 2)
        break
    
    cv2.imshow('thresh', thresh)
    cv2.imshow('image', image)
    cv2.waitKey()
    

  • answered 2020-02-13 00:27 fmw42

    In Python/OpenCV, you can achieve that by:

    • Read the input
    • Convert to HSV colorspace and extract the saturation channel (since gray has no saturation and green does)
    • Blur the image to mitigate noise
    • Threshold
    • Apply morphology close to fill interior holes in the shiny object
    • Find contours and filter on the largest (though there should be only one)
    • Draw the contour on the input image
    • Save the results

    Input:

    [image]

    import cv2
    import numpy as np
    
    # read input
    img = cv2.imread('shiny.jpg')
    
    # convert to HSV (note: cv2.imread returns BGR) and get the saturation channel
    sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
    
    # do a little Gaussian filtering
    blur = cv2.GaussianBlur(sat, (3,3), 0)
    
    
    # threshold and invert to create initial mask
    mask = 255 - cv2.threshold(blur, 100, 255, cv2.THRESH_BINARY)[1]
    
    # apply morphology close to fill interior regions in mask
    kernel = np.ones((15,15), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    
    
    # get outer contours from inverted mask and get the largest (presumably only one due to morphology filtering)
    cntrs = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cntrs = cntrs[0] if len(cntrs) == 2 else cntrs[1]
    result = img.copy()
    area_thresh = 0
    for c in cntrs:
        area = cv2.contourArea(c)
        if area > area_thresh:
            area_thresh = area
            big_contour = c
    
    # draw largest contour
    cv2.drawContours(result, [big_contour], -1, (0,0,255), 2)
    
    
    # write result to disk
    cv2.imwrite("shiny_mask.png", mask)
    cv2.imwrite("shiny_outline.png", result)
    
    # display it
    cv2.imshow("IMAGE", img)
    cv2.imshow("MASK", mask)
    cv2.imshow("RESULT", result)
    cv2.waitKey(0)
    


    Threshold and Filtered Mask:

    [image]

    Result:

    [image]

    An alternate approach would be to threshold using cv2.inRange() on the green color.
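
    For example, a minimal sketch of that idea (the HSV green range below is a guess and would need tuning for the actual background):

    import cv2
    import numpy as np
    
    # read the image and convert to HSV (imread returns BGR)
    img = cv2.imread('shiny.jpg')
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    
    # rough green range on OpenCV's 0-179 hue scale -- tune for your background
    lower_green = np.array([35, 40, 40])
    upper_green = np.array([85, 255, 255])
    
    # pixels in range are background, so invert to get the object mask
    green_mask = cv2.inRange(hsv, lower_green, upper_green)
    object_mask = cv2.bitwise_not(green_mask)
    
    # clean up interior holes the same way as above
    kernel = np.ones((15,15), np.uint8)
    object_mask = cv2.morphologyEx(object_mask, cv2.MORPH_CLOSE, kernel)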

  • answered 2020-02-13 00:46 eldesgraciado

    Here's another possible solution, implemented in C++ and using k-means as the main segmentation method. The idea behind this segmentation is that k-means (a clustering method) will group colors of similar value. Here, I'm setting k-means to find clusters of 2 colors: the background color and the foreground color.

    Let's take a look at the code:

    std::string imageName = "C://opencvImages/LSl42.jpg";
    cv::Mat testImage =  cv::imread( imageName );
    //apply Gaussian Blur to smooth out the input:
    cv::GaussianBlur( testImage, testImage, cv::Size(3,3), 0, 0 );
    

    Your image has a noisy (high-frequency) background. You can blur it a bit to get a smoother gradient and improve segmentation. I applied Gaussian Blur with a standard kernel size of 3 x 3. Check out the difference between the input and the smoothed image:

    [image: input vs. blurred input]

    Very cool. Now, I can pass this image to K-means. imageQuantization is a function taken from here that implements segmentation based on K-means. As I mentioned, it can group colors of similar value in clusters. That’s very handy! Let's cluster the colors in 2 groups: foreground object and background.

    int segmentationClusters = 2; //total number of clusters in which the input will be segmented...
    int iterations = 5; // k-means iterations
    cv::Mat segmentedImage = imageQuantization( testImage, segmentationClusters, iterations );
    

    The Result:

    [image]

    Quite nice, eh?
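
    The imageQuantization helper itself isn't listed here. For anyone following along in Python, a rough sketch of the same color-quantization idea using cv2.kmeans might look like the snippet below; the cluster count and iteration values mirror the ones above, and everything else (names, file path) is assumed.

    import cv2
    import numpy as np
    
    def quantize_colors(image, k=2, iterations=5):
        # flatten to a list of BGR pixels; cv2.kmeans needs float32 input
        data = image.reshape((-1, 3)).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, iterations, 1.0)
        _, labels, centers = cv2.kmeans(data, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
        # replace every pixel with the color of its cluster center
        centers = centers.astype(np.uint8)
        return centers[labels.flatten()].reshape(image.shape)
    
    # hypothetical usage: blur, then quantize into 2 color clusters
    blurred = cv2.GaussianBlur(cv2.imread('input.jpg'), (3,3), 0)
    segmented = quantize_colors(blurred, k=2, iterations=5)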

    You can apply edge detection directly on this image, but I want to improve it with a little bit of morphology. I first convert the image to grayscale, apply Otsu's thresholding and then perform a morphological closing:

    //compute grayscale image of the segmented output:
    cv::Mat grayImage;
    cv::cvtColor( segmentedImage, grayImage, cv::COLOR_BGR2GRAY );
    
    //get binary image via Otsu:
    cv::Mat binImage;
    cv::threshold( grayImage, binImage, 0, 255, cv::THRESH_OTSU );
    
    //Perform a morphological closing to close up holes in the target blob:
    cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
    cv::morphologyEx( binImage, binImage, cv::MORPH_CLOSE, SE, cv::Point(-1,-1), 10 );
    

    I use a rectangular structuring element of size 3 x 3 and 10 iterations of the closing operation. This is the result:

    [image]

    Next, detect edges using Canny's edge detector:

    cv::Mat testEdges;
    //setup lower and upper thresholds for Canny’s edge detection:
    float lowerThreshold = 30;
    float upperThreshold = 3 * lowerThreshold;
    cv::Canny( binImage, testEdges, lowerThreshold, upperThreshold );
    

    Lastly, get the blob’s contour:

    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    
    cv::findContours( testEdges, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
    
    //draw every detected contour on the (blurred) input image:
    for( size_t i = 0; i < contours.size(); i++ )
    {
     cv::Scalar color = cv::Scalar( 0,255,0 );
     cv::drawContours( testImage, contours, (int)i, color, 2, 8, hierarchy, 0, cv::Point() );
    }
    

    This is the final result I get:

    [image]

    Want to improve the result by expanding the contour? Try dilating the binary image with a few iterations before passing it to Canny’s edge detection. This is a test, dilating the image 5 times:

    [image: result after 5 dilation iterations]
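
    For reference, that dilation test might look something like this in Python (a sketch only; the 3 x 3 kernel and 5 iterations follow the text above, everything else, including the file path, is assumed):

    import cv2
    
    # hypothetical path to the closed binary mask from the earlier step
    bin_image = cv2.imread('binary_mask.png', cv2.IMREAD_GRAYSCALE)
    
    # dilate a few times to push the detected boundary outward
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    dilated = cv2.dilate(bin_image, kernel, iterations=5)
    
    # then run Canny on the dilated mask, same thresholds as before
    edges = cv2.Canny(dilated, 30, 90)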