Image segmentation for PnP optical placement

Quick ‘n dirty (but working!) image segmenter for randomly-strewn part identification. About a page worth of scripting takes an image of objects on a background, determines which part is the background, finds the outside contour of each object, and numbers each as a separate object. Now that it’s known where to look for one specific object, the task of identifying that object (or just matching it to another just like it) becomes a whole lot simpler. Combined with the auto-aligner, this reduces a “naive” (brute-force cross-correlation between needle and haystack images) image matcher to only having to scan against 4 orientations (90-degree rotations) to find which has Pin 1 in the right place (and whether it’s the same part, etc.). Hopefully, as I dig deeper into OpenCV, there is a less-naive builtin algorithm for this that does not rely on contrast/color histograms: most electronic parts basically consist of a flat black body and shiny reflective metal leads (i.e. appearing the same color as your light source and/or the background, and/or whatever happens to be nearby at the moment). Edge-based stuff still seems like a better approach, though I would welcome being proven wrong if it means not having to write the identifier from scratch myself :-)

Steps in brief:
1. The first image was taken using the actual webcam that will be attached to the pick-n-place head, looking at a handful of representative parts on a piece of white paper.
2. This image was dumbly processed with a Sobel edge detector (it’s built into Gimp and I was feeling lazy), then a Gaussian blur to expand the soon-to-be-resulting mask around the part a little and close any gaps in the edge-detection result, and finally a threshold to produce the second image. The goal in these steps is to produce a closed-form contour blob for each part that’s at least as wide as the part, while minimizing stray blobs from random noise / dirt specks / etc. (internal, fully-enclosed blobs/noise due to part features/markings are OK).
3. Finally, OpenCV’s FindContours function is run (mode=CV_RETR_EXTERNAL) on the resulting image, returning a vector that contains a polygonal approximation of each external contour found. Each discrete (non-touching) contour blob is returned separately; that is, every part in the frame is now effectively tagged and numbered!

There are a couple of noise points identified in the image above. Better-chosen constants for the initial image operations (threshold, blur radius, …) may help, but I’ll probably end up having it measure the area of each blob and throw away any that are too small to possibly contain a valid part. Switching to a more advanced edge detector, e.g. Canny, may help too. In any case, the full image matcher should figure it out eventually :-)

Code Demo – basically ripped straight from the pyopencv examples
Segmentation example – requires Python (2.6) and opencv 2.1.0 / pyopencv.

6 responses to “Image segmentation for PnP optical placement”

  1. Utkarsh

    Hi! Instead of the gaussian blur you could use some morphological operations like closing to fill up the gaps!
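    [Ed.: a minimal sketch of the suggested closing operation with the cv2 bindings — the rectangular 7×7 kernel is an illustrative choice, not a tested value:]

    ```python
    import cv2
    import numpy as np

    def close_gaps(edge_mask, ksize=7):
        # Closing = dilation followed by erosion: bridges small breaks in
        # the edge map without permanently growing the blobs the way a
        # blur + threshold would
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
        return cv2.morphologyEx(edge_mask, cv2.MORPH_CLOSE, kernel)
    ```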

  2. Yann

    Looks great so far! Have you set up any code repositories or other stuff yet?

    Also, what are you using EMC2 with in order to control your table? Do you have any details of your table?

  3. PC.Tech

    Pls chk PM’s @ … soon.

  4. Tom Smith

    I’ve repeated your experiment with the above photo in RoboRealm (Windows software for CV; it’s cheap and fast as it has a GUI). I can detect the objects without doing edge detection, just using a filter that merges similar colors (to deal with the lighting artifacts on the background, no threshold), then a blob detector.

    I can also make out the IC pin locations this way.

    Might be worth trying the software out, even if you just use it to do the prototyping; it’s a lot faster than coding.

    It also includes matching functionality, which would find component location + orientation, if you have a sufficient training set of images.

    You would be better off with a distinct background color, and multiple diffuse light sources (like a led ring light). This will remove the shadowing.

    I’m building my own PnP next week, so am starting to get some of the software issues done.

  5. David

    Wow, open-source pick-and-place could be a pretty cool project.


    Have you considered dropping the parts on a frosted glass surface and backlighting them?
    That gives black=object, white=no object, no matter if it is black or white or metal.
    I hear rumors that the plastic tape holding the parts is transparent or at least translucent to near-IR light.
    So you could put a light under it, and remove the IR filter from your camera …

    On the other hand, if you could somehow reliably distinguish “metal” as a distinct “color” (perhaps by somehow moving a light around and watching the shiny highlights move?),
    perhaps that would make it easier to distinguish one part from another — in particular, easier to distinguish an upside-down SMT inductor or SMT cap or D2PAK transistor from the right-side-up version:
    big, shiny metal pads visible on the bottom;
    homogeneous matte plastic on top.
