{"id":745,"date":"2010-07-27T02:39:03","date_gmt":"2010-07-27T06:39:03","guid":{"rendered":"http:\/\/tim.cexx.org\/?p=745"},"modified":"2010-07-27T03:04:39","modified_gmt":"2010-07-27T07:04:39","slug":"image-segmentation-for-pnp-optical-placement","status":"publish","type":"post","link":"https:\/\/tim.cexx.org\/?p=745","title":{"rendered":"Image segmentation for PnP optical placement"},"content":{"rendered":"<p><center><br \/>\n<img decoding=\"async\" src=\"https:\/\/tim.cexx.org\/projects\/pickplace\/segmentation\/seg-orig.png\"\/><br \/>\n<img decoding=\"async\" src=\"https:\/\/tim.cexx.org\/projects\/pickplace\/segmentation\/seg-edgified.png\"\/><br \/>\n<img decoding=\"async\" src=\"https:\/\/tim.cexx.org\/projects\/pickplace\/segmentation\/seg-segmented.png\"\/><br \/>\n<\/center><\/p>\n<p>Quick &#8216;n dirty (but working!) image segmenter for randomly-strewn part identification. About 1 page worth of scripting takes an image of objects on a background, determines which part is the background, finds the outside contour of each object and numbers each as a separate object. Now that it&#8217;s known where to look for one specific object, the task of identifying that object (or just matching it to another just like it) becomes a whole lot simpler. Combined with the auto-aligner, this reduces a &#8220;naive&#8221; (bruteforce cross-correlation between needle and haystack images) image matcher to scanning against only 4 orientations (90-degree rotations) to find which one has Pin 1 in the right place (and whether it&#8217;s the same part, etc.). Hopefully, as I dig deeper into opencv, there is a less-naive algorithm built in for this that does not rely on contrast\/color histograms: most electronic parts basically consist of a flat black body and shiny reflective metal leads (i.e. appearing the same color as your light source and\/or the background, and\/or whatever happens to be nearby at the moment). 
Edge-based stuff still seems like a better approach, though I would welcome being proven wrong if it means not having to write the identifier from scratch myself :-)<\/p>\n<p>Steps in brief:<br \/>\nThe first image was taken using the actual webcam that will be attached to the pick-n-place head, looking at a handful of representative parts on a piece of white paper. This image was dumbly processed using a Sobel edge-detector (it&#8217;s built into Gimp and I was feeling lazy), then a Gaussian blur to expand the soon-to-be-resulting mask around the part a little and close any gaps in the edge-detection result, and finally a threshold to produce the second image. The goal in these steps is to produce a closed-form contour blob for each part that&#8217;s at least as wide as the part, while minimizing stray blobs from random noise \/ dirt specks \/ etc. (internal, fully-enclosed blobs\/noise due to part features\/markings are OK). Finally, opencv&#8217;s FindContours function is run (mode=CV_RETR_EXTERNAL) on the resulting image, returning a vector that contains a polygonal approximation of each external contour found. Each discrete (non-touching) contour blob is returned separately; that is, every part in the frame is now effectively tagged and numbered!<\/p>\n<p>There are a couple of noise points identified in the image above. Better-chosen constants for the initial image operations (threshold, blur radius, &#8230;) may help, but I&#8217;ll probably end up having it measure the area of the blobs and throw away any that are too small to possibly contain a valid part. Switching to a more advanced edge-detector, e.g. Canny, may help too. 
In any case, the full image matcher should figure it out eventually :-)<\/p>\n<p><b>Code Demo &#8211; basically ripped straight from the pyopencv examples<\/b><br \/>\n<a href=\"https:\/\/tim.cexx.org\/projects\/pickplace\/segmentation\/segmentation.zip\">Segmentation example<\/a> &#8211; requires Python (2.6) and opencv 2.1.0 \/ pyopencv.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Quick &#8216;n dirty (but working!) image segmenter for randomly-strewn part identification. About 1 page worth of scripting takes an image of objects on background, determines which part is the background, determines the outside contour of each object and numbers each as a separate object. Now that it&#8217;s known where to look for one specific object, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_FSMCFIC_featured_image_caption":"","_FSMCFIC_featured_image_nocaption":"","_FSMCFIC_featured_image_hide":"","iawp_total_views":5,"footnotes":""},"categories":[4,1],"tags":[134],"class_list":["post-745","post","type-post","status-publish","format-standard","hentry","category-geek","category-general","tag-pickplace"],"_links":{"self":[{"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/posts\/745","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=745"}],"version-history":[{"count":4,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/posts\/745\/revisions"}],"predecessor-version":[{"id":749,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=\/wp\/v2\/posts\/745\/revisions
\/749"}],"wp:attachment":[{"href":"https:\/\/tim.cexx.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tim.cexx.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}