I have a number of black and white images and would like to convert them to a set of lines, such that I can fully, or at least close to fully, reconstruct the original image from the lines. In other words, I'm trying to vectorize the image into a set of lines.

I have already looked at the Hough line transform; however, it does not cover every part of the image and is more about finding lines in the image than fully converting the image to a line representation. In addition, the Hough transform does not encode the actual width of the lines, which leaves me guessing at how to reconstruct the images (which I need to do, as this is a preprocessing step towards training a machine learning algorithm).

So far I have tried the following code using the Hough line transform:

```python
import numpy as np
import cv2

img = cv2.imread('image.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# note: the original call passed minLineLength/maxLineGap to cv2.HoughLines,
# which does not accept them; cv2.HoughLinesP does, and it returns segment
# endpoints, which is what the drawing loop below expects
minLineLength = 100  # assumed values; the originals were not shown
maxLineGap = 10
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines:
    x1, y1, x2, y2 = line[0]
    img = cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
```

As you can see, it misses lines that are not axis-aligned, and if you look closely, even the detected lines have been split into two lines with some space between them. I also had to draw these lines with a preset width, while the real width isn't known.

Edit: on a suggestion, I tried pypotrace using the following code. Currently it largely ignores Bezier curves and just treats them as if they were straight lines; I will focus on that problem later, but right now the results aren't optimal either:

```python
def TraceLines(img):
    ...
```

This results in this image, which is an improvement; however, while the problem with the circle can be addressed at a later point, the missing parts of the square and the weird artefacts on the other straight lines are more problematic. Anyone know how to fix them? Any tips on how to get the line widths?

Edit edit: here is another test image; it includes multiple line widths I would like to capture.

Anybody got any suggestions on how to better approach this problem?

Using OpenCV's findContours and drawContours it is possible to first vectorise the lines and then exactly recreate the original image:

```python
import numpy as np
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

result_fill = np.ones(img.shape, np.uint8) * 255
result_borders = np.zeros(img.shape, np.uint8)

# drop the first contour to skip the one at the outer border of the image
contours = cv2.findContours(img, cv2.RETR_LIST,
                            cv2.CHAIN_APPROX_SIMPLE)[0][1:]

# fill spaces between contours by setting thickness to -1
cv2.drawContours(result_fill, contours, -1, 0, -1)
cv2.drawContours(result_borders, contours, -1, 255, 1)

# xor the filled result and the borders to recreate the original image
result = result_fill ^ result_borders

# prints True: the result is now exactly the same as the original
print(np.array_equal(result, img))
```

Using scikit-image's find_contours and approximate_polygon allows you to reduce the number of lines by approximating polygons (based on this example):

```python
import numpy as np
from skimage.measure import approximate_polygon, find_contours

result_contour = np.zeros(img.shape + (3, ), np.uint8)
result_polygon1 = np.zeros(img.shape + (3, ), np.uint8)
result_polygon2 = np.zeros(img.shape + (3, ), np.uint8)
```
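To make the probabilistic Hough variant concrete, here is a minimal self-contained sketch on a synthetic image. The test image, the threshold of 50, and the `minLineLength`/`maxLineGap` values are my own illustrative assumptions, not values from the question:

```python
import numpy as np
import cv2

# synthetic test image: one white diagonal line on a black background
img = np.zeros((200, 200), np.uint8)
cv2.line(img, (10, 10), (190, 150), 255, 3)

edges = cv2.Canny(img, 50, 150, apertureSize=3)

# the probabilistic Hough transform returns segment endpoints directly,
# so the detected segments can be drawn without converting (rho, theta)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=30, maxLineGap=10)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(vis, (x1, y1), (x2, y2), (0, 255, 0), 1)
```

Because Canny produces two parallel edge chains for a thick stroke, expect two detected segments per drawn line; this is the same "split into 2 lines" effect described above.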
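On recovering line widths: one common technique (my suggestion, not something proposed in the original discussion) is a distance transform. Inside a stroke, the maximum distance to the background is roughly half the stroke width. A sketch, assuming white strokes on a black background:

```python
import numpy as np
import cv2

# synthetic image: one white stroke of known thickness 7 on black
img = np.zeros((200, 200), np.uint8)
cv2.line(img, (20, 100), (180, 100), 255, 7)

# distance from every foreground pixel to the nearest background pixel
dist = cv2.distanceTransform(img, cv2.DIST_L2, 5)

# the ridge of the stroke sits at roughly half the width from either edge
estimated_width = 2.0 * dist.max()
```

For images containing multiple widths, compute the maximum per connected component (or sample the distance along each traced line) instead of taking the global maximum.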
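scikit-image's approximate_polygon is based on the Ramer-Douglas-Peucker algorithm. To make the mechanism behind the polygon simplification concrete, here is a dependency-free sketch of that algorithm (my own simplified version, not scikit-image's implementation):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # |cross product| / segment length
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def rdp(points, tolerance):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord between the two endpoints
    dists = [point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > tolerance:
        # split at the farthest point and simplify both halves recursively
        left = rdp(points[:idx + 1], tolerance)
        right = rdp(points[idx:], tolerance)
        return left[:-1] + right
    # every intermediate point is within tolerance: keep only the endpoints
    return [points[0], points[-1]]
```

For example, an L-shaped polyline of eleven points collapses to its three corner points: `rdp([(0, 0), (1, 0), ..., (5, 0), (5, 1), ..., (5, 5)], 0.1)` returns `[(0, 0), (5, 0), (5, 5)]`. The tolerance plays the same role as the tolerance argument of approximate_polygon: larger values yield fewer, coarser segments.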