Image Analysis: Extend Hough Transform to Circles



I am working on a project in image analysis that is focused (slight pun intended) on textures as opposed to just contrast/grayscale work.  I understand that the Hough transform is useful for detecting lines in images, but I have circles I want to find.  From some research, it appears that the Hough transform concept can be extended to circles and/or ellipses.

Has anyone done this and have starting code to share?


I can't help with the details, but here is an idea that may get you started. With conformal mapping, lines in the complex plane can be mapped to circles by a simple functional transformation (and vice versa). This forms the basis of Smith charts for complex reflection coefficients. Perhaps there is a nugget there that suggests a way to convert the circles to lines before doing a Hough transform.
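To make the conformal-mapping idea concrete, here is a small numerical check (in Python, purely as a language-neutral illustration): under the inversion w = 1/z, a circle in the complex plane that passes through the origin maps onto a straight line. The specific circle chosen below is my own example, not anything from the thread.

```python
import numpy as np

# A circle of radius 1 centered at z = 1; it passes through the origin (z = 0 at theta = pi).
theta = np.linspace(0.1, 2 * np.pi - 0.1, 50)  # sample angles, avoiding theta = pi exactly
z = 1 + np.exp(1j * theta)

# Invert: the images of all these circle points lie on the vertical line Re(w) = 1/2.
w = 1 / z
```

The algebra behind it: for z = 1 + e^(i*theta), Re(1/z) = (1 + cos theta) / (2 + 2 cos theta) = 1/2 for every theta, so the imaginary part varies while the real part is pinned.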

Supposedly IP 9 has a Smith Chart demo,

DisplayHelpTopic "Smith Chart Procedure"

Unfortunately my (early) version of IP 9 shows the Help File, but not the package procedure to be loaded.

Thanks for the suggestion.

What I have been playing with is a brute-force approach to the Hough transform.

Basically I have a simple function that loops over the possible circles and has the image "vote" on each one.

I take the base image, do an edge detection, and work on that.  I tweak the edge-detection matrix to set edges to 1 and everything else to zero simply by adding 1 to M_ImageEdges, taking advantage of the fact that it is an 8-bit integer and wraps around.
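The 8-bit wraparound trick can be illustrated outside Igor as well. Here is a NumPy version (the tiny array is a toy stand-in for M_ImageEdges, not real data): if the edge map stores one class as 255 and the other as 0, adding 1 wraps 255 to 0 and turns 0 into 1, collapsing the image to 0/1 values in one operation.

```python
import numpy as np

# Toy stand-in for an 8-bit edge map with values 0 and 255 only.
edge_map = np.array([[0, 255], [255, 0]], dtype=np.uint8)

# uint8 arithmetic wraps: 255 + 1 -> 0, 0 + 1 -> 1.
binarized = edge_map + np.uint8(1)
# binarized is now [[1, 0], [0, 1]], still dtype uint8
```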

I loop over the possible X,Y centers and then use

                ImageLineProfile/RAD={X,Y,radius,width}/IRAD=100 srcWave=modified_edge_detect 

to get the intensity along the circle at that center and radius; since I am working on the edge map, this gives me the number of detected edge pixels.  I then make a map and a sortable table to find centers and radii.  The downside at the moment is that it is not very fast.  I usually don't worry about speed at this stage of development, but here it is a bit problematic: my base image is 4096x4096 to start, and I do the edge detection on that.  I then reduce it to 256x256 via an ImageInterpolate Pixelate command.  I am testing the precision of the radius/width combination to see the minimum range I need.
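The voting loop can be sketched language-neutrally in Python (the function name, accumulator layout, and sample count below are my own illustration, not the Igor code): for each candidate center and radius, sample points around the circle's perimeter and sum the edge-map values there, which is analogous to profiling a circle on the edge image.

```python
import numpy as np

def hough_circles(edge_img, radii, n_samples=64):
    """Brute-force circle voting on a binary (0/1) edge image.

    For each candidate center (cx, cy) and radius r, sample n_samples
    points around the circle perimeter and sum the edge values found
    there. Returns an accumulator of shape (ny, nx, n_radii); the
    argmax of the accumulator is the best-supported circle.
    """
    ny, nx = edge_img.shape
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    acc = np.zeros((ny, nx, len(radii)))
    for ri, r in enumerate(radii):
        # Integer offsets of the perimeter samples for this radius.
        dx = np.round(r * np.cos(theta)).astype(int)
        dy = np.round(r * np.sin(theta)).astype(int)
        for cy in range(ny):
            for cx in range(nx):
                xs, ys = cx + dx, cy + dy
                # Keep only samples that fall inside the image.
                ok = (xs >= 0) & (xs < nx) & (ys >= 0) & (ys < ny)
                acc[cy, cx, ri] = edge_img[ys[ok], xs[ok]].sum()
    return acc
```

With a 256x256 image this is 65,536 centers per radius, which is why the brute-force version is slow; restricting the candidate radii and centers, or vectorizing the inner loops, is where most of the speedup lives.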

It seems to work on some dummy test images that I made, though not so well on my real images.  I think I need to work on the preprocessing of the image, and I have some latitude here because I am interested in large-scale structures and textures, not pixel-level analysis.