
Generalized Hough Transform (Guil) - Improving Speed

asked 2017-01-17 17:42:40 -0600 by jpistorino (updated 2017-01-17 18:10:31 -0600)

I am using OpenCV 3.1 with VS2012 C++/CLI, running on Win10 on an Intel i7 with 16 GB RAM (in short, a pretty fast rig).

I am trying to use the Generalized Hough Transform, specifically the Guil variant (to handle translation, rotation, and scale). I am following this example: https://github.com/Tetragramm/opencv/... My code uses all the default settings shown in the example.

To test things out, I used the following image (minus the text) as both the loaded image and the query image:

[image]

This is a full HD image from which the contours have been extracted. When I ran the GHT-Guil "detect" function, it took 336 seconds (i.e., more than 5 minutes) to return.

Is this expected? Is there anything I can do to speed it up?

I also have an NVIDIA GTX 760 card and can implement the GPU version of the Guil call. Is there any information on what kind of speed-up I should expect if I do so?

Thanks for any information.


1 answer


answered 2017-01-17 18:22:37 -0600 by Tetragramm

Well, the most obvious thing is rotation. The default settings look for any rotation in 1-degree increments. So if you only need ~5-degree precision, you can just make the angle step 5 and cut your time to about one fifth.

The other big one is scale. Are your objects all about the same size and the same distance from the camera? If so, set the min and max scale closer together. Right now it searches for anything from half the size to twice the size (0.5 to 2.0), in steps of 0.05. That's a lot of work.
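For concreteness, a minimal sketch of narrowing those parameters with the GeneralizedHoughGuil setters from OpenCV's imgproc module. The scale values here (0.9 to 1.1 in steps of 0.1) are the ones reported in the comments below and are shown for illustration, not as tuned recommendations:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // templ and image are single-channel 8-bit images; edges are extracted
    // internally (see setCannyLowThresh / setCannyHighThresh on the base class).
    void detectNarrowed(const cv::Mat& templ, const cv::Mat& image,
                        std::vector<cv::Vec4f>& positions)
    {
        cv::Ptr<cv::GeneralizedHoughGuil> guil = cv::createGeneralizedHoughGuil();

        // Rotation: keep the full 0-360 sweep but coarsen the step from the
        // 1-degree default to 5 degrees (~1/5 the angle hypotheses).
        guil->setMinAngle(0.0);
        guil->setMaxAngle(360.0);
        guil->setAngleStep(5.0);

        // Scale: search 0.9-1.1 in steps of 0.1 instead of the default
        // 0.5-2.0 in steps of 0.05.
        guil->setMinScale(0.9);
        guil->setMaxScale(1.1);
        guil->setScaleStep(0.1);

        guil->setTemplate(templ);
        guil->detect(image, positions);  // each entry: (x, y, scale, angle)
    }

The work grows roughly with (angle range / angle step) x (scale range / scale step), so tightening either range or coarsening either step pays off multiplicatively.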

Really, the best thing to do is read the documentation and start playing with the default parameters.

Last thing: that link is fine for examples, but it's my fork of the repository, and I don't always keep it up to date. If you're downloading code, make sure to pull it from the main OpenCV repository.


Comments

First, thanks for responding.

In case anyone else is reading this: reducing the scale range to 0.9-1.1 (i.e., 10% smaller or bigger) with a 0.1 step, and making the angle step 5 degrees, reduced the time from 336 seconds to ~70 seconds. Further, implementing the GPU version reduced the time to ~0.23 seconds, which is good enough for my application. I am happy to read any documentation but can't find much of it. Where can I find something that explains what the position vector actually is? On my simple image, I am getting a position.size() of 7. I am assuming that means there are seven points in the query image that are believed to match the template image, at the scale (3rd element of the vector) and rotation (4th element of the vector) given. Is there anything on what the votes are?

jpistorino ( 2017-01-21 00:47:07 -0600 )
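A short sketch of reading those outputs, assuming the same guil object from the sketch in the answer above. The Vec4f layout (x, y, scale, angle) matches the interpretation in this comment; the Vec3i layout of the votes output as (position votes, scale votes, angle votes) is an assumption based on the implementation, not documented behavior:

    #include <cstdio>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // guil already has its template set (see the sketch in the answer above).
    void printDetections(const cv::Ptr<cv::GeneralizedHoughGuil>& guil,
                         const cv::Mat& image)
    {
        std::vector<cv::Vec4f> positions;
        std::vector<cv::Vec3i> votes;  // assumption: (position, scale, angle) votes
        guil->detect(image, positions, votes);

        for (size_t i = 0; i < positions.size(); ++i)
        {
            float x     = positions[i][0];  // match center, x
            float y     = positions[i][1];  // match center, y
            float scale = positions[i][2];  // matched scale factor
            float angle = positions[i][3];  // matched rotation, in degrees
            std::printf("match %d: (%.1f, %.1f) scale=%.2f angle=%.1f votes=%d/%d/%d\n",
                        static_cast<int>(i), x, y, scale, angle,
                        votes[i][0], votes[i][1], votes[i][2]);
        }
    }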

I have the same problem with the frustrating documentation: this post from Tetragramm has the most detailed information I have found so far...

I would interpret the positions the way you do (3rd = scale, 4th = rotation), because I have a 0 as the 4th entry (so that one can't be scale).

Franz Kaiser ( 2017-06-20 02:52:55 -0600 )

@jpistorino did you write your own implementation for the GPU version, or simply compile OpenCV with CUDA capabilities? In any case, can you use it from Python?

Ciprian Tomoiaga ( 2019-06-04 02:10:44 -0600 )
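For reference, a sketch of the GPU path, assuming OpenCV was built with CUDA support and the cudaimgproc module. cv::cuda::createGeneralizedHoughGuil() returns the same GeneralizedHoughGuil interface but operates on cv::cuda::GpuMat, and the single-row Vec4f download pattern below follows OpenCV's GPU generalized_hough sample; whether the CUDA variant is callable from Python depends on the build and is not verified here:

    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <vector>

    void detectOnGpu(const cv::Mat& templ, const cv::Mat& image,
                     std::vector<cv::Vec4f>& positions)
    {
        cv::Ptr<cv::GeneralizedHoughGuil> guil =
            cv::cuda::createGeneralizedHoughGuil();

        // Upload the template and query image to the device.
        cv::cuda::GpuMat d_templ(templ), d_image(image), d_positions;
        guil->setTemplate(d_templ);
        guil->detect(d_image, d_positions);

        // Detections come back as a single-row GpuMat of Vec4f;
        // download them into host memory.
        if (!d_positions.empty())
        {
            positions.resize(d_positions.cols);
            cv::Mat h_positions(1, d_positions.cols, CV_32FC4, &positions[0]);
            d_positions.download(h_positions);
        }
    }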
