
Advice on OpenCV: (outdoor) stereo vision, distance estimation and object tracking

asked 2016-02-01 16:44:18 -0600

hectork

Dear friends,

We are working on the development of a rover that will perform autonomous farming tasks such as pesticide spraying and seed planting. For this, we are developing a CV perception module for two specific tasks:

  • Object detection (for example, leaf-type or pest detection).
  • Distance estimation (to estimate the distance to a previously detected object, and for collision-avoidance manoeuvring).

I would like to ask for suggestions on how to improve the work we have already done. In particular, there are two problems we are trying to address:

  1. We are trying to estimate the distance to an object using stereoscopic vision (two cameras) and a 'depth map' algorithm (StereoSGBM). The algorithm processes both images and generates a grey-scale map in which a lighter point means a closer distance. However, (1) the generated map doesn't correspond to the captured objects (we ran a basic test with several objects at different distances), and we haven't found out how to tune the algorithm's parameters to improve its precision; and (2) we haven't found a good approach for deriving an approximate distance from the grey tone at a given point of the map (the robot will require approximate measurements in centimetres). A minimal sketch of the kind of set-up we use follows this list.

  2. We are also using ORB (Oriented FAST and Rotated BRIEF) to track objects. It worked well inside our lab, but performed poorly outdoors because of the changing light conditions. We tried pre-processing the images with histogram equalisation, but it didn't help much.
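
For reference, this is roughly what our disparity set-up looks like in Python (a minimal illustrative sketch assuming the OpenCV 3.x API; the file names and parameter values are placeholders we experiment with, not a final configuration):

    import cv2

    # Load a rectified grey-scale stereo pair ("left.png"/"right.png" are
    # placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    block_size = 5
    num_disp = 64  # must be divisible by 16
    stereo = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,
        blockSize=block_size,
        P1=8 * block_size ** 2,   # smoothness penalties suggested in the
        P2=32 * block_size ** 2,  # docs for single-channel input
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )

    # compute() returns CV_16S values: real disparities multiplied by 16.
    disparity = stereo.compute(left, right).astype("float32") / 16.0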

Any advice on these problems, or suggestions for performing the same tasks with different approaches, would be greatly appreciated.

Sincerely yours,

Héctor


1 answer

answered 2016-02-02 15:10:43 -0600

Hi Héctor,

What you get from the StereoSGBM algorithm is a disparity map, where the intensity of a pixel is proportional to the inverse of the scene depth. Hence you cannot simply interpret the pixel intensities as distance values. Instead, you have to take the reciprocal of a pixel value and scale it with a constant that depends on your camera calibration data.
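
To make this concrete: for a rectified stereo pair, depth Z = f * B / d, where f is the focal length in pixels, B is the baseline, and d the disparity. A minimal sketch in Python (assuming the CV_16S output of StereoSGBM, which stores disparities multiplied by 16; the calibration numbers below are placeholders):

    import numpy as np

    # Placeholder calibration values: focal length in pixels and baseline
    # in centimetres, both taken from your stereo calibration.
    f_px = 700.0
    baseline_cm = 12.0

    # 'raw' stands in for the CV_16S output of StereoSGBM.compute().
    raw = np.random.randint(1, 64 * 16, size=(480, 640)).astype(np.int16)

    disparity = raw.astype(np.float32) / 16.0
    valid = disparity > 0                    # non-positive values = no match
    depth_cm = np.zeros_like(disparity)
    depth_cm[valid] = f_px * baseline_cm / disparity[valid]  # Z = f * B / d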

A short description of disparity maps and their projection to a 3D point cloud can be found, for example, on page 11 of this document: http://nerian.com/support/documentati...

To get meaningful measurements from a disparity map, you should look at the OpenCV function reprojectImageTo3D(). The required Q matrix is computed by stereoRectify() when you compute your rectification transformation. The 3D coordinates you receive will be in the units you used when specifying the object points during camera calibration.
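
A minimal sketch of that pipeline in Python (the calibration data below are placeholders; in practice they come from your own stereo calibration, e.g. via stereoCalibrate()):

    import cv2
    import numpy as np

    # Placeholder calibration data; use your real calibration results here.
    image_size = (640, 480)
    K_left = K_right = np.array([[700., 0., 320.],
                                 [0., 700., 240.],
                                 [0., 0., 1.]])
    dist_left = dist_right = np.zeros(5)
    R = np.eye(3)                          # rotation between the cameras
    T = np.array([[-12.0], [0.0], [0.0]])  # 12 cm baseline along x

    # stereoRectify() also returns the 4x4 disparity-to-depth matrix Q.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K_left, dist_left, K_right, dist_right, image_size, R, T)

    # 'disparity' is the float disparity map from the previous step
    # (dummy data here so the sketch runs on its own).
    disparity = np.full(image_size[::-1], 16.0, np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    # points_3d[y, x] = (X, Y, Z) in the units of the calibration object
    # points (centimetres in this example, via the 12 cm baseline).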

Regards, Steve

