
Measuring lengths of objects

asked 2013-07-20 18:31:13 -0600

noviceCVer

Hi All,

I apologize in advance for my complete inability to really use this stuff, so bear with me.

I think I have a simple task compared to all the really amazing things you are all doing in here. All I want to do is measure the length of objects using stereo video cameras. This is for use with wildlife, so the animals and cameras are often moving. I know I need to calibrate the cameras, both individually (distortion, focal length, etc.) and as a pair (i.e., the distance, rotation, and angle between them). I assume a checkerboard is OK for that? By the way, these need to be really accurate measurements, within a few mm if not one or two mm.

Then I need to figure out how clicking on the ends of the objects in the left and right camera images in 2D space is converted to 3D real-world space.

Is this easy to do? There is no automation required: no tracking, no automatic identification of the ends of the objects, etc. The user will play the footage, pause it, and click on the two ends of the object/animal in the left and right camera views, and I want its length returned to me. I guess it would also give me the distance from the cameras and position relative to the cameras, given that I think the math is based on trigonometry?

If anyone can help point me in the right direction, that would be awesome. I plan on using consumer-level HD cameras such as Sony or Canon. I've been told that GoPros wouldn't be any good because their lenses are so wide? That's a different issue, however.

Thanks all.

Your friendly novice

RD


1 answer


answered 2013-07-21 17:47:21 -0600

You're right, you have to calibrate your cameras.

  1. Just to be sure you're okay with the calibration basics, see this tutorial.
  2. After that, calibrate your camera pair with the stereoCalibrate function. See the sample in cpp/stereo_calib.cpp and possibly the associated book chapter. If you want to see the point cloud associated with the disparity maps, see the sample cpp/stereo_match.cpp (or python2/stereo_match.py).
  3. To select points on the image, see the setMouseCallback function and some samples, like cpp/ffilldemo.cpp or cpp/lkdemo.cpp.
  4. Reproject these points into 3D with the triangulatePoints function.
  5. Compute the distances (Euclidean, I assume) between the 3D points...
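Steps 4 and 5 above can be sketched with plain NumPy: this is the linear (DLT) triangulation that triangulatePoints implements, demonstrated on a toy rig whose projection matrices and 3D point are made up for the example (none of these numbers come from the thread):

```python
import numpy as np

def triangulate_point(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point pair.
    P1, P2: 3x4 projection matrices from stereo calibration.
    pt1, pt2: (x, y) pixel coordinates of the same point in each view."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Toy rig: identity intrinsics, second camera shifted 0.1 units along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.2, 0.3, 2.0])
X_rec = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
length = np.linalg.norm(X_rec - X_true)  # reconstruction error, ~0 noise-free
```

With real footage, the two clicked endpoints would each be triangulated this way, and the object length is simply `np.linalg.norm(end_a - end_b)` between the two recovered 3D points.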

For the expected precision, it's related to your calibration process (remember, you must calibrate your cameras once they are attached together!), the size of the object to measure, and the resolution available. Don't expect to obtain millimeter accuracy if your pixels are bigger than that...
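As a rough sanity check on that precision point, the depth uncertainty of a stereo rig is approximately dZ ≈ Z² · Δd / (f · B), where f is the focal length in pixels, B the baseline, and Δd the disparity (matching/clicking) error. A small sketch with purely illustrative numbers (none of them come from the thread):

```python
# Rough depth resolution of a stereo rig: dZ ~ Z**2 * dd / (f * B).
# Illustrative values: f = 1500 px, baseline B = 0.3 m, dd = 0.5 px.
f_px = 1500.0   # focal length in pixels
B = 0.3         # baseline in metres
dd = 0.5        # disparity (matching/click) error in pixels

# Depth uncertainty in mm at a few object distances Z (metres).
dZ_mm = {Z: Z ** 2 * dd / (f_px * B) * 1000 for Z in (1.0, 3.0, 10.0)}
for Z, e in dZ_mm.items():
    print(f"Z = {Z:4.1f} m  ->  depth uncertainty ~ {e:.1f} mm")
```

The quadratic growth with distance is why millimetre accuracy is plausible close up but not at 10 m with the same rig.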

If you need a real interface for your users, look at the Qt functionality in OpenCV.


Comments

Also, as a little side note, OpenCV is designed for real-time performance, which introduces rounding errors that limit the accuracy of measurements. If you really want higher accuracy, you will probably have to use another implementation. The idea behind OpenCV is, basically: if it works fast and robustly, we don't mind if it is a few mm off the real distance.

StevenPuttemans (2013-07-22 04:52:11 -0600)

Hi Mathieu and Steven. Thanks for your input. I understand that mm accuracy may not be possible; that is OK, as the objects will range from a few cm up to 10+ m. More important is consistency.

I noticed that in the code there is no place for inputting the pixel size of the sensor. Is this roughly calculated from the sensor length × width inputs and the resolution dimensions (i.e., 1920 × 1080) in the calibration code? It seems as if this is a crucial part of obtaining accurate lengths, as it is your only known dimension, aside from the checkerboard, I guess. But if the pixel size is wrong, it would also change the focal length values.

I'll see how I go this week, and hopefully my non-programmer's brain will hack its way through the OpenCV language.

Really appreciate your help.

noviceCVer (2013-07-22 10:15:14 -0600)
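On the pixel-size question in the comment above: calibrateCamera reports the focal length directly in pixels (fx, fy in the camera matrix), so the physical pixel pitch never has to be entered; it is implied by the sensor width and the image width. A sketch of the conversion with made-up spec-sheet numbers (not from the thread):

```python
# Focal length in pixels vs millimetres.  Calibration returns fx, fy in
# pixels, so pixel pitch is implicit.  Illustrative example values:
sensor_width_mm = 6.17   # e.g. a typical small consumer sensor
image_width_px = 1920
f_mm = 4.5               # lens focal length from a spec sheet

pixel_pitch_mm = sensor_width_mm / image_width_px
f_px = f_mm / pixel_pitch_mm   # roughly what calibration would report as fx
print(f"pixel pitch ~ {pixel_pitch_mm * 1000:.2f} um, fx ~ {f_px:.0f} px")
```

This is also why a wrong assumed pixel size and a wrong focal length in mm cancel out: the calibration only ever works in pixel units, plus the known checkerboard square size for absolute scale.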

Hi Mathieu,

Quick question about running a stereo calibration after obtaining the intrinsic properties from calibration: do the image pairs for stereo need to be captured at the same point in time?

noviceCVer (2013-07-24 07:52:12 -0600)

Not sure what you mean, but for stereo the basics are: two views of the same subject from slightly different positions (or two cameras, like your eyes). From these you can recover the 3D. You could also take the two pictures with the same camera if: 1) you move it between the shots, 2) your object is static, and 3) you are able to calibrate the camera after the movement (or use calibration-free methods).

Mathieu Barnachon (2013-07-24 17:08:20 -0600)

Yes, sorry for my bad wording. I mean that when using two video cameras recording simultaneously, if the calibration object moves (which it needs to do for calibration, right? different orientations), then the left and right cameras have to be synchronized to the same point in time? I guess I'll just do that, as it can't hurt anyway. Thanks for your help.

noviceCVer (2013-07-26 13:35:13 -0600)

Synchronization is the worst part; see if you can do without it. Otherwise, there are devices you can put in the scene to show the time and synchronize the video streams with it. Usually, if your object is not moving too fast and your PC is not overloaded, you won't have problems. Try to record the streams and do the stereo matching offline if possible.

Mathieu Barnachon (2013-07-26 18:50:28 -0600)
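To put numbers on how much a synchronization offset can matter for a moving subject, a back-of-envelope sketch (the frame rate and speeds are illustrative, not from the thread):

```python
# How much does a sync offset matter?  With unsynchronized 30 fps cameras
# the worst-case timing mismatch is half a frame.  The subject appears to
# have moved v * dt between the "paired" frames.
fps = 30.0
dt = 0.5 / fps   # worst-case offset in seconds

# Apparent displacement in mm at a few subject speeds (m/s).
mismatch_mm = {v: v * dt * 1000 for v in (0.1, 0.5, 2.0)}
for v, e in mismatch_mm.items():
    print(f"v = {v} m/s -> up to {e:.1f} mm apparent displacement")
```

So for slow-moving subjects a half-frame offset may already be below the measurement noise, while fast-moving animals would need genuine synchronization.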
