Computing disparity/distance of an identified object (with stereoCalibrate), not a whole depth map

I'm building a turret that needs a minimum of 20 FPS at a resolution of at least 640x480, and the vision system will use two USB cameras for stereo vision.

I plan to use OpenCV's stereo modules throughout this process, but I will be running the program on a Raspberry Pi, so I need the script and vision system to be lightweight.

From what I understand, OpenCV's stereo.compute() takes two frames and returns a disparity/depth map (with the stereo matcher object created beforehand and the cameras calibrated with stereoCalibrate, among other functions).
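Roughly, this is the full-pipeline loop I mean (a minimal sketch; the StereoBM parameters are placeholder values, not tuned for my cameras):

```python
import cv2

# Sketch of the standard full-depth-map loop (the part I want to avoid).
# numDisparities/blockSize are placeholder values.
capL = cv2.VideoCapture(0)
capR = cv2.VideoCapture(1)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

while True:
    okL, frameL = capL.read()
    okR, frameR = capR.read()
    if not (okL and okR):
        break
    grayL = cv2.cvtColor(frameL, cv2.COLOR_BGR2GRAY)
    grayR = cv2.cvtColor(frameR, cv2.COLOR_BGR2GRAY)
    # compute() produces disparity for every pixel, even though I only
    # care about one target location in the scene.
    disparity = stereo.compute(grayL, grayR)
```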

Computing a whole depth map over and over is wasteful when I really only need to locate one object. I plan to use color filtering or some form of detectMultiScale() on each frame to identify the target.
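For the detection step, the color-filtering version would look something like this (the HSV range is an arbitrary placeholder; I'd run it once per camera per frame):

```python
import cv2
import numpy as np

def find_target_center(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return the (x, y) centroid of the largest blob inside the HSV range,
    or None if nothing matches. The range here is just a placeholder."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # [-2] keeps this working with both OpenCV 3.x and 4.x return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```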

My original math involved finding the angle of the targeted object from the center of each camera's image, then using a formula to turn those two angles into the depth of the object. The only issue is that if I do the math manually, I can't use any of OpenCV's calibration functions, which are basically crucial for accurate stereo vision. The advantage of doing the math manually is that it's very lightweight and fast.
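Something like the sketch below is what I'm hoping is possible: feed the stereoCalibrate/stereoRectify output into a per-point calculation instead of a dense matcher. I'm not sure this is the intended use of these functions, and K1/D1/R1/P1 etc. are assumed to come from a calibration done beforehand:

```python
import cv2
import numpy as np

def depth_of_matched_point(pt_left, pt_right, K1, D1, K2, D2, R1, R2, P1, P2):
    """Rectify just the two matched pixel coordinates and triangulate them,
    instead of computing disparity for the whole image.
    K*/D* - intrinsics/distortion from calibrateCamera or stereoCalibrate
    R*/P* - rectification rotations and projection matrices from stereoRectify"""
    pl = np.array([[pt_left]], dtype=np.float64)   # shape (1, 1, 2)
    pr = np.array([[pt_right]], dtype=np.float64)
    # With R and P given, undistortPoints returns rectified pixel coordinates
    rl = cv2.undistortPoints(pl, K1, D1, R=R1, P=P1)
    rr = cv2.undistortPoints(pr, K2, D2, R=R2, P=P2)
    # Triangulate the single correspondence; result is homogeneous (4x1)
    X = cv2.triangulatePoints(P1, P2, rl.reshape(2, 1), rr.reshape(2, 1))
    X /= X[3]
    return float(X[2, 0])  # Z in the left rectified camera frame
```

If triangulatePoints is overkill, the same two rectified points would also give a single-pixel disparity (xL - xR), so the simple Z = f * B / disparity formula should work on just that one correspondence.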

Is there any way to get the best of both worlds: OpenCV's stereo calibration features plus simple, quick disparity calculations for a single point?
