
How to measure distance between 2 objects in a video? - Edited

asked 2017-11-06 08:42:12 -0600 by pipeecs780

updated 2017-12-01 11:28:32 -0600

Hello, my name is Felipe and I'm from Chile. I'm currently working on a project with 2 points, measuring the distance between them in real time. I know it can be done with images, but I can't find a real solution for videos. I made a mask and then eroded it to isolate the colours that I need. I think the next step is labelling the points and then getting the distance. I will move the two objects up and down, and I need to measure the distance between them in real time. Here is an image showing how it looks so far.

Pre-Labeling

-------------------------EDIT-----------------------------

Now I can detect the 2 points and measure the "distance" between them, but the values I get appear to be in pixels, not in cm or inches. I don't know if somebody can confirm that.

MARCADORES

Now I have to export those values to a txt or csv file, because I need to send them in real time to a Google Drive spreadsheet. So my questions now are whether you can help me export the values to a file, and whether it is possible to show the values more slowly, I mean one value per second, because they print very fast and would probably arrive too fast at the spreadsheet. (I hope you can understand my English, hahaha.) Thank you again to everyone who responded. My code is currently this:

import cv2   
import numpy as np

#Captura de video a traves de la webcam
cap=cv2.VideoCapture(0)

while(1):
    d=0.1
    centers=[]
    _, img = cap.read()

    hsv=cv2.cvtColor(img,cv2.COLOR_BGR2HSV) # Convert the frame from BGR to HSV colour space.

    blue_lower=np.array([80,150,100],np.uint8)
    blue_upper=np.array([150,255,255],np.uint8)

    blue=cv2.inRange(hsv,blue_lower,blue_upper) # Build a mask from the blue colour range.

    kernel = np.ones((5, 5), "uint8") # 5x5 kernel used to erode the mask.

    blue=cv2.erode(blue,kernel, iterations=1) # Erode the mask to remove small noise.
    res1=cv2.bitwise_and(img, img, mask = blue) # Keep only the masked (blue) regions of the frame.


    (_,contours,hierarchy)=cv2.findContours(blue,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) # Find the contours of the objects visible in the mask (OpenCV 3.x signature).

    for pic, contour in enumerate(contours):
        area = cv2.contourArea(contour) # Area of the contour, used to filter out small detections.
        if(area>300):
            x,y,w,h = cv2.boundingRect(contour) # Bounding box of the contour.
            img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
            cv2.putText(img,"Marcador",(x,y),cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,0,0))


            M = cv2.moments(contour) # Moments of the contour, used to get the marker's mass center.
            cx = int(M['m10'] /M['m00'])
            cy = int(M['m01'] /M['m00'])
            centers.append([cx,cy])
            cv2.circle(img, (cx, cy), 7, (255, 255, 255), -1)

        if len(centers)==2:
            D = np.linalg.norm(np.subtract(centers[0], centers[1])) # Euclidean distance (in pixels) between the two mass centers.

(code truncated here; the full listing appears in the answer below)
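A minimal sketch of the CSV export and once-per-second logging asked about above, assuming D holds the pixel distance computed inside the capture loop and "distancias.csv" is just a placeholder file name:

import csv
import time

# Hypothetical CSV logger: call log_distance(D) once per processed frame;
# it only writes a row when at least one second has passed since the last write.
csv_file = open("distancias.csv", "w", newline="")
writer = csv.writer(csv_file)
writer.writerow(["timestamp", "distance_px"])

last_write = 0.0

def log_distance(distance):
    global last_write
    now = time.time()
    if now - last_write >= 1.0:   # throttle to roughly one value per second
        writer.writerow([now, distance])
        csv_file.flush()          # flush so a sync tool (e.g. the Drive desktop client) sees the row immediately
        last_write = now

Calling log_distance(D) right after D is computed leaves the display loop untouched; getting the file into a Google Drive spreadsheet can then be handled separately, for example by syncing the folder with the Drive client or importing the CSV into Google Sheets.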

Comments

I did something like this a little while ago. What I did was loop through the image's pixels, flood filling each white section with a unique colour. From there I calculated the centre of each coloured section (that is, the centre in terms of pixels), and the distance calculation is done between the centres. Getting the rays from the camera to the card centres is pretty simple trigonometry, based on the distance of the cards from the camera; this would give you the distance in real life. Here's some code to convert a pixel location to a point on an image plane 1 unit of distance in front of the camera:

https://www.gamedev.net/forums/topic/...

Are you familiar with posting code on GitHub?

sjhalayka ( 2017-11-06 09:44:50 -0600 )
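As a rough illustration of the pixel-to-image-plane conversion described in the comment above (a sketch, not the code from the linked thread; fx, fy, cx, cy are assumed pinhole-camera intrinsics in pixels):

import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    # Map a pixel (u, v) onto the image plane placed 1 unit in front of
    # a pinhole camera with focal lengths fx, fy and principal point (cx, cy).
    x = (u - cx) / fx
    y = (v - cy) / fy
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)   # unit-length ray direction

Scaling such a ray by the known distance of a card from the camera gives a 3D point, and the real-world separation of the two cards is then the norm of the difference between the two points.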

P.S. I'm considering the video to be merely a bunch of still images, and it's on these images that I calculate the centres. So, it doesn't really matter that it's a video versus an image, right?

sjhalayka ( 2017-11-06 10:38:56 -0600 )

I made a code that marks the centres of each section in the image:

https://github.com/sjhalayka/mask_dis...

Note that there are 3 sections... so you would need to find the centres of the two largest sections. Fortunately, the code creates a std::multiset that sorts the sections by size, which is always a bonus.

sjhalayka ( 2017-11-06 13:50:57 -0600 )

@sjhalayka Why such a complicated solution? Couldn't he just find the contours, calculate their moments along with their mass centers, and then compute the Euclidean distance between them? That would accomplish what he wants without any camera calibration and stuff. I am just curious why you did it that way.

eshirima ( 2017-11-06 16:42:02 -0600 )

Inexperience, mostly. I'll work on code that does what you describe.

sjhalayka ( 2017-11-06 16:46:56 -0600 )

So basically you want to measure the distance in space between the two cards? Do you have your camera matrices? You'd likely need to have the depth map of the cards. Does your camera give you a depth map?

I don't know how it all works in OpenCV yet, but with the OpenGL camera it is pretty easy to solve the distance problem once the Z-buffer is used: you just trace rays.

sjhalayka ( 2017-12-01 15:50:23 -0600 )

Yes, I would like to measure the distance between the two centers in cm, but I don't know how to get a depth map from my camera. Any idea how to do it in OpenCV?

pipeecs780 ( 2017-12-04 08:02:17 -0600 )

Do you have a Kinect? Perhaps your camera simply doesn't produce depth maps because it doesn't have a depth sensor.

sjhalayka ( 2017-12-04 09:02:39 -0600 )
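When no depth sensor is available, one common workaround (not covered further in this thread) is to calibrate a pixels-per-cm factor from an object of known physical size held at the same distance from the camera as the markers. A rough sketch, where the known width and the measured pixel width are made-up example values:

# Assumed calibration values, for illustration only.
KNOWN_WIDTH_CM = 8.5        # physical width of a reference object, e.g. a card
reference_width_px = 120.0  # its bounding-box width in pixels, measured once

pixels_per_cm = reference_width_px / KNOWN_WIDTH_CM

def px_to_cm(distance_px):
    # Only valid while the markers stay near the calibration distance
    # and move roughly parallel to the image plane.
    return distance_px / pixels_per_cm

Anything more general (markers moving toward or away from the camera) needs either a proper camera calibration with known geometry, or a depth sensor.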

Dear Felipe, my name is Nam, and I'm working on a project in which I want to measure the distance between 2 objects in a 30-second video. I would like to learn about this software. If you can, please contact me at this email: [email protected]. Thank you so much for your time.

Nam Nguyen ( 2017-12-12 16:55:37 -0600 )

2 answers


answered 2017-11-06 16:58:25 -0600

updated 2017-11-07 08:51:33 -0600

If I understood your question correctly, and since you said nothing about camera calibration: if you just want to find the (pixel) distance between those two objects, then simply:

  1. Find the respective contours of your masked objects. Python, C++

  2. Calculate the moments of the detected contours. Python, C++

  3. Extract the mass centers (refer to the links posted above)

  4. Then calculate the Euclidean distance between the mass centers (a Python sketch follows this list).
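Since the question's code is in Python, here is a compact sketch of those four steps, assuming mask is the binary image that contains the two markers (the C++ version follows below):

import cv2
import numpy as np

def marker_distance(mask, min_area=300):
    # findContours returns 3 values in OpenCV 3.x and 2 in 4.x; [-2] is the contour list either way.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    contours = [c for c in contours if cv2.contourArea(c) > min_area]
    if len(contours) < 2:
        return None
    # keep only the two largest contours
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    centers = []
    for c in contours:
        M = cv2.moments(c)
        centers.append((M['m10'] / M['m00'], M['m01'] / M['m00']))
    # Euclidean distance between the two mass centers, in pixels
    return float(np.linalg.norm(np.subtract(centers[0], centers[1])))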

Try out the solution proposed by @sjhalayka as well.

EDIT

Most of the heavy lifting was done by @sjhalayka and I just simplified the logic. The original code can be found here in without_contours.cpp

The bulk of the change was getting rid of his/her use of sets and maps just to keep track of the two largest contours. Other minor edits have been made as well and are noted in the comments.

Disclaimer: The logic shown below is tailored to OP's current question. Should he add an extra object and want to find the distances between all three, the same logic can be used, but its implementation details have been left out as a challenge for him to tackle.

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

// since it's just two elements, a tuple return should suffice.
tuple<size_t, size_t> getTheTwoLargestContours(const vector<vector<Point>>& contours)
{
    size_t largestLoc = 0, secondLargestLoc = 0;
    double largestArea = 0, secondLargestArea = 0;

    for(size_t index = 0; index < contours.size(); ++index)
    {
        double area = contourArea(contours[index]);

        // if this area beats the current largest, demote the old largest to second place
        if(area > largestArea)
        {
            secondLargestArea = largestArea;
            secondLargestLoc = largestLoc;
            largestArea = area;
            largestLoc = index;
        }
        else if(area > secondLargestArea) // otherwise it may still beat the current second largest
        {
            secondLargestArea = area;
            secondLargestLoc = index;
        }
    }

    return make_tuple(largestLoc, secondLargestLoc);
}

int main(void)
{
    // OpenCV is shying away from using the CV prefix
    Mat frame = imread("cards.png", IMREAD_GRAYSCALE);

    threshold(frame, frame, 127, 255, THRESH_BINARY);

    imshow("f", frame);

    Mat flt_frame(frame.rows, frame.cols, CV_32F); // float copy of the frame, normalised to [0, 1]

    for (int j = 0; j < frame.rows; j++)
        for (int i = 0; i < frame.cols; i++)
            flt_frame.at<float>(j, i) = frame.at<unsigned char>(j, i) / 255.0f;

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;

    findContours(frame, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    if(contours.size() < 2)
    {
        cout << "Error: must have 2 or more contours." << endl;
        return 0;
    }

    tuple<size_t, size_t> locations = getTheTwoLargestContours(contours);

    /*
     std::get<index>(tuple) is how you access tuple elements.
     Read more here: http://www.geeksforgeeks.org/tuples-in-c/

     Index 0 of locations is where the largest contour is located.
     So I calculate its moment along with its center.
     This logic can be converted into a loop should more contours be needed.
     */
    Moments mu = moments(contours[get<0>(locations)], false);
    Point2d largestMassCenter = Point2d(mu.m10 / mu.m00, mu.m01 / mu.m00);

    // do the same thing for the second largest
    mu = moments(contours[get<1>(locations)], false);
    Point2d secondLargestMassCenter = Point2d(mu.m10 / mu.m00, mu.m01 / mu.m00);

    // OpenCV has a norm function for calculating the Euclidean distance
    double distance = norm(largestMassCenter - secondLargestMassCenter);

    cout << "Distance (in pixels): " << distance << endl;

    Mat output ...
(remainder of the listing truncated)

Comments

I placed a new file called "with_contours.cpp" to the GitHub repository (https://github.com/sjhalayka/mask_dis...).

The code is about half the size of the file called "without_contours.cpp".

sjhalayka ( 2017-11-06 17:48:46 -0600 )

@sjhalayka I refactored your code even further. I tried adding comments explaining the logic. I thought your use of sets was overkill, especially because it was just to locate the two largest contours. Granted, my code isn't robust and it'd fail should he add other objects, but it shouldn't be hard to make it more dynamic.

eshirima ( 2017-11-07 08:41:22 -0600 )

Cool cool. :)

sjhalayka ( 2017-11-07 10:57:21 -0600 )

What is mu.m11 / mu.m00?

sjhalayka ( 2017-11-07 19:36:50 -0600 )

I have no idea what m11 is, but I know m00 is the contour area. You can read more about moments on this blog, but here's the actual paper implemented by OpenCV. You might have some luck finding out what the other spatial moment variables represent.

eshirima ( 2017-11-08 07:48:43 -0600 )
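For what it's worth, the spatial moments follow directly from their definition, so m11 is just another weighted sum over the region:

m_{pq} = \sum_{x} \sum_{y} x^{p} y^{q} I(x, y), \qquad \bar{x} = m_{10} / m_{00}, \quad \bar{y} = m_{01} / m_{00}

So m00 is the area of a binary region, m10/m00 and m01/m00 give the mass center, and m11/m00 is the mean of the product x*y over the region; it appears in the central moment mu11 = m11 - x̄ * m01, which is used to estimate orientation.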

answered 2017-12-01 11:01:32 -0600 by pipeecs780

Thank you so much for the response! I know it's been a while, but it has been a hard semester of work. I can measure the distance between 2 points now, but I get a value that I think is in pixels, not the real distance. I used np.linalg.norm.

import cv2   
import numpy as np

#Captura de video a traves de la webcam
cap=cv2.VideoCapture(0)

while(1):
    d=0.1
    centers=[]
    _, img = cap.read()

    hsv=cv2.cvtColor(img,cv2.COLOR_BGR2HSV) # Convert the frame from BGR to HSV colour space.

    blue_lower=np.array([80,150,100],np.uint8)
    blue_upper=np.array([150,255,255],np.uint8)

    blue=cv2.inRange(hsv,blue_lower,blue_upper) # Build a mask from the blue colour range.

    kernel = np.ones((5, 5), "uint8") # 5x5 kernel used to erode the mask.

    blue=cv2.erode(blue,kernel, iterations=1) # Erode the mask to remove small noise.
    res1=cv2.bitwise_and(img, img, mask = blue) # Keep only the masked (blue) regions of the frame.


    (_,contours,hierarchy)=cv2.findContours(blue,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) # Find the contours of the objects visible in the mask (OpenCV 3.x signature).

    for pic, contour in enumerate(contours):
        area = cv2.contourArea(contour) # Area of the contour, used to filter out small detections.
        if(area>300):
            x,y,w,h = cv2.boundingRect(contour) # Bounding box of the contour.
            img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
            cv2.putText(img,"Marcador",(x,y),cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,0,0))


            M = cv2.moments(contour) # Moments of the contour, used to get the marker's mass center.
            cx = int(M['m10'] /M['m00'])
            cy = int(M['m01'] /M['m00'])
            centers.append([cx,cy])
            cv2.circle(img, (cx, cy), 7, (255, 255, 255), -1)

        if len(centers)==2:
            D = np.linalg.norm(np.subtract(centers[0], centers[1])) # Euclidean distance (in pixels) between the two mass centers.
            print(D)


    cv2.imshow("Color Tracking",img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        cap.release()
        cv2.destroyAllWindows()
        break

I probably have to use a different function than np.linalg.norm to measure the distance exactly.


Comments

Can you mark one of the responses as your final answer then? This allows your question to be closed.

eshirima ( 2017-12-01 14:52:45 -0600 )


Stats

Asked: 2017-11-06 08:41:47 -0600

Seen: 14,982 times

Last updated: Dec 01 '17