OpenCV: real-time tracking of only the QR code
Hello everyone, I'm a newbie in this fantastic world of computer vision.
Part of my project is like an "in-out people counter", but detecting only QR codes. I started with the pyzbar library to detect and decode the QR code: I grab frames from a webcam, decode each of them with pyzbar, draw a rectangle around the code and finally show the result with cv2. Done this way, every time a frame is passed, the decode function detects the QR code as if it were a new one (obviously).
What I want is to track only and exclusively the QR code, ignoring everything else in the environment, identifying and tracking it so that it is decoded only once for as long as it is visible to the camera. The webcam works like a scanner. In the end, the result I would like to achieve is:
- when a QR code appears at the top of the webcam image, it is identified (once);
- when the QR code crosses the middle line, a counter is incremented (I saw a tutorial that uses frame subtraction);
- when the QR code exits at the bottom of the image, the program is ready to detect a new QR code.
The idea is that there will only be one QR code at a time in the image.
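To make the counting part concrete, this is the kind of state machine I have in mind, as a minimal sketch; it assumes some tracker hands me the QR code's centroid for each frame (or None when no code is visible), and that tracker is exactly the part I'm missing:

class CrossingCounter:
    def __init__(self, mid_y):
        self.mid_y = mid_y        # y coordinate of the middle line
        self.count = 0
        self.seen_above = False   # current code has been seen above the middle line
        self.counted = False      # current code has already been counted

    def update(self, centroid):
        if centroid is None:
            # The code left the image: ready for the next one.
            self.seen_above = self.counted = False
            return self.count
        _, y = centroid
        if y < self.mid_y:
            self.seen_above = True
        elif self.seen_above and not self.counted:
            self.count += 1       # it crossed the middle line going downwards
            self.counted = True
        return self.count

# e.g. counter = CrossingCounter(mid_y=240) for a 480-pixel-high frame

What I have working so far is only the per-frame decode and draw: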
import cv2
import numpy as np
from PIL import Image, ImageDraw
from pyzbar.pyzbar import decode

def decodeAndDraw(im):
    # Convert the BGR frame to grayscale for pyzbar and wrap it in a PIL image.
    cv2_im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    image = Image.fromarray(cv2_im)
    draw = ImageDraw.Draw(image)
    for barcode in decode(image):
        # Axis-aligned bounding box returned by pyzbar.
        rect = barcode.rect
        draw.rectangle(
            (
                (rect.left, rect.top),
                (rect.left + rect.width, rect.top + rect.height)
            ),
            outline='#0080ff'
        )
        # Exact (possibly rotated) outline of the code.
        draw.polygon(barcode.polygon, outline='#e945ff')
    image_data = np.asarray(image)
    return image_data
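and the capture loop that feeds it is, simplified, just:

cap = cv2.VideoCapture(0)   # webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # decodeAndDraw (above) runs pyzbar again on every single frame,
    # which is why the code is treated as new each time.
    cv2.imshow('scanner', decodeAndDraw(frame))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()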
Can you point me in the right direction? Many thanks!
To track the barcode, you can use optical flow or a homography. I think this will solve your problem.
Hi ak1, thanks for your reply. I took a look at homography, but I've seen that this method maps the points of one image to the corresponding points of another image, so it compares two images. Assuming one image comes from the webcam stream capturing the QR code, I would still need a static reference image of the QR code to recognize, and that is not good because my project is expected to recognize and decode multiple different QR codes at run time. For that reason I think the solution you suggested does not solve my problem. Am I wrong? Thank you.
It is not necessary to compare it with a static frame (ground truth). Take as a reference the first frame in which the barcode is detected reliably. Then carry the bounding rect of the detected barcode into the next frames using optical flow or a homography. After some x frames, detect the barcode again and keep tracking it. So the algorithm goes like this.
Note: use a homography or optical flow depending on your assumptions.
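A rough sketch of that idea (the parameter values are only examples, tune them for your setup; re-detection failures are not really handled here):

import cv2
import numpy as np
from pyzbar.pyzbar import decode

REDETECT_EVERY = 10                       # re-run pyzbar every N frames
lk_params = dict(winSize=(21, 21), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

cap = cv2.VideoCapture(0)
prev_gray, points, frame_idx = None, None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if points is None or frame_idx % REDETECT_EVERY == 0:
        # (Re-)detect: take the polygon corners of the first barcode found.
        barcodes = decode(gray)
        if barcodes:
            poly = barcodes[0].polygon
            points = np.array([[p.x, p.y] for p in poly], dtype=np.float32).reshape(-1, 1, 2)
    elif prev_gray is not None:
        # Track the corner points with pyramidal Lucas-Kanade optical flow.
        new_points, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
        points = new_points if st.all() else None   # drop the track on failure

    if points is not None:
        cv2.polylines(frame, [points.astype(np.int32)], True, (255, 0, 255), 2)

    cv2.imshow('tracking', frame)
    prev_gray, frame_idx = gray, frame_idx + 1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()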
@minimanimo You can do this in real time using the algorithm above. Sorry for the late reply.
@ak1 Thanks for this explanation. I have two questions for you:
1) Does this approach work even if the QR code is placed on a moving object in the scene? I ask because in that case the moving mass is larger, and so is the area that changes between a frame and the reference one.
2) What is the reason for discarding some frames, apart from (I suppose) the performance angle (fewer frames to process)?
@minimanimo I will answer question 2 first. Ans 2: I am not discarding any frames. I am saying that we detect in the first frame and then track over the next x frames; at frame x+2 I am asking to detect again and continue tracking from there. We do this for the following two reasons:
1) the barcode might leave the field of view, or be partially cut off by it;
2) some errors can accumulate while tracking.
Now for question 1. Ans 1: I have a doubt: is your camera also moving with the moving object? If the camera is stationary, this will work perfectly. If your camera is moving along with the moving object, then I think you have to remove the camera ego-motion (of that I am sure).
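If you do end up needing to remove ego-motion, one possible sketch (the function and parameter values are mine, only to illustrate the idea) is to estimate the background motion from features outside the barcode region and describe it with a homography:

import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray, qr_rect=None):
    # Mask out the QR region so its own motion does not bias the estimate.
    mask = np.full(prev_gray.shape, 255, dtype=np.uint8)
    if qr_rect is not None:
        x, y, w, h = qr_rect
        mask[y:y + h, x:x + w] = 0

    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                 minDistance=7, mask=mask)
    if p0 is None:
        return None
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good_old, good_new = p0[st.flatten() == 1], p1[st.flatten() == 1]
    if len(good_old) < 4:
        return None

    # Homography describing how the (mostly static) background moved,
    # i.e. the camera's own motion between the two frames.
    H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 5.0)
    return H

You could then transform the previous barcode points with cv2.perspectiveTransform using H and look only at whatever motion is left over.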
@ak1 This is what I've managed to achieve at the moment: https://youtu.be/edAWlJ-Dw0M
Following your advice, and using the optical flow example in the OpenCV documentation as a base:
Init: wait for a QR code in the scene
Get the rect boundaries, save the points and draw circles
Every x+2 frames
--> try to decode the QR code, get the new boundaries and draw circles on them
--> else (if no QR code is detected) estimate the optical flow, so get the points and draw
repeat*
I'm in a small room, with a bright lamp positioned above; I'm using the camera of my OnePlus One smartphone. As you can see from the video, the tracking is not good, and the QR code is also not detected by pyzbar while it is moving. How can I improve the situation?
@ak1 How can I improve the situation? I need the points to be more reliable, because I will have to compute the centroids and use them for counting, like an "in-out people counter". My settings are:
Is maxLevel too high? In the tutorial a value of 2 is used.
@minimanimo Sorry for the late reply, I was busy with my convocation. You can do it like this: https://jayrambhia.com/blog/lucas-kan... Yes, you have to set the parameters properly. For optical flow you need to maintain two assumptions: 1) brightness constancy, 2) the change between frames should be small. If the change is large (fast motion), use pyramids to handle it. I think the parameters used by jayrambhia (the link above) should work for you, judging from your video. You can also follow that blog to achieve your aim.
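For reference, the OpenCV Lucas-Kanade tutorial uses winSize=(15, 15) and maxLevel=2; something along these lines is a reasonable starting point, and a slightly larger window usually helps more with fast motion than a very high maxLevel (the values below are only a suggestion, tune them on your video):

import cv2

lk_params = dict(winSize=(21, 21),   # a larger window tolerates faster motion, but blurs small details
                 maxLevel=3,         # pyramid levels; 2-3 is normally enough
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))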