Ask Your Question

How to get the number of centroids in a rectangle? [SOLVED]

asked 2020-01-28 06:55:35 -0600 by whale

updated 2020-02-01 21:10:44 -0600 by supra56

The current if statement checks whether the centroid coordinates (x, y) are inside the red rectangle and prints them. Instead of getting the coordinates of each centroid inside the red rectangle, how can I get the total number of centroids inside it?

My while loop:

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=720)

    cv2.rectangle(frame, (box.top), (box.bottom), color, 2)

    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)

    net.setInput(blob)
    detections = net.forward()

    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        if confidence > args["confidence"]:
            idx = int(detections[0, 0, i, 1])

            if CLASSES[idx] != "car":
                continue

            det_box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])  # renamed: don't shadow the red-rectangle box
            (startX, startY, endX, endY) = det_box.astype("int")

            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY), COLORS[idx], 2)

            center = ((startX+endX)/2, (startY+endY)/2)
            x = int(center[0])
            y = int(center[1])
            cv2.circle(frame, (x, y), 5, (255,255,255), -1)

            if ((x > box.top[0]) and (x < box.bottom[0]) and (y > box.top[1]) and (y < box.bottom[1])):
                print(x, y)

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

Output picture:

(output image)

Current output (x and y coordinates of each centroid inside red rectangle):

111 237
532 247
307 249

Desired output (total number of centroids inside red rectangle):

3
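The same containment check can feed a counter instead of a print. A minimal, self-contained sketch, using the three centroids from the output above; the rectangle corners (0, 0) and (720, 480) are hypothetical stand-ins for box.top and box.bottom:

```python
def count_centroids_in_rect(centroids, top_left, bottom_right):
    # Count (x, y) points strictly inside the axis-aligned rectangle,
    # mirroring the x > left, x < right, y > top, y < bottom check above.
    x1, y1 = top_left
    x2, y2 = bottom_right
    return sum(1 for (x, y) in centroids if x1 < x < x2 and y1 < y < y2)

centroids = [(111, 237), (532, 247), (307, 249)]
print(count_centroids_in_rect(centroids, (0, 0), (720, 480)))  # 3
```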

Comments

deep learning is easier than math?

LBerger (2020-01-28 10:48:21 -0600)

2 answers


answered 2020-02-01 14:48:21 -0600 by supra56

updated 2020-02-01 14:50:59 -0600

The problem has been solved. Unfortunately this version drops the resizing and works on a still image only.

#!/usr/bin/python3.7
#OpenCV 4.2, Raspberry py 3/3b/4b, Buster ver 10
#Date: 2nd February, 2020.

import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]

COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)

print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

counter = 0
# loop over the detections
for i in np.arange(0, detections.shape[2]):
    confidence = detections[0, 0, i, 2]

    if confidence > args["confidence"]:
        idx = int(detections[0, 0, i, 1])

        if CLASSES[idx] != "car":  # use !=, not 'is not': compare string values, not identity
            continue

        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        label = "{}: {:.2f}%".format(CLASSES[idx],
                                     confidence * 100)
        cv2.rectangle(image, (startX, startY),
                      (endX, endY), COLORS[idx], 2)


        center = ((startX+endX)/2, (startY+endY)/2)
        x = int(center[0])
        y = int(center[1])
        cv2.circle(image, (x, y), 5, (255,255,255), -1)
        print(x, y)

        text_y = startY - 15 if startY - 15 > 15 else startY + 15  # don't reuse y; it still holds the centroid
        cv2.putText(image, label, (startX, text_y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

        counter += 1      
print('counter', counter)
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

[INFO] loading model...
[INFO] computing object detections...
636 542
986 518
272 542
counter 3

Output:

(output image)

[INFO] loading model...
[INFO] computing object detections...
687 411
412 403
237 260
291 384
238 90
543 407
735 97
153 379
106 318
143 75
349 95
counter 11

output:

(output image)


answered 2020-01-28 22:36:14 -0600

updated 2020-01-29 02:06:48 -0600

Each rectangle has only one centroid, whose coordinates you are already obtaining. So the number of centroids inside the red rectangle is simply the number of detected rectangles inside it. Your for loop produced 3 matches, which means 3 centroids.
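In the video loop from the question, the counter also has to be reset at the start of every frame, otherwise it keeps accumulating across frames. A minimal sketch with made-up per-frame centroid lists and hypothetical red-rectangle corners:

```python
frames = [  # hypothetical centroids detected in two consecutive frames
    [(111, 237), (532, 247), (307, 249)],
    [(120, 240), (530, 250)],
]
x1, y1, x2, y2 = 0, 200, 720, 300  # hypothetical red-rectangle corners

counts = []
for centroids in frames:
    counter = 0  # reset once per frame, before the detection loop
    for (x, y) in centroids:
        if x1 < x < x2 and y1 < y < y2:
            counter += 1
    counts.append(counter)
print(counts)  # [3, 2]
```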


Comments

But how can I obtain the number of centroids, since I can only get the coordinates? Can I put them in a list or increment some counter?

whale (2020-01-30 02:55:25 -0600)

What prototxt and model are you using? Put the counter increment just before cv2.imshow.

supra56 (2020-01-30 14:33:19 -0600)

Btw, I couldn't find prototxt and model files.

supra56 (2020-01-30 14:39:23 -0600)

MobileNetSSD_deploy.caffemodel and MobileNetSSD_deploy.prototxt

whale (2020-01-31 03:17:16 -0600)

Do you have same argparse as mine?

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")

ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")

ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
supra56 (2020-01-31 07:38:23 -0600)

Yeah, the same.

whale (2020-01-31 08:15:23 -0600)

Are you using cv2.VideoCapture or cv2.imread?

supra56 (2020-01-31 09:11:24 -0600)

cv2.imread for simple testing.

whale (2020-01-31 09:27:12 -0600)

Can you post original image? I got code working.

supra56 (2020-02-01 13:15:04 -0600)

Stats

Seen: 1,809 times

Last updated: Feb 01 '20