
Faster R-CNN: no results

asked 2019-04-12 06:44:42 -0600 by tobix10 (updated 2019-04-12 07:38:13 -0600)

I am trying to run the Faster R-CNN Inception v2 model in OpenCV 3.4.6 (C++) using the object_detection.cpp sample.

The model doesn't work: the DetectionOutput layer returns a single detection with empty data.

This is how I run the app (following https://github.com/opencv/opencv/tree...):

./object-detection --model=../models/faster_rcnn_inception.pb  --config=../models/faster_rcnn_inception.pbtxt --classes=../models/coco.classes --width=300 --height=300 --scale=0.00784 --rgb --mean="127.5 127.5 127.5" --input=video.mp4 --nms 0.01
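
For reference, the command above corresponds roughly to the following C++ calls. This is only a minimal sketch of what the sample does with those flags; frame.jpg and the single-image flow are placeholders (the real run reads frames from video.mp4).

// Minimal sketch: load the TensorFlow graph and run one frame through it
// with the same preprocessing as the flags above (scale 0.00784,
// mean 127.5, RGB swap, 300x300 input).
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
        "../models/faster_rcnn_inception.pb",
        "../models/faster_rcnn_inception.pbtxt");

    cv::Mat frame = cv::imread("frame.jpg");  // placeholder for a video frame

    cv::Mat blob = cv::dnn::blobFromImage(frame, 0.00784, cv::Size(300, 300),
                                          cv::Scalar(127.5, 127.5, 127.5),
                                          /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);
    cv::Mat out = net.forward();              // DetectionOutput blob, 1x1xNx7
    std::cout << "detections: " << out.size[2] << std::endl;
    return 0;
}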

Comments

YOLO classes with R-CNN? No. (However, it won't change anything, I guess.)

berak ( 2019-04-12 07:12:45 -0600 )

That is just the name of the file, because I tested YOLO first. Those are COCO class names; it doesn't change anything, they are only labels.

tobix10 ( 2019-04-12 07:37:57 -0600 )

How can we reproduce it? Is it a public model? Can you provide at least an image sample?

dkurt ( 2019-04-12 12:29:41 -0600 )

I use the model that is linked. I don't think the input is the problem here, e.g. for such an image it doesn't work either. The YOLO model works fine.

tobix10 ( 2019-04-15 03:29:56 -0600 )

1 answer


answered 2019-04-15 04:27:05 -0600 by dkurt

@tobix10, thanks for pointing out that issue!

The https://github.com/opencv/opencv/blob... is a bit outdated, because preprocessing parameters such as scale and mean are already in the graph:

node {
  name: "image_tensor"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_UINT8
    }
  }
}
node {
  name: "Preprocessor/mul"
  op: "Mul"
  input: "image_tensor"
  input: "Preprocessor/mul/x"
}
node {
  name: "Preprocessor/sub"
  op: "Sub"
  input: "Preprocessor/mul"
  input: "Preprocessor/sub/y"
}
node {
  name: "FirstStageFeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise"
  op: "DepthwiseConv2dNative"
  input: "Preprocessor/sub"
...

So the correct arguments are:

./example_dnn_object_detection --model=frozen_inference_graph.pb --config=faster_rcnn_inception_v2_coco_2018_01_28.pbtxt --width=450 --height=258


Note: you may vary the input width and height to achieve better accuracy.
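
In C++ terms, the fix amounts to building the blob with no scale and no mean, because the graph's Preprocessor/mul and Preprocessor/sub nodes already normalize the input. A minimal sketch under that assumption (frame.jpg is a placeholder; the file names and the 450x258 size come from the command above):

// Sketch of the corrected preprocessing: scalefactor 1.0 and zero mean,
// since the normalization is performed inside the graph itself.
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
        "frozen_inference_graph.pb",
        "faster_rcnn_inception_v2_coco_2018_01_28.pbtxt");

    cv::Mat frame = cv::imread("frame.jpg");

    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(450, 258),
                                          cv::Scalar(),
                                          /*swapRB=*/false, /*crop=*/false);
    net.setInput(blob);
    cv::Mat out = net.forward();  // 1x1xNx7 DetectionOutput blob
    return 0;
}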


Comments


Nice, thanks, but it seems that this network doesn't output classes. Is the format [batchId, classId, confidence, left, top, right, bottom] general for "DetectionOutput" layers?

tobix10 ( 2019-04-15 08:32:32 -0600 )

It does. The sample just prints confidences without class names. Try passing the file object_detection_classes_coco.txt.

dkurt ( 2019-04-15 09:40:30 -0600 )

OK, I see that there is a problem with the class index. "person" is the first class in the file, but 1 is subtracted from the index in the code. It is working now. Thanks.

tobix10 ( 2019-04-15 10:13:16 -0600 )
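
For reference, a minimal sketch of decoding that DetectionOutput format and mapping class IDs to COCO names, under the assumptions discussed above (each row is [batchId, classId, confidence, left, top, right, bottom] with coordinates relative to the frame, and class IDs start at 1, hence the classId - 1 lookup); the confThreshold default is arbitrary:

// Decode a 1x1xNx7 DetectionOutput blob and print class names with boxes.
#include <opencv2/core.hpp>
#include <iostream>
#include <string>
#include <vector>

void decodeDetections(const cv::Mat& out, const cv::Mat& frame,
                      const std::vector<std::string>& classes,
                      float confThreshold = 0.5f)
{
    // View the 1x1xNx7 blob as an Nx7 matrix of floats.
    cv::Mat detections(out.size[2], out.size[3], CV_32F, (void*)out.ptr<float>());
    for (int i = 0; i < detections.rows; ++i)
    {
        float confidence = detections.at<float>(i, 2);
        if (confidence < confThreshold)
            continue;
        int classId = (int)detections.at<float>(i, 1);
        // Class IDs start at 1 (line 1 of object_detection_classes_coco.txt).
        if (classId < 1 || classId > (int)classes.size())
            continue;
        int left   = (int)(detections.at<float>(i, 3) * frame.cols);
        int top    = (int)(detections.at<float>(i, 4) * frame.rows);
        int right  = (int)(detections.at<float>(i, 5) * frame.cols);
        int bottom = (int)(detections.at<float>(i, 6) * frame.rows);
        std::cout << classes[classId - 1] << ": " << confidence << " ["
                  << left << ", " << top << ", " << right << ", " << bottom << "]\n";
    }
}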
