
DNN/Tensorflow API works in python but not c++

asked 2019-01-10 14:18:34 -0600

richardb

updated 2019-01-10 16:02:52 -0600

Hi, I'm fairly new to training my own NN. I have gotten it to work, but only partially: for some reason I can only detect objects in Python, not in C++. In Python (3.6) this code detects objects as intended:

import cv2 as cv

cvNet = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'ssd_graph.pbtxt')

img = cv.imread('image2.jpg')
rows = img.shape[0]
cols = img.shape[1]
cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))
cvOut = cvNet.forward()

for detection in cvOut[0,0,:,:]:
    score = float(detection[2])
    if score > 0.3:
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (23, 230, 210), thickness=2)

cv.imshow('img', img)
cv.waitKey()

However, a very similar program in C++ runs without errors but does not return any results:

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

#include <fstream>
#include <sstream>
#include <iostream>

using namespace cv;
using namespace dnn;
using namespace std;

int main()
{
    String modelConfiguration = "ssd_graph.pbtxt";
    String modelWeights = "frozen_inference_graph.pb";
    Mat blob;

    Net net = readNetFromTensorflow(modelWeights, modelConfiguration);

    Mat img = imread("image2.jpg");
    int rows = img.rows;
    int cols = img.cols;

    //blobFromImage(frame, blob, 1 / 255.0, Size(inpWidth, inpHeight), Scalar(0, 0, 0), true, false);
    blobFromImage(img, blob, 1 / 127.5, Size(299, 299), Scalar(127.5, 127.5, 127.5), true, false);

    //Sets the input to the network
    net.setInput(blob);

    // Runs the forward pass to get output of the output layers
    vector<Mat> outs;
    net.forward(outs); //, getOutputsNames(net)

    for (int i=0; i < outs.size(); i++) {
        Mat detection;
        detection = outs[i];
        float* data = (float*)outs[i].data;
        float score = float(detection.data[2]);
        if (score >= 0.0) {
            int left = detection.data[3] * cols;
            int top = detection.data[4] * rows;
            int right = detection.data[5] * cols;
            int bottom = detection.data[6] * rows;
            rectangle(img, Point(left, top), Point(right, bottom), Scalar(23, 230, 210), 2);
            cout << detection.data[1] << endl; // detection[1] is the class label
        }
    }

    imshow("img", img);
    waitKey();

    return 0;
}

I'm using Python 3.6 and OpenCV 4.0.0, and trained the model with TensorFlow 1.12 starting from the SSD_Inception_V2_coco pre-trained model. Can anyone point me in the right direction? Thanks!

Update: I tried following the advice here, but optimize_for_inference.py gives an error: [KeyError: "The following input nodes were not found: {'Mul'}\n"]


1 answer


answered 2019-01-10 23:14:25 -0600

dkurt

a very similar program in c++

cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False)

and

blobFromImage(img, blob, 1 / 127.5, Size(299, 299), Scalar(127.5, 127.5, 127.5), true, false);

Deep learning models are mathematical objects. They are trained for a specific distribution of input data.


Comments

Thanks for pointing me in the right direction. Staring at the same code for so long makes me miss some obvious mistakes! I changed the C++ code, but still have no luck detecting anything:

Mat blob = blobFromImage(img, 1.0, Size(300, 300), Scalar(0, 0, 0), true, false);

I'm pretty sure the first 3 parameters and the last 2 are correct, so the issue is probably with my mean scalar. Any other ideas?

richardb ( 2019-01-11 08:11:31 -0600 )

That's because of wrong postprocessing. Use OpenCV's sample for reference: https://github.com/opencv/opencv/blob...

dkurt ( 2019-01-11 08:21:11 -0600 )

After re-reading the blobFromImage documentation (stating mean is a multiplier) I also tried Mat blob = blobFromImage(img, 1.0, Size(300, 300), Scalar(1, 1, 1), true, false); Still no results unfortunately.

richardb ( 2019-01-11 08:24:30 -0600 )

@richardb, The drawing loop in the code snippet from the question is wrong. Please try to run your model with OpenCV's sample first. If it works, you need to fix your code.

dkurt ( 2019-01-11 08:29:44 -0600 )

@dkurt: Thanks a lot! I got it working by changing the code based on the example. I'm new here, so I can't share the code yet, but I will post the working code tomorrow.

richardb ( 2019-01-11 09:43:34 -0600 )

@richardb: can you please share your changed code? I have the same problem. Thank you.

Andi1993 ( 2019-05-26 03:55:04 -0600 )

For me, the exact same few lines of code work in Python but not in C++: the forward method returns an empty Mat every time. I am wondering what solution you figured out. My C++ code:

cv::Mat query_img = cv::imread(image_path);
cv::dnn::Net dnn = cv::dnn::readNetFromTensorflow(transformed_graph_path,pbtxt_path);
cv::Mat blob;
cv::dnn::blobFromImage(query_img, blob, 1.0, cv::Size(512, 512));
dnn.setInput(blob); 
cv::Mat response=  dnn.forward();

Python:

image = cv2.imread(image_path)
net = cv2.dnn.readNetFromTensorflow(transformed_graph_path,pbtxt_path) 
blob = cv2.dnn.blobFromImage(image, 1,(512,512))
net.setInput(blob)
response = net.forward()

The C++ version returns nothing; the Python version works fine.

Raniem ( 2019-12-02 08:09:37 -0600 )

I have used dnn in C++ before and it worked fine. This model has a convolution layer as its last layer, while in my previous models the output layer was a dense classification layer.

I am wondering how you can get the output of a convolution layer in this case.

Raniem ( 2019-12-02 08:23:49 -0600 )

I had the same problem; the error was in the postprocessing. Just read the example closely here; it shows some different methods of data extraction.

MoscowskyAD ( 2020-05-20 10:53:56 -0600 )


Stats

Asked: 2019-01-10 14:18:34 -0600

Seen: 3,863 times

Last updated: Jan 10 '19