How to use blobFromImages in C++
System information (version)
- OpenCV => 3.4.5
- Operating System / Platform => Windows 7 / Windows 10
- Compiler => Microsoft Visual Studio 2019
- C++
Detailed description
I am using the dnn module very successfully with YOLOv3 and SSD MobileNet, processing a single image at a time with blobFromImage. Now I want to process a few images in parallel using blobFromImages. I wrote (YOLOv3 net):
cv::Mat frame1 = cv::imread("img1.jpg");
cv::Mat frame2 = cv::imread("img2.jpg");
std::vector<cv::Mat> inputs;
inputs.push_back(frame1);
inputs.push_back(frame2);
// Pack both images into a single 4-D NCHW blob.
cv::Mat blob;
cv::dnn::blobFromImages(inputs, blob, 1 / 255.F, inpSize, mean, true, false);
net.setInput(blob);
std::vector<cv::Mat> outs;
net.forward(outs, getOutputsNames(net));
postprocess(frame1, outs, net); // postprocess() is the single-image routine from the OpenCV sample
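(For reference, getOutputsNames is not an OpenCV function but the small helper from the OpenCV object detection sample; it looks roughly like this. Newer OpenCV versions also expose net.getUnconnectedOutLayersNames(), which returns the same list directly.)

std::vector<cv::String> getOutputsNames(const cv::dnn::Net& net)
{
    static std::vector<cv::String> names;
    if (names.empty())
    {
        // Indices of the unconnected (output) layers; note they are 1-based.
        std::vector<int> outLayers = net.getUnconnectedOutLayers();
        std::vector<cv::String> layersNames = net.getLayerNames();
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
            names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}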
When I load only one image it works fine. When I load 2 images the outs matrices appear to be empty. What am I doing wrong?
I have a sample program that processes 2 images with a TensorFlow model and works just fine. I tried to migrate it to my darknet model. The output matrix seems to differ between one image and 2 images: with one image the output size makes sense (2028 x 85), but with 2 images it is reported as -1 x -1. Is the output format different in the two cases? With the TensorFlow model the output matrix size is also -1 x -1, but that output is read differently from the darknet one.
I think that OpenCV with a darknet model does not work with multiple images. Can someone confirm this, please?
Regarding the output being -1 x -1: when a cv::Mat has more than 2 dimensions, rows and cols are reported as -1, and you have to query the shape through the size member instead. So in your code, checking outs[i].size[n] for n < outs[i].dims (3 here, since the batched output is a 3-D Mat) will give you the actual dimensions of each output Mat.
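A short sketch of that inspection, plus how the batched output could be split back into per-image detections. It assumes the batched darknet output is laid out as [batch, numBoxes, 4 + 1 + numClasses]; that layout is an assumption on my side, so verify it against the printed sizes first.

// Print the real shape of each output; rows/cols report -1 for Mats with more than 2 dims.
for (size_t i = 0; i < outs.size(); ++i)
{
    std::cout << "output " << i << ": dims=" << outs[i].dims << ", shape=";
    for (int d = 0; d < outs[i].dims; ++d)
        std::cout << outs[i].size[d] << (d + 1 < outs[i].dims ? " x " : "");
    std::cout << std::endl;
}

// Assuming a 3-D layout [batch, boxes, attributes], take a 2-D view per image.
cv::Mat out = outs[0];
if (out.dims == 3)
{
    int numBoxes = out.size[1];
    int numAttrs = out.size[2];
    for (int b = 0; b < out.size[0]; ++b)
    {
        // Wrap the b-th slice without copying; ptr(b) points at that image's rows.
        cv::Mat perImage(numBoxes, numAttrs, CV_32F, out.ptr<float>(b));
        // perImage now has the same numBoxes x numAttrs shape as the single-image output
        // (e.g. 2028 x 85) and can go through the usual postprocess step.
    }
}

With a single input image the same data comes back as a plain 2-D Mat, which is why the 2028 x 85 case looked normal to you.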