Halide does not leverage the GPU at all (OpenCV DNN)
OS: Ubuntu 16.04.1
I am trying to do object detection with Halide. Before I set Halide as the backend, the FPS is around 22~23, CPU usage is around 70~80%, and GPU RAM usage is around 431 MiB (monitored with nvidia-smi).
After I turn Halide on (all other code stays the same):
net.setPreferableBackend(cv::dnn::DNN_BACKEND_HALIDE);
the FPS drops to 6~10, while CPU and GPU RAM usage stay similar to the case without Halide.
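For context, this is roughly how I set the backend and measure the frame rate. The model files, input size, and dummy frame below are placeholders rather than my exact code:

#include <opencv2/dnn.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Placeholder model files; my real network is loaded the same way.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_HALIDE); // comment out for the default backend

    // Stand-in for a camera frame.
    cv::Mat frame(300, 300, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(300, 300), cv::Scalar(), false, false);

    const int iterations = 100;
    double start = static_cast<double>(cv::getTickCount());
    for (int i = 0; i < iterations; ++i)
    {
        net.setInput(blob);
        net.forward();
    }
    double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
    std::cout << "FPS: " << iterations / seconds << std::endl;
    return 0;
}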
At first I thought it was because the network and batch size were too small, so I increased the batch size from 1 to 32 (roughly as in the sketch below). However, GPU RAM usage did not increase at all, even with a batch size of 32; instead CPU usage rose above 90%. It looks like the Halide backend never uses the GPU at all.
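To be concrete, this is roughly how I built the batched input, continuing the sketch above (32 dummy frames here; in my code they come from the video stream):

// Pack 32 frames into a single 4D blob of shape 32x3x300x300.
std::vector<cv::Mat> frames(32, cv::Mat(300, 300, CV_8UC3, cv::Scalar(0, 0, 0)));
cv::Mat batchBlob = cv::dnn::blobFromImages(frames, 1.0, cv::Size(300, 300),
                                            cv::Scalar(), false, false);
net.setInput(batchBlob);
cv::Mat out = net.forward(); // GPU RAM usage stays flat even with this batch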
I did a similar test with the sample 'resnet_ssd_face.cpp'. In my case the CUDA dependency was not included, so Halide will never use the GPU, but the sample's performance still drops to about half of the original frame rate (~24 FPS without Halide, ~10 FPS with Halide). Is this a bug, or is something missing in the compilation?
How can I get the dnn module to use the NVIDIA graphics card? Could you please explain? Do I need to compile OpenCV with both Halide and CUDA?
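For what it's worth, this is what I currently guess is needed, assuming the Halide backend can be pointed at a GPU via the DNN_TARGET_OPENCL target (that target choice is my assumption, and is exactly what I would like to have confirmed):

// My guess: select the Halide backend and an OpenCL target so it runs on the GPU.
// Assumes OpenCV was built against a GPU-capable Halide; not sure this is correct.
net.setPreferableBackend(cv::dnn::DNN_BACKEND_HALIDE);
net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);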