Errors while executing forward step with a Torch model
I have trained a Torch model and it works well in Torch/Lua. But after I import it into OpenCV and execute the forward step, the following error occurs: "Incorrect size of input array (inconsistent shape for ConcatLayer) in cv::dnn::ConcatLayerImpl::getMemoryShapes", file dnn/src/layers/concat_layer.cpp, line 94.
All the code can be found here: fan2.lua is the file that creates the Torch model, and main.cpp is the file that loads the model and executes the forward step.
@BigMao Chen, please attach a serialized model so people can reproduce the error easily without a Torch installation.
@dkurt Sorry, I had some trouble uploading my model. Here is the model, and I have updated the custom layer information (that you provided) on GitHub. Thank you!
@dkurt Hi, sorry to bother you again. I found that this error appears when the forward step reaches a ConcatLayer that has 4 inputs (1 * 256 * 64 * 64, 1 * 128 * 32 * 32, 1 * 64 * 32 * 32, 1 * 64 * 32 * 32) and a single output (1 * 256 * 64 * 64). Could you please give me some advice about why this happens? I don't understand how the ConcatLayer works in OpenCV :( The input of my model is 1 * 3 * 256 * 256, and I only ever use a concat layer to split the flow into at most two branches, so why would this ConcatLayer have 4 inputs? Thanks for your patience.
@BigMao Chen, that's OK, don't worry. A concat layer concatenates multidimensional blobs into one. For example, you can concatenate several images column-wise into a single row: the resulting image has a number of columns equal to the sum of the images' widths. If the images have different heights, we could pad with zeros to fit the maximum height. The concat layer does the same, but for 3- or 4-dimensional blobs.
@dkurt Does that mean I can fix the error above by adding a zero-padding step myself in ConcatLayerImpl, so that the sizes of the 3rd and 4th dimensions of the inputs equal those of the output, and this check never fails?
@BigMao Chen, I think this is a bug, because only nn.DepthConcat adds zero padding. So we need to reproduce your experiment carefully to figure out whether the problem is in the importer or in the custom layer you created. Anyway, the PR with custom layers is not merged yet, so for now this issue has lower priority than ones affecting the current master branch.
@BigMao Chen, I found that the problem is in the import of CAddTable/JoinTable layers: they were connected to every unconnected blob, which is the wrong strategy in the case of embedded residual connections. We're going to fix it. Thanks!
@BigMao Chen, could you please test the changes from the pull request https://github.com/opencv/opencv/pull... ?
Oh, sorry for replying so late. I will test it as soon as possible, thanks!
@dkurt I have checked that the forward function works correctly. But I haven't finished the code that converts the forward output into the final result, so it will take me some time to examine whether the result produced by OpenCV matches the one produced by Torch. Thanks for your help again!! I will let you know the test results once I'm done.