Hey,
I am using C++ OpenCV 3.1 on Ubuntu Xenial. I think I found a bug in OpenCV, but I want to make sure that I am not missing something.
I have code that first trains my ANN_MLP with training data in one method. Then, on user input, a new sample is generated and forwarded to my predict() wrapper method.
This is how the training snippet looks:
Mat_<float> responses = Mat::zeros(data.rows, categories.size(), data.type());
(...) // fill up responses
Mat_<int> layer_sizes(1, 3); // gets filled up
(...) // fill up layer_sizes
mlp = ml::ANN_MLP::create(); // mlp is a class variable
mlp->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM, 0.1, 0.1);
mlp->setTrainMethod(ml::ANN_MLP::BACKPROP, 0.1, 0.1);
mlp->setLayerSizes(layer_sizes);
traindata = ml::TrainData::create(data, ml::ROW_SAMPLE, responses); //traindata is a class variable
mlp->train(traindata);
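For reference, here is a minimal self-contained version of the training part. The toy data, the category count and the layer sizes (2 inputs, 5 hidden, 3 outputs) are made up purely for illustration; my real data is of course different:

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
using namespace cv;

int main()
{
    // Toy data: 4 samples with 2 features each, 3 categories (made up for illustration)
    Mat_<float> data(4, 2);
    data << 0.1f, 0.2f,  0.8f, 0.9f,  0.4f, 0.5f,  0.9f, 0.1f;
    Mat_<float> responses = Mat::zeros(data.rows, 3, CV_32F);
    responses(0, 0) = 1.f; responses(1, 1) = 1.f; responses(2, 2) = 1.f; responses(3, 1) = 1.f;

    Mat_<int> layer_sizes(1, 3);
    layer_sizes << 2, 5, 3; // input = #features, one hidden layer, output = #categories

    Ptr<ml::ANN_MLP> mlp = ml::ANN_MLP::create();
    mlp->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM, 0.1, 0.1);
    mlp->setTrainMethod(ml::ANN_MLP::BACKPROP, 0.1, 0.1);
    mlp->setLayerSizes(layer_sizes);

    Ptr<ml::TrainData> traindata = ml::TrainData::create(data, ml::ROW_SAMPLE, responses);
    mlp->train(traindata);

    // Predicting right after train() in the same scope works:
    Mat results;
    mlp->predict(data.row(0), results);
    return 0;
}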
Training works well here! But when I try to predict within another method of the same class, taking a row from the training set as the sample, I get a segmentation fault unless I do it this way:
mlp->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM, 0.1, 0.1);
mlp->train(traindata);
mlp->predict(feature, results);
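To be clear, this is all I expected the predict wrapper to need (feature is a 1xN CV_32F row matching the input layer; picking the strongest output neuron as the category is just my reading of the results layout):

Mat results;
mlp->predict(feature, results);        // this alone is what segfaults for me
Point max_loc;
minMaxLoc(results, 0, 0, 0, &max_loc); // strongest output neuron = predicted category
int predicted_category = max_loc.x;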
As you can see, I have to set the activation function again and train again, which is really annoying. It may be related to the following bug: http://code.opencv.org/issues/4251, because when I run the program with gdb the error message contains
icv_y8_ownMinMaxIndx_32f_C1R_U8_2
Now I think that ANN_MLP fails to internally store the activation function as well as the trained state (or parts of it).
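As a sanity check (not something I have in the code yet), I could query the model state at the top of the predict wrapper to see which part gets lost between the two methods:

// Hypothetical diagnostics at the start of my predict() wrapper (needs <iostream>)
CV_Assert(!mlp.empty());                     // the Ptr itself still points to a model?
std::cout << "isTrained:   " << mlp->isTrained() << std::endl;
std::cout << "layer count: " << mlp->getLayerSizes().total() << std::endl;
if (mlp->isTrained())
    std::cout << "weights(1) rows: " << mlp->getWeights(1).rows << std::endl;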
What do you think?