KNearest slow? C++
Hi guys, I need to use KNearest from OpenCV in C++. I read the docs, but I didn't understand them well.
I have a confidence map (Mat_<float>) and a set of confidence features (Mat_<float>[40]) associated with this map, so each value of the confidence map is associated with 40 values.
For each pixel of the confidence map I need to find the mean value of its K nearest neighbours, and I need to use the confidence features as the weights for the KNN search.
I tried to implement this in C++; it works in some ways, but it is very slow (maybe one or two hours to complete the run).
#include <opencv2/ml.hpp>
using namespace cv;

Ptr<ml::KNearest> knn(ml::KNearest::create());
knn->setIsClassifier(false); // regression mode: findNearest returns the mean response of the neighbours

int num_samples = rows * cols;
Mat samples(num_samples, 40, CV_32F);  // each row has 40 features and represents a pixel
Mat response(1, num_samples, CV_32F);  // row i of samples is associated with the (0, i) value of this array (or not?)

// here I just fill samples and response
int i = 0;
for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        response.at<float>(0, i) = _confidence_map.at<float>(r, c);
        for (int k = 0; k < 40; k++) {
            samples.at<float>(i, k) = feature_array[k].at<float>(r, c);
        }
        i++;
    }
}

// the training is very fast
knn->train(samples, cv::ml::ROW_SAMPLE, response);

// ...but this call is what takes hours
Mat result;
knn->findNearest(samples, 20, result);
Am I doing something wrong? Thanks for the help.
maybe this needs some more explanation? do you mean colors? positions?
what are you trying to achieve, in general, here?
also, we need some numbers: how many train samples do you have? how many classes? and why the 20?
(e.g.: if you only have 3 samples per class, using a K of 20 won't work)
Yeah, sorry, I wasn't very clear. I edited the question; I hope it's clearer now.
I'm trying to implement this from a paper that is not very clear, so I'm a bit confused, sorry.
somewhat better, but hmmm.
do you have a link to the paper, maybe? (so we can see for ourselves?)
KNearest is a classification mechanism, so what are your classes/responses/labels?
(i'm somewhat thinking you wanted (unlabelled) clustering instead, but no idea without further explanation)
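btw, about the speed: with num_samples = rows*cols, your findNearest call compares every query row against every training row (the default is a brute-force search), so the whole thing is O(N^2) in the number of pixels, times 40 dims. here's a minimal sketch of a faster, approximate alternative using cv::flann (randomized kd-trees) instead of ml::KNearest, reusing the samples/response layout from your code; K=20 comes from your call, but the index/search parameters here are just guesses you'd have to tune:

#include <opencv2/flann.hpp>

// build an approximate index over the 40-d feature rows (4 randomized kd-trees)
cv::flann::Index index(samples, cv::flann::KDTreeIndexParams(4));

const int K = 20;
Mat indices(num_samples, K, CV_32S);
Mat dists(num_samples, K, CV_32F);
index.knnSearch(samples, indices, dists, K, cv::flann::SearchParams(64));

// mean confidence of the K (approximate) neighbours of each pixel
// (note: querying the training set itself means each pixel is among its own
//  neighbours, same as with your findNearest above)
Mat filtered(num_samples, 1, CV_32F);
for (int i = 0; i < num_samples; i++) {
    float sum = 0.f;
    for (int j = 0; j < K; j++)
        sum += response.at<float>(0, indices.at<int>(i, j));
    filtered.at<float>(i, 0) = sum / K;
}

(there's also knn->setAlgorithmType(ml::KNearest::KDTREE), but flann is the usual route for large nearest-neighbour searches in opencv.)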
I'm sorry, but it's behind a paywall; however, I'll leave the link here: https://ieeexplore.ieee.org/document/...
this is your problem?
Yes, the part where it says that the confidence map is further improved. The KNN is used to compute the filtered confidence maps Qi (pixel level) and Qn (superpixel level).
The next step is just a simple weighted sum between Qi.at(x,y) and Qn.at(x,y) to get the final value.
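In code, that final step would be something like this (just a sketch; Qi and Qn are assumed to be CV_32F maps of the same size, and alpha is the paper's blending weight, the 0.5 here is only a placeholder):

double alpha = 0.5; // placeholder, the actual weight comes from the paper
Mat Q;
addWeighted(Qi, alpha, Qn, 1.0 - alpha, 0.0, Q); // Q = alpha*Qi + (1-alpha)*Qn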
I edited the question with some code and a better explanation; I hope it is clear now.