Hello,
I've got a question about masks in feature detection.
When using AKAZE's `detectAndCompute` method with a very restrictive mask, does the mask limit only the possible center points for feature detection, or does it also limit the pixels that contribute to the descriptor computation? In other words, for a keypoint inside the unmasked region, is the descriptor computed USING or NOT USING pixels from the masked region?
(My bigger-picture problem is that I would like to extract AKAZE descriptors at specific pixel locations, from features detected with GFTT. I know that descriptors are normally paired with their own detector... which is exactly what I'm trying to skirt around.)
// setup
Mat desc0;
std::vector<KeyPoint> key0;
Ptr<AKAZE> m_akaze = AKAZE::create(...params...);

// make a very tight mask (any non-zero value marks a kept pixel)
Mat mask = Mat::zeros(240, 320, CV_8U);
mask(Rect(100, 100, 5, 5)) = 255;

m_akaze->detectAndCompute(img, mask, key0, desc0);
Any help would be greatly appreciated. Thank you kindly.