goodFeaturesToTrack vs cornerHarris
What is the difference between using goodFeaturesToTrack() with useHarrisDetector=true and simply using cornerHarris()?
I assumed they were using the same code under the hood, but when I tested both on live streaming video, goodFeaturesToTrack(..., useHarrisDetector=true) gave more stable, less noisy corners, while cornerHarris() detected the same or similar corners but was much less robust. My ultimate goal is to pass the detected corners to cornerSubPix() for sub-pixel accuracy.
Using OpenCV 3.1.0.
Code:
goodFeaturesToTrack
// GFTT Settings
int maxCorners = 30;
double qualityLevel = 0.05;
double minDistance = 2.0;
int blockSize = 3;
bool useHarrisDetector = true;
double k = 0.04;
// A place to put the returned corners.
// each element is an (x, y) coord of the corner
std::vector<cv::Point2f> corners;
corners.reserve(maxCorners);
// Use GFTT to find Harris corners
goodFeaturesToTrack(gray, corners, maxCorners, qualityLevel,
                    minDistance, Mat(), blockSize, useHarrisDetector, k);
// Draw the corners
for (size_t i = 0; i < corners.size(); i++) {
    circle(frame, corners[i], 2, Scalar(0, 0, 255), -1, 8, 0);
}
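Since sub-pixel accuracy is the end goal, here is a minimal sketch of the refinement step I have in mind, feeding the GFTT corners into cornerSubPix(); the winSize, zeroZone, and termination values are untuned placeholders:

// Refine the GFTT corners in place to sub-pixel accuracy.
// winSize/zeroZone/criteria below are placeholder values, not tuned.
cv::Size winSize(5, 5);                // search window is (5*2+1) x (5*2+1)
cv::Size zeroZone(-1, -1);             // no dead zone in the search window
cv::TermCriteria criteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 40, 0.001);
cornerSubPix(gray, corners, winSize, zeroZone, criteria);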
cornerHarris
// Corner Harris Settings
int blockSize = 2; // neighborhood size
int ksize = 3; // Sobel aperture
double k = 0.04; // harris detector free parameter
Mat dst;
cornerHarris(gray, dst, blockSize, ksize, k);
// dilate(gray, gray, Mat());
// Normalizing
normalize(dst, dst, 0, 255, NORM_MINMAX, CV_32FC1);
// convertScaleAbs(dst, dst);
// Draw the corners
int thresh = 175; // threshold on the normalized [0, 255] response
for (int j = 0; j < dst.rows; j++) {
    for (int i = 0; i < dst.cols; i++) {
        if ((int)dst.at<float>(j, i) > thresh) {
            circle(frame, Point(i, j), 2, Scalar(0, 0, 255), -1, 8, 0);
        }
    }
}
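For reference, here is a rough sketch of filtering the raw response the way goodFeaturesToTrack appears to do internally: thresholding relative to the strongest response and keeping only local maxima via a dilation-based non-max suppression. The 0.05 factor is my guess at mirroring qualityLevel above; this is not what GFTT does verbatim:

// Rough sketch: approximate GFTT-style filtering on the raw Harris response.
// Threshold relative to the strongest response and keep only local maxima.
double maxVal;
minMaxLoc(dst, nullptr, &maxVal);
Mat dilated;
dilate(dst, dilated, Mat()); // each pixel becomes the max of its 3x3 neighborhood
for (int j = 0; j < dst.rows; j++) {
    for (int i = 0; i < dst.cols; i++) {
        float r = dst.at<float>(j, i);
        // keep a pixel only if it is a local maximum above the relative threshold
        if (r > 0.05 * maxVal && r == dilated.at<float>(j, i)) {
            circle(frame, Point(i, j), 2, Scalar(0, 0, 255), -1, 8, 0);
        }
    }
}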