I can see there are implementations of feature descriptors such as SIFT and SURF in OpenCV, but they extract sparse keypoints from the images. I want to do dense feature matching between two images.
My approach: I can generate a dense grid of keypoints on the image like this, and now I need to compute the descriptors at these keypoints:

```python
step_size = 2
keypoints = [cv2.KeyPoint(x, y, step_size)
             for y in range(0, img1.shape[0], step_size)
             for x in range(0, img1.shape[1], step_size)]
```