
Python: Running estimateRigidTransform in opencv/python; 8uC1 or 8uC3 error

asked 2014-06-21 14:25:05 -0600 by angel.spatial

I currently have two matching point sets built into a numpy array of float32:

points1 = 
[[  346.70220947  9076.38476562]
 [  922.99554443  9096.4921875 ]
 [  776.96466064  9108.79101562]
 [  449.0173645   9080.61816406]
 [ 2843.19433594  1226.93212891]
 [  779.95275879  9094.76855469]
 [  451.46853638  9092.5078125 ]
 [ 3981.4621582   1237.50964355]
 [  132.38700867  9086.7890625 ]
 [  819.10943604  8286.74023438]
 [ 1963.64025879  1220.06921387]
 [ 1253.79321289  9095.75292969]]

points2 = 
[[ 55110.36328125   9405.07519531]
 [ 55686.71875      9423.63574219]
 [ 55540.8515625    9435.80078125]
 [ 55212.58203125   9408.00585938]
 [ 57598.76171875   1551.92956543]
 [ 55543.78125      9421.88769531]
 [ 55214.40625      9420.46972656]
 [ 58737.41796875   1561.14831543]
 [ 54895.9296875    9414.58203125]
 [ 55581.87109375   8613.87011719]
 [ 56718.76953125   1546.02197266]
 [ 56017.8125       9422.52050781]]

and I'm trying to run:

affine = cv2.estimateRigidTransform(points2,points1,True)
print affine

so that I can generate an affine matrix that can then be translated into a world file (.tfw). The world file is for GIS software, which will use it to project these images on the fly.
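For reference, a .tfw world file is just six plain-text lines holding the six affine coefficients. A minimal sketch, assuming a 2x3 affine `[[A, B, C], [D, E, F]]` that maps pixel `(col, row)` to world `(x, y)` (the helper name `affine_to_tfw` is mine, not part of any library):

```python
def affine_to_tfw(m):
    """Return the six world-file lines for a 2x3 affine.

    m = [[A, B, C], [D, E, F]] with x = A*col + B*row + C,
        y = D*col + E*row + F.
    World files store the coefficients in the order A, D, B, E, C, F.
    """
    (a, b, c), (d, e, f) = m
    return "\n".join("%.10f" % v for v in (a, d, b, e, c, f))

# Example: 1-unit pixels, north-up image anchored at (55000, 9500)
tfw = affine_to_tfw([[1.0, 0.0, 55000.0], [0.0, -1.0, 9500.0]])
print(tfw)
```

Writing that string to a file next to the image (same base name, `.tfw` extension) is all most GIS packages need.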

At the moment I am getting an error:

Both input images must have either 8uC1 or 8uC3 type in function cvEstimateRigidTransform

I'm not really sure what's going on here. I thought I could use two point sets as parameters as long as I have six or more pairs.

Any thoughts or recommendations would be much appreciated!


Comments

8uC1 == unsigned char, one channel; 8uC3 == unsigned char, three channels. Why don't you try converting the values to uchar? Not sure about it, though.

GilLevi (2014-06-21 16:34:07 -0600)

Hi, I'm having a very similar problem. Do you manage to find a solution?

TomLoveluck (2015-07-14 07:07:05 -0600)

2 answers


answered 2017-09-13 07:17:13 -0600

updated 2017-09-14 10:19:55 -0600

I had the same weird error, but in Java. In my case, it seemed that estimateRigidTransform couldn't recognize that the two Mat objects I was passing were actually 2D point sets, so I applied a workaround to convert my match points from MatOfKeyPoint to MatOfPoint2f.

UPDATE: Filtering your matches is important, because if you don't you may get an empty array as the result of the transform.
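The same min-distance filter can be sketched in Python for anyone following along from the question (this assumes match objects with a `.distance` attribute, as cv2.DMatch provides; the 3x threshold mirrors the Java code below):

```python
from types import SimpleNamespace

def filter_matches(matches, ratio=3.0):
    # Keep only matches whose distance is within ratio * the best distance
    if not matches:
        return []
    min_dist = min(m.distance for m in matches)
    return [m for m in matches if m.distance < ratio * min_dist]

# Stand-in match objects; real code would pass the list from matcher.match()
ms = [SimpleNamespace(distance=d) for d in (10.0, 12.0, 50.0, 80.0)]
good = filter_matches(ms)
print([m.distance for m in good])  # [10.0, 12.0]
```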

Here is the complete Java code (it's not Python, but maybe it will help you):

FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

// Load First Image
Mat img1 = Imgcodecs.imread("img1_path", Imgcodecs.IMREAD_GRAYSCALE);
Mat img1_descriptors = new Mat();
MatOfKeyPoint img1_keypoints_mat = new MatOfKeyPoint();

// Detect KeyPoints
detector.detect(img1, img1_keypoints_mat);
descriptor.compute(img1, img1_keypoints_mat, img1_descriptors);

// Load Second Image
Mat img2 = Imgcodecs.imread("img2_path", Imgcodecs.IMREAD_GRAYSCALE);
Mat img2_descriptors = new Mat();
MatOfKeyPoint img2_keypoints_mat = new MatOfKeyPoint();

// Detect KeyPoints
detector.detect(img2, img2_keypoints_mat);
descriptor.compute(img2, img2_keypoints_mat, img2_descriptors);

// Match KeyPoints
MatOfDMatch matOfDMatch = new MatOfDMatch();
matcher.match(img1_descriptors, img2_descriptors, matOfDMatch);

// Filtering the matches
List<DMatch> dMatchList = matOfDMatch.toList();
Double max_dist = 0.0;
Double min_dist = 100.0;

for(int i = 0; i < img1_descriptors.rows(); i++){
    Double dist = (double) dMatchList.get(i).distance;
    if(dist < min_dist) min_dist = dist;
    if(dist > max_dist) max_dist = dist;
}
LinkedList<DMatch> good_matches = new LinkedList<>();
for(int i = 0; i < img1_descriptors.rows(); i++){
    if(dMatchList.get(i).distance < 3*min_dist){
        good_matches.addLast(dMatchList.get(i));
    }
}

// Converting to MatOfPoint2f format
LinkedList<Point> img1_points_list = new LinkedList<>();
LinkedList<Point> img2_points_list = new LinkedList<>();

List<KeyPoint> img1_keyPoints_list = img1_keypoints_mat.toList();
List<KeyPoint> img2_keyPoints_list = img2_keypoints_mat.toList();

int limit = good_matches.size();
for(int i = 0; i < limit; i++){
    img1_points_list.addLast(img1_keyPoints_list.get(good_matches.get(i).queryIdx).pt);
    img2_points_list.addLast(img2_keyPoints_list.get(good_matches.get(i).trainIdx).pt);
}

MatOfPoint2f img1_point2f_mat = new MatOfPoint2f();
img1_point2f_mat.fromList(img1_points_list);

MatOfPoint2f img2_point2f_mat = new MatOfPoint2f();
img2_point2f_mat.fromList(img2_points_list);

// Draw match points
Mat output = new Mat();
Features2d.drawMatches(img1, img1_keypoints_mat, img2, img2_keypoints_mat, matOfDMatch, output);
Imgcodecs.imwrite("output.png", output);

Mat result = Video.estimateRigidTransform(img1_point2f_mat, img2_point2f_mat, true);
printMat(result); // Printing the optimal affine transformation 2x3 array

// The following variables correspond to the estimateRigidTransform result as shown here: https://stackoverflow.com/a/29511091/5165833
double a = result.get(0,0)[0];
double b = result.get(0,1)[0];
double d = result.get(1,1)[0];
double c = result.get(1,0)[0];

// Solving for scale as shown in the link above
double scale_x = Math.signum(a) * Math.sqrt( (a*a) + (b*b) );
double scale_y = Math.signum(d) * Math.sqrt( (c*c) + (d*d) );

System.out.println("a = "+a);
System.out.println("b = "+b);
System.out.println("scale_x = "+scale_x);
System.out.println("scale_y = "+scale_y);

public static void printMat(Mat m) {
    for (int x = 0; x < m.height(); x++) {
        for (int y = 0; y < m.width(); y++) {
            System.out.printf("%f ", m.get(x, y)[0]);
        }
        System.out.printf("%n");
    }
}
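The scale recovery at the end can be sanity-checked in a few lines of Python: build a scale-times-rotation matrix, then apply the same copysign/hypot formulas. Note this assumes the linear part factors as scale applied after rotation (M = S·R); for a general affine with shear the two "scales" are only approximations:

```python
import math

def scales_from_affine(a, b, c, d):
    # Same formulas as the Java code above:
    # sign(a) * sqrt(a^2 + b^2) and sign(d) * sqrt(c^2 + d^2)
    sx = math.copysign(math.hypot(a, b), a)
    sy = math.copysign(math.hypot(c, d), d)
    return sx, sy

# Scale-after-rotation: M = S @ R, so the row norms give the scales back
theta = math.radians(30.0)
sx_true, sy_true = 2.0, 0.5
a, b = sx_true * math.cos(theta), -sx_true * math.sin(theta)
c, d = sy_true * math.sin(theta), sy_true * math.cos(theta)
sx, sy = scales_from_affine(a, b, c, d)
print(sx, sy)  # 2.0, 0.5 up to rounding
```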

edit flag offensive delete link more
0

answered 2015-12-02 08:29:05 -0600

I have the same issue; it seems that OpenCV's estimateRigidTransform tries to parse both inputs as images, when they are in fact two point sets.

I did not find a solution for this yet. We have to somehow tell it to interpret the inputs as point vectors, as the underlying C++ API does via std::vector<cv::Point2f>. :/

EDIT: This solved my problem:

Option 1) To let OpenCV know that the provided data structures are not images, simply transpose both input vectors. For example, given two point vectors of 100 points each, transpose them from 100x2 to 2x100.

Option 2) Update your OpenCV version. This issue occurred for me in 2.4.11, but after updating to 3.0.0 it was resolved.
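If the binding keeps rejecting the inputs, note that the fullAffine=True case is, at heart, a least-squares fit you can run directly in NumPy. A sketch under that assumption (it ignores the outlier rejection OpenCV layers on top, and `fit_affine` is a name I made up):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine mapping src -> dst.

    src, dst: (N, 2) arrays, N >= 3. Solves for M such that
    dst ≈ src @ M[:, :2].T + M[:, 2].
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    # Homogeneous design matrix: each row is [x, y, 1]
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M.T ≈ dst, then transpose the 3x2 result to 2x3
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

# Round-trip check with a known affine and synthetic points
true_M = np.array([[1.2, -0.3, 50000.0],
                   [0.3,  1.2,   300.0]])
pts1 = np.random.default_rng(0).uniform(0.0, 9000.0, (12, 2))
pts2 = pts1 @ true_M[:, :2].T + true_M[:, 2]
print(fit_affine(pts1, pts2))  # recovers true_M (no noise here)
```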

