Here's the scenario: I've taken a photo for further analysis. The photo contains a sheet of paper, and first of all I'm trying to detect the corners of that sheet. Once I've got them I want to stretch/transform the image so that those corners fit the corners of a new Mat (as if I had scanned the page).
Reading the documentation of the functions mentioned above, I'm not quite sure which one is right for my needs. getAffineTransform seems to take only three point pairs (which works quite well, but leaves the lower right corner untouched).
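A trimmed-down sketch of that affine attempt, just to make it concrete (the corner coordinates, file names and output size are placeholders; in my real code the source corners come from the detection step):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat input = imread("photo.jpg");

    // Three of the detected sheet corners: top left, top right, bottom left
    // (placeholder coordinates).
    Point2f srcTri[3] = { Point2f(150, 80), Point2f(860, 120), Point2f(100, 1150) };

    // The matching corners of the new image; the fourth (lower right)
    // corner is not part of the mapping at all.
    int width = 595, height = 842;   // size of the new image (placeholder values)
    Point2f dstTri[3] = { Point2f(0, 0), Point2f(width, 0), Point2f(0, height) };

    Mat affine = getAffineTransform(srcTri, dstTri);   // 2x3 affine matrix
    Mat output;
    warpAffine(input, output, affine, Size(width, height));
    imwrite("affine.jpg", output);
    return 0;
}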
getPerspectiveTransform, on the other hand, takes four point pairs, and findHomography can take even more, right? So I guess one of those two is what I should go for. So far I haven't managed to get it working, though. I declare vector<Point2f> sourcePoints, destinationPoints; and fill them with the detected corners and my calculated new points (which are basically [width, 0], [0, 0], [0, height] and [width, height] of the new Mat). After creating the two vectors I create the transformation matrix using either getPerspectiveTransform or findHomography and finally pass it over to warpPerspective. That last step is the one that crashes my application with:
OpenCV Error: Assertion failed (dims == 2 && (size[0] == 1 || size[1] == 1 || size[0]*size[1] == 0)) in create, file /Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/matrix.cpp, line 1310
libc++abi.dylib: terminate called throwing an exception
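Stripped down to the essentials, the perspective version looks roughly like this (again with placeholder coordinates, file names and output size; the real source points come from the corner detection):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat input = imread("photo.jpg");             // the photo containing the sheet

    // Corners of the sheet as found by the detection step
    // (hard-coded here only to keep the sketch self-contained).
    vector<Point2f> sourcePoints;
    sourcePoints.push_back(Point2f(860, 120));   // should map to [width, 0]
    sourcePoints.push_back(Point2f(150, 80));    // should map to [0, 0]
    sourcePoints.push_back(Point2f(100, 1150));  // should map to [0, height]
    sourcePoints.push_back(Point2f(900, 1200));  // should map to [width, height]

    // Corners of the new Mat, in the same order as sourcePoints.
    int width = 595, height = 842;               // size of the new Mat (placeholder values)
    vector<Point2f> destinationPoints;
    destinationPoints.push_back(Point2f(width, 0));
    destinationPoints.push_back(Point2f(0, 0));
    destinationPoints.push_back(Point2f(0, height));
    destinationPoints.push_back(Point2f(width, height));

    // 3x3 matrix mapping the source corners onto the destination corners.
    Mat transform = getPerspectiveTransform(sourcePoints, destinationPoints);
    // Alternative I also tried:
    // Mat transform = findHomography(sourcePoints, destinationPoints);

    // Warp the photo into the new Mat; this is the step that crashes for me.
    Mat output;
    warpPerspective(input, output, transform, Size(width, height));

    imwrite("scanned.jpg", output);
    return 0;
}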
Since I'm not sure whether the approach I'm trying is even the right one, I would love to hear your opinion on it before I try to fix the error.
Thanks a lot!
–f