More accuracy from getAffineTransform()?

Hi all,

getAffineTransform and invertAffineTransform output the transformation matrix with dtype='float64'. Is there any way to get more accurate output, say dtype='float128'? I may need more accuracy in my application.
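As far as I know cv2.getAffineTransform is fixed to float64, so one workaround might be to solve the same 3-point linear system directly in plain numpy at extended precision. A sketch, with the caveats that np.longdouble is usually 80-bit extended precision on x86 (not a true quad float128), and that np.linalg.solve does not accept longdouble, so the 3x3 elimination is done by hand:

```python
import numpy as np

def solve3_longdouble(A, b):
    # Gaussian elimination with partial pivoting, written out by hand
    # because np.linalg.solve does not support longdouble input.
    A = np.array(A, dtype=np.longdouble)
    b = np.array(b, dtype=np.longdouble)
    for k in range(3):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # pivot row
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, 3):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros_like(b)
    for i in range(2, -1, -1):                     # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def get_affine_longdouble(src_xy, dst_gps):
    # Same 3-point system getAffineTransform solves: [x, y, 1] @ P = gps
    src = np.asarray(src_xy, dtype=np.longdouble)
    A = np.hstack([src, np.ones((3, 1), dtype=np.longdouble)])
    return solve3_longdouble(A, dst_gps).T         # 2x3 affine matrix
```

The function names here are just my own; the point is only that once the matrix is built in numpy instead of cv2, any available dtype can be used for both the forward matrix and its inverse.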

In my application, I choose three points with known gps locations and their xy-points in the image to compute the matrix, and I compute the inverse_matrix too. Then, using the same three points, I feed the xy values through the inverse_matrix to convert them back to gps locations. The largest error between the computed and measured gps locations is about 3.04283202e-06 degrees, which is not bad (about 30 cm).

import numpy as np

# inv_M is the 2x3 matrix from invertAffineTransform
M2 = np.array([51, 788, 1.0]).reshape(3, 1)   # homogeneous image point [x, y, 1]
result = np.matmul(inv_M, M2)                 # p = inv_M * p'
# compare to the measured gps location
diff = result - np.array([23.90368083, 121.53650361]).reshape(2, 1)
print(diff)
[[-2.86084081e-08]
 [ 3.04283202e-06]]

But for other test points, the errors are too large. For example, take the bottom point (gps 23.90377194, 121.53645972). The error is 0.00021149 degrees in longitude, which is far too much (about 21 meters).

M2 = np.array([910, 958, 1.0]).reshape(3, 1)  # homogeneous image point [x, y, 1]
result = np.matmul(inv_M, M2)                 # p = inv_M * p'
# compare to the measured gps location
diff = result - np.array([23.90377194, 121.53645972]).reshape(2, 1)
print(diff)
[[0.00015057]
 [0.00021149]]

Here is my ipynb: link text

original image // you can use the mouse to get the xy values of the feature points. link text

feature points with gps locations // note that this is a resized diagram, so the xy values are meaningless. link text The gps locations were provided by the vendor, who claims their errors should be <= 30 cm.

To check whether the gps locations are trustworthy, I plotted them in ROS rviz and compared their relative positions against the labeled image. After that check, I think the gps locations are trustworthy.

Here is the png for checking the gps locations: link text

Any idea?