Dear @JarnoRalli ,
Thank you for sharing your great work, and thanks in advance for your time.

Given the camera intrinsics (the `K` matrix), I am trying to obtain the extrinsics (rotation matrix and translation vector) in order to form a camera projection matrix. I have tried your library, but without success so far.
Here is the procedure:
- Take 4 corresponding point pairs
- Estimate the homography
- Extract `R` and `t` from it
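For completeness, the homography step I have in mind is the standard DLT; here is a generic numpy sketch of it (my own sketch, not necessarily what `homog.dlt_homography` does internally):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (up to scale) such that dst ~ H @ src, from >= 4
    correspondences given as 3xN homogeneous points (generic DLT)."""
    rows = []
    for i in range(src.shape[1]):
        x, y, w = src[:, i]
        u, v, s = dst[:, i]
        # Two independent rows of the DLT constraint per correspondence
        rows.append([0, 0, 0, -s * x, -s * y, -s * w, v * x, v * y, v * w])
        rows.append([s * x, s * y, s * w, 0, 0, 0, -u * x, -u * y, -u * w])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return Vt[-1].reshape(3, 3)

# Quick check: four point pairs related by a known homography
src = np.array([[0., 1., 0., 1.],
                [0., 0., 1., 1.],
                [1., 1., 1., 1.]])
H_true = np.array([[1.0, 0.1, 0.2],
                   [0.0, 1.1, 0.3],
                   [0.01, 0.02, 1.0]])
dst = H_true @ src
H = dlt_homography(src, dst)
print(np.allclose(H / H[2, 2], H_true))  # True
```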
If I use `forwardprojectK` to obtain `uv_`, this works: the obtained `uv_` is close to the predefined `uv`. However, if I use `forwardprojectP` to obtain `uv_`, it does not work: the obtained `uv_` is very different from the predefined `uv`. I do not understand why the latter method fails; the `points3D_cam` values produced by the two methods are also very different:
```python
points3D_cam = R @ points3D_obj + t

R = np.transpose(R)
world2cam = np.hstack((R, np.dot(-R, t).reshape(3, -1)))
points3D_h = np.vstack([points3D_h, np.ones([1, 4], dtype=np.float32)])
points3D_cam = world2cam @ points3D_h
```
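For what it is worth, I checked with plain numpy (hypothetical `R`, `t`, `X`, just to illustrate) that the two expressions are inverses of each other when `R` is a rotation:

```python
import numpy as np

# Hypothetical rotation (about z) and translation, just for the check
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([[1.0], [2.0], [3.0]])
X = np.array([[0.5], [-1.0], [2.0]])  # one 3D point

# First expression: transform the point into the camera frame
X_cam = R @ X + t

# Second expression: world2cam built from the transposed/negated pose
Rt = R.T
world2cam = np.hstack((Rt, -Rt @ t))

# Applying world2cam to the homogeneous X_cam recovers X,
# i.e. world2cam undoes R @ X + t rather than reproducing it
X_back = world2cam @ np.vstack([X_cam, [[1.0]]])
print(np.allclose(X_back, X))  # True
```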
Here is the full source for reproducing the issue:
```python
import numpy as np
from python_camera_library import rectilinear_camera as cam
from python_camera_library import homography as homog
from python_camera_library import utils

K = np.array([[9.32545185e+02, 0.00000000e+00, 1.10520066e+03],
              [0.00000000e+00, 9.32545185e+02, 4.84232068e+02],
              [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]], dtype=np.float32)

# Predefined image points (homogeneous)
uv = np.array([[432, 1167, 550, 1524],
               [593, 495, 873, 571],
               [1, 1, 1, 1]], dtype=np.float32)

# Planar object points, in mm (z = 0 on the plane)
points3D_obj = np.array([[0, 15240, 0, 15240],
                         [0, 0, 8530, 8530],
                         [0, 0, 0, 0]], dtype=np.float32)

# Homogeneous 2D coordinates of the planar points (z row replaced by ones)
points3D_h = np.copy(points3D_obj)
points3D_h[-1, :] = np.ones([1, 4])

H = homog.dlt_homography(points3D_h, uv)
(R, t) = homog.extract_transformation(K, H)
scale = homog.transformation_scale(K, R, t, uv[:, 1], points3D_obj[:, 1])
t *= scale

# Method 1: transform into the camera frame, then project with K
points3D_cam = R @ points3D_obj + t
(pts3D, uv_, depth) = cam.forwardprojectK(points3D_cam, K, (6000, 6000))
print("points3D_cam = R @ points3D_obj + t")
print(points3D_cam)
print()
print("(pts3D, uv_, depth) = cam.forwardprojectK(points3D_cam, K, (6000, 6000))")
print(uv_)

# Method 2: build a world-to-camera matrix from the inverted pose
R = np.transpose(R)
world2cam = np.hstack((R, np.dot(-R, t).reshape(3, -1)))
points3D_h = np.vstack([points3D_h, np.ones([1, 4], dtype=np.float32)])
points3D_cam = world2cam @ points3D_h
print()
print("points3D_cam = world2cam @ points3D_h")
print(points3D_cam)

# Project with the full projection matrix P = K @ world2cam
# (assigned to uv_ here, not uv, so the print below shows this result
# instead of a stale value and the ground-truth uv is not overwritten)
P = np.dot(K, world2cam)
_, uv_, _ = cam.forwardprojectP(points3D_obj, P, (6000, 6000))
print()
print("_, uv_, _ = cam.forwardprojectP(points3D_obj, P, (6000, 6000))")
print(uv_)
```
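I also verified, independently of the library, that projecting via `K @ (R @ X + t)` and via `P = K [R | t]` gives the same pixel when the same `(R, t)` is used in both (hypothetical `K`, `R`, `t`, `X` below, just for the comparison):

```python
import numpy as np

# Hypothetical intrinsics and pose, just to compare the two projection paths
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([[0.1], [0.2], [5.0]])
X = np.array([[1.0], [2.0], [0.0]])  # a world point

# Path 1: transform to the camera frame, then apply K (forwardprojectK-style)
uv1 = K @ (R @ X + t)
uv1 /= uv1[2]

# Path 2: a single projection matrix P = K [R | t] (forwardprojectP-style)
P = K @ np.hstack((R, t))
uv2 = P @ np.vstack([X, [[1.0]]])
uv2 /= uv2[2]

print(np.allclose(uv1, uv2))  # True: the paths agree when P is built
                              # from the same (R, t), without inverting them
```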