I think you are confused about the projection. In this part of your code you define a plane: https://github.com/D1vt/DLT3D/blob/b926ce377cc0f41e07d084d35661103ae59bb6b9/Python/Shpere#L12-L21 and then you project the 3D points onto that plane. From a mathematical point of view that is possible, but we are working with a camera and with images captured by a camera, so what you have to simulate is a camera, not a generic plane. What you have to do is project the 3D points using the K and [R|t] matrices of a camera. I showed you an example of that in my GitHub repository; here is where I create the projection matrix from the K and [R|t] matrices: https://github.com/raultron/ivs_sim/blob/95dc017ef2aec32173e73dc397ba00177d4f92ce/python/vision/camera.py#L53-L57
where K is defined like this: https://github.com/raultron/ivs_sim/blob/95dc017ef2aec32173e73dc397ba00177d4f92ce/python/vision/camera.py#L67-L69
and [R|t] describes the rotation and translation of the camera coordinate system with respect to the world coordinate system.
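To make the composition concrete, here is a minimal sketch of building a 3x4 projection matrix P = K [R|t] with NumPy. The intrinsic values and the identity rotation are illustrative placeholders, not values taken from the repository:

```python
import numpy as np

# Intrinsic matrix K (fx, fy, cx, cy are illustrative values)
fx, fy, cx, cy = 800.0, 800.0, 640.0, 480.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R (world -> camera) and translation t
R = np.eye(3)                        # identity rotation, just for the sketch
t = np.array([[0.0], [0.0], [0.5]])  # camera shifted 0.5 m along z

# 3x4 projection matrix P = K [R | t]
P = K @ np.hstack((R, t))
```

Any 3D world point in homogeneous coordinates can then be projected by multiplying it with P.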
In another function I project 3D world points into an image using the projection matrix; there I normalize the homogeneous coordinates and can introduce the quantization error as well: https://github.com/raultron/ivs_sim/blob/95dc017ef2aec32173e73dc397ba00177d4f92ce/python/vision/camera.py#L140-L148
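The projection step can be sketched as follows. Note this is a simplified stand-in, not the code from camera.py: the function name is hypothetical, and modelling quantization as plain rounding to integer pixels is an assumption:

```python
import numpy as np

def project_points(P, X_world, quantize=False):
    """Project 4xN homogeneous world points with a 3x4 matrix P.

    Quantization here is plain rounding to integer pixels -- an
    assumption, not necessarily the exact scheme used in camera.py.
    """
    x = P @ X_world            # 3xN homogeneous image points
    x = x / x[2]               # normalize so the last row becomes 1
    if quantize:
        x[:2] = np.round(x[:2])  # snap u, v to the pixel grid
    return x

# One world point 0.5 m in front of a canonical camera (P = K [I | 0])
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 480.0],
              [0.0, 0.0, 1.0]])
P = np.hstack((K, np.zeros((3, 1))))
X = np.array([[0.1], [0.0], [0.5], [1.0]])
uv = project_points(X_world=X, P=P)
```

The division by the third homogeneous coordinate is what turns the projective result into pixel coordinates (u, v).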
The file camera.py already contains the definition of the Camera class with the functions I showed you and many more useful ones. You can include this class in your code and use it to project the points.
An example of how to import the class in a Python file, create a camera object, and set its values would look like this (assuming you copy the vision folder into your workspace):
from vision.camera import Camera
import numpy as np

## CREATE A SIMULATED CAMERA
cam = Camera()
cam.set_K(fx=800, fy=800, cx=640, cy=480)
cam.set_width_heigth(960, 960)

## DEFINE THE CAMERA POSE LOOKING STRAIGHT DOWN INTO THE WORLD COORDINATE SYSTEM
## (THIS CREATES A P MATRIX INSIDE THE CAM OBJECT)
cam.set_R_axisAngle(1.0, 0.0, 0.0, np.deg2rad(180.0))
cam.set_t(0.0, -0.0, 0.5, frame='world')

# DEFINITION OF THE OBJECT POINTS (WORLD POINTS) IN THE WORLD COORDINATE SYSTEM.
# IN THIS CASE WE USE 4 POINTS ON A PLANE, BUT YOU WILL USE POINTS IN 3D.
objectPoints = np.array([[0.075, -0.06, 0.06, -0.06],
                         [0.105, 0.105, 0.105, 0.09],
                         [0., 0., 0., 0.],
                         [1., 1., 1., 1.]])

# NOW WE PROJECT THE OBJECT POINTS INTO THE CAMERA IMAGE WITHOUT QUANTIZATION ERROR
imagePoints = np.array(cam.project(objectPoints, False))
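As a sanity check on that pose, here is a standalone sketch of what the Camera object computes, without importing the class. It assumes that frame='world' in set_t means the camera position is given in world coordinates, so the extrinsic translation is t = -R C:

```python
import numpy as np

# Intrinsics matching the example above
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 480.0],
              [0.0, 0.0, 1.0]])

# 180-degree rotation about the world x axis: the camera looks straight down
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])

# Camera centre 0.5 m above the world origin; extrinsic translation t = -R C
# (assumption: frame='world' stores the camera position in world coordinates)
C = np.array([0.0, 0.0, 0.5])
t = (-R @ C).reshape(3, 1)

P = K @ np.hstack((R, t))

# With this pose, the world origin should land on the principal point (640, 480)
x = P @ np.array([0.0, 0.0, 0.0, 1.0])
uv = x[:2] / x[2]
```

If the imagePoints produced by the Camera object disagree wildly with this kind of hand check, the pose is probably not what you intended.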