-
As in lines 68-73 of models.py:
```
# Graph Learning loss
D = tf.matrix_diag(tf.ones(self.placeholders['num_nodes']))*-1
D = tf.sparse_add(D, self.S)*-1
D = tf.matmul(tf.transpose(self.x), D)
se…
-
Hi,
thanks for the great project and the easy calibration of lidar and camera.
As a nice addition to the project itself, I would like to request a ROS 2 node for the implementation of fusion/overla…
-
Hi,
This is probably a very amateur question, but is it possible to get the intrinsic matrix of the agent camera, or can I obtain a point cloud from an RGB-D image captured from ai2thor?
I tried https://…
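For what it's worth, if the simulator only exposes a vertical field of view and an image size, a pinhole intrinsic matrix can be derived from those and the depth image back-projected. A hedged numpy sketch; the function names and the FOV-to-focal-length assumption are mine, not part of the ai2thor API:

```python
import numpy as np

def intrinsics_from_fov(width, height, vfov_deg):
    """Pinhole intrinsics from image size and vertical FOV.
    Assumes square pixels and a centred principal point."""
    fy = (height / 2) / np.tan(np.radians(vfov_deg) / 2)
    fx = fy  # square-pixel assumption
    return np.array([[fx, 0.0, width / 2],
                     [0.0, fy, height / 2],
                     [0.0, 0.0, 1.0]])

def depth_to_pointcloud(depth, K):
    """Back-project an (H, W) depth image in metres to an (H*W, 3) cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Whether the simulator's depth frame is metric and where its image origin sits would still need to be checked against the docs.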
-
Currently the drawing is tied directly to the input projection matrix: if you input an orthographic projection, the widget drawing is often tiny.
Have you tried it when you input an orthographic projec…
-
Hi,
I would like to get the 2D coordinates of an object in a certain camera frame. So far I know that simGetObjectPose() can return the 3D pose of the object in the world frame, and simGetCa…
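The usual recipe is: world point to camera frame via the extrinsic, then to pixels via the intrinsic. A generic pinhole sketch in numpy; the function names are mine, and the simulator's axis conventions (handedness, axis directions) may differ and should be checked:

```python
import numpy as np

def world_to_pixel(p_world, T_cam_world, K):
    """Project a 3D world point to pixel coordinates.

    T_cam_world: 4x4 extrinsic (world -> camera), K: 3x3 intrinsics.
    """
    p_cam = T_cam_world @ np.append(p_world, 1.0)  # world -> camera frame
    uvw = K @ p_cam[:3]                            # camera -> image plane
    return uvw[:2] / uvw[2]                        # perspective divide
```

With the camera pose returned by the simulator, T_cam_world would be the inverse of the camera-to-world transform.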
-
I cannot understand, in the paragraph "Time to work in full 3D", what rz is in the matrix. Is it somehow connected with "retro-projection", which I also do not understand? I started from lesson 4 because I started…
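For context, in this kind of tutorial rz usually denotes rotation about the z axis; if that reading is right, the matrix is the standard one below. This is a guess at the author's notation, not a quote from the lesson:

```python
import numpy as np

def rotation_z(theta):
    """4x4 homogeneous rotation about the z axis (one common reading of 'rz')."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
```

For example, rotating the point (1, 0, 0) by 90 degrees about z carries it onto the y axis.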
-
I know we all hate maths, but we need it.
For the view matrix, projection matrix, etc.,
LWJGL brings some functions with it, but we need some custom stuff.
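One typical piece of that "custom stuff" is a look-at view matrix. A numpy sketch for illustration only; JOML (commonly used with LWJGL) ships its own `lookAt`, so this just shows the math behind it:

```python
import numpy as np

def look_at(eye, target, up):
    """Right-handed look-at view matrix (camera looks down -z)."""
    f = target - eye
    f = f / np.linalg.norm(f)                     # forward
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)                     # right
    u = np.cross(r, f)                            # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f       # rotation rows
    m[:3, 3] = -m[:3, :3] @ eye                   # translate eye to origin
    return m
```

A quick check: a camera at (0, 0, 5) looking at the origin maps the eye to the origin and the origin to (0, 0, -5), i.e. five units in front of the camera.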
-
When using perspective with an off-axis projection, `mat4.frustum` is used instead of `mat4.perspective`. In v3.4.1 there's a `mat4.perspectiveZO` method for WebGPU, but no `mat4.frustumZO` counterpart. So web apps…
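Until the library grows such a method, the off-axis matrix is small enough to build by hand. A numpy sketch of the classic GL-style frustum with [-1, 1] clip depth; for WebGPU's [0, 1] depth range the two z-row entries would need the ZO variants. This is my adaptation for illustration, not gl-matrix code:

```python
import numpy as np

def frustum(left, right, bottom, top, near, far):
    """Off-axis perspective matrix, OpenGL [-1, 1] depth convention."""
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal off-axis shift
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical off-axis shift
    m[2, 2] = -(far + near) / (far - near)       # ZO variant: far/(near - far)
    m[2, 3] = -2 * far * near / (far - near)     # ZO variant: far*near/(near - far)
    m[3, 2] = -1.0
    return m
```

Points on the near plane land at NDC depth -1 and points on the far plane at +1, which is a handy unit check before porting the z row to the [0, 1] convention.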
-
In `model`, we are asked to return the transformation matrix (expected pieces include `translate` and `rotate_about_y`), but the values needed (theta, the translation amount, the scaling factor) are in model_view_p…
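Assuming those values do get wired through (the names and numbers below are illustrative, not the assignment's), composing the matrix looks like this; applied right-to-left, the point is scaled first, then rotated, then translated:

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotate_about_y(theta):
    """4x4 homogeneous rotation about the y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def scale(s):
    """4x4 uniform scale."""
    m = np.eye(4)
    m[:3, :3] *= s
    return m

# Hypothetical parameter values, standing in for whatever
# model_view_p… actually supplies.
model = translate(1.0, 0.0, 0.0) @ rotate_about_y(np.pi / 2) @ scale(2.0)
```

Reversing the multiplication order changes the result, so it is worth checking which order the grader expects.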
-
Hello, I ran into some problems when following the instructions to view the training process. Specifically, I am using the "yufeng" dataset, and my training command is `python train_splatting_avatar.py…