Hi! I have been trying to get the rasterizer to work using the rasterizer_rasterize() function. However, the tensors it returns are all inf or -1s. Any idea why this is the case?
We unfortunately do not provide support for experimental / non-stable features. Nevertheless, I would suggest crafting a simple example (one triangle) and manually placing the vertex positions (with positive depth) such that they fall on pre-defined pixels; note that right now only the orthographic camera is supported in this experimental version, but this should simplify manually crafting toy examples.
import numpy as np
from tensorflow_graphics.rendering.differentiable_renderer import rasterizer as dr

H, W = 64, 64
result = np.zeros([H, W, 5])
vertices = np.array([[5, 0, 0], [0, 5, 0], [0, 0, 5]])
triangle_lst = np.array([[0, 1, 2]])
res = dr.rasterizer_rasterize(vertices=vertices, triangles=triangle_lst, image_width=64, image_height=64, name="rasterize")
This is what I have right now. If I understand correctly, res contains a tensor with 5 channels from which I could extract images?
Note that we are in the process of rewriting this first implementation, as we are not satisfied with it.
Any ETA?
You should be able to use the current one for now; updating to the new one should not require many changes. We'll send an update once submitted.
Best.
I confirm I have the same issue: I get -1 everywhere. I coded a while loop to try random triangle vertex positions until the barycentric-coordinates map has at least one non-(-1) value, and it loops forever:
import numpy as np
import tensorflow as tf
from tensorflow_graphics.rendering.differentiable_renderer.rasterizer import rasterizer_rasterize

def toTensor(arr):
    return tf.convert_to_tensor(np.float32(arr), np.float32)

def toTensorInt(arr):
    return tf.convert_to_tensor(np.int32(arr), np.int32)

has_non_zero = False
while not has_non_zero:
    cube_vertices = np.random.random((1, 3, 3))  # batch of 1 mesh with 3 random vertices
    cube_vertices = toTensor(cube_vertices)
    cube_faces = [[0, 1, 2]]
    cube_faces = toTensorInt([cube_faces])
    result = rasterizer_rasterize(cube_vertices, cube_faces, 400, 400, 5)
    depth_maps = result[0]
    triangle_ind_maps = result[1]
    barycentric_coord_maps = result[2]
    # background pixels hold -1, so shift by 1 and look for non-zero entries
    non_zero_check = np.nonzero((barycentric_coord_maps[0].numpy()) + 1.0)
    has_non_zero = (len(non_zero_check[0]) != 0 or len(non_zero_check[1]) != 0)
    if has_non_zero:
        print(cube_vertices)
        print(non_zero_check)
Hi @francoisruty! Were you using their newly pushed code? It seems they pushed new code to the repo several days ago.
@frankhome61 yeah I used the code from this page: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/rendering/differentiable_renderer/rasterizer.py just this afternoon so I indeed used the most up-to-date version
Regarding @frankhome61's example: the issue is that the winding order of your vertices is not correct.
I pretty much took your code, but changed the vertex order in triangle_lst from [0, 1, 2] to [0, 2, 1]:
H, W = 64, 64
result = np.zeros([H, W, 5])
vertices = np.array([[5.0, 0, 0], [0, 5.0, 0], [0, 0, 5.0]])
triangle_lst = np.array([[0, 2, 1]])
rendered_depth_map, triangle_ind_map, bc_map = rasterizer.rasterizer_rasterize(vertices=vertices, triangles=triangle_lst, image_width=64, image_height=64, name="rasterize")
img = plt.imshow(rendered_depth_map)
this gives the following image: https://storage.googleapis.com/tensorflow-graphics/git/user_support/23_raster_triangle.png
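For reference, a quick way to check the winding of a triangle in screen space is the sign of its 2D signed area; a minimal numpy sketch (the sign convention the rasterizer expects is an assumption here, only the flip between the two orders is the point):

import numpy as np

def signed_area_2d(v0, v1, v2):
    # Twice the signed area of the screen-space triangle; z is ignored,
    # which matches an orthographic projection.
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])

vertices = np.array([[5.0, 0, 0], [0, 5.0, 0], [0, 0, 5.0]])
print(signed_area_2d(*vertices[[0, 1, 2]]))  # 25.0  -> one winding
print(signed_area_2d(*vertices[[0, 2, 1]]))  # -25.0 -> the opposite winding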
@francoisruty, I am happy to debug specific examples (i.e., no randomness) if you have any, but I suspect that you have a similar issue as above.
@julienvalentin OK, before I try to find a specific example, do you confirm that the convention for camera axis orientation is -Oz?
@francoisruty positive Z is from the optical centre to the image plane, in other words, if an object is potentially visible, its Z component has to be positive.
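In code terms, the convention boils down to this trivial sketch:

import numpy as np

points = np.array([[0.0, 0.0, 5.0],    # in front of the camera: potentially visible
                   [0.0, 0.0, -5.0]])  # behind the camera: never visible

potentially_visible = points[:, 2] > 0
print(potentially_visible)  # [ True False]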
OK, thanks
cube_vertices = np.array([[[1.0, 1.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]])  # batch of 1 mesh, 3 vertices
cube_faces = np.array([[[2, 1, 0]]])  # batch of 1 mesh, 1 triangle
result = rasterizer_rasterize(cube_vertices, cube_faces, 400, 400, 5)
depth_maps = result[0]
triangle_ind_maps = result[1]
barycentric_coord_maps = result[2]
barycentric_coords_map = np.clip(barycentric_coord_maps[0].numpy() * 255, 0, 255)
image_write("/app/test_tfg.png", barycentric_coords_map)  # image_write is a local helper that saves an image to disk
This yields a black image (the same if I reverse the triangle vertex order). Unless I'm mistaken, if the camera is at the origin looking along Oz, there should be something visible, shouldn't there?
Besides, I've tested the code you posted above; it works, but I don't understand why. rasterizer_rasterize is supposed to take a batch of meshes as input; don't your example inputs lack one dimension?
One other thing,
H, W = 400, 400
vertices1 = np.array([[99.0, 0, 0], [0, 99.0, 0], [0, 0, 39.0]])  # differs from vertices2 only in the z of the third vertex
vertices2 = np.array([[99.0, 0, 0], [0, 99.0, 0], [0, 0, 19.0]])
triangle_lst = np.array([[0, 2, 1]])
rendered_depth_map, triangle_ind_map, bc_map = rasterizer_rasterize(vertices=vertices1, triangles=triangle_lst, image_width=W, image_height=H, name="rasterize")
output = np.clip(bc_map.numpy() * 255, 0, 255)
rendered_depth_map2, triangle_ind_map2, bc_map2 = rasterizer_rasterize(vertices=vertices2, triangles=triangle_lst, image_width=W, image_height=H, name="rasterize")
output2 = np.clip(bc_map2.numpy() * 255, 0, 255)
np.array_equal(output, output2)
This gives me True; I assume it's because it's not using a perspective camera?
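(That would be consistent with an orthographic projection simply dropping z when computing the screen position, so z only feeds the depth map; a minimal sketch of what I mean, not the library's actual code:)

import numpy as np

def orthographic_project(points):
    # Screen position is just (x, y); two meshes differing only in z
    # produce identical coverage and barycentric maps.
    return points[..., :2]

v1 = np.array([[99.0, 0, 0], [0, 99.0, 0], [0, 0, 39.0]])
v2 = np.array([[99.0, 0, 0], [0, 99.0, 0], [0, 0, 19.0]])
print(np.array_equal(orthographic_project(v1), orthographic_project(v2)))  # True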
However, if I try your code sample with a perspective projection, I get a black image:
def project(point3d):
    return tfg.rendering.camera.perspective.project(point3d, principal_point=[[0.0, 0.0]], focal=[[0.028, 0.028]])

H, W = 400, 400
vertices = np.array([[5.0, 0, 0], [0, 5.0, 0], [0, 0, 5.0]])
triangle_lst = np.array([[0, 2, 1]])
rendered_depth_map, triangle_ind_map, bc_map = rasterizer_rasterize(vertices=vertices, triangles=triangle_lst, image_width=W, image_height=H, project_function=project, name="rasterize")
output = np.clip(bc_map.numpy() * 255, 0, 255)
image_write("/app/test_tfg.png", output)
Btw, I'm not sure I understand why the focal length must have 2 dimensions; it's a distance, not a point, generally (I used 28mm, which is a classic focal length).
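(My best guess is that the two components are fx and fy, the focal length expressed in pixels along each image axis, which differ when pixels are not square; a sketch of the standard pinhole model I have in mind, not necessarily the library's exact implementation:)

import numpy as np

def pinhole_project(point3d, focal, principal_point):
    # Standard pinhole model: the focal length appears once per image
    # axis (fx, fy), in pixel units, hence the two components.
    fx, fy = focal
    cx, cy = principal_point
    x, y, z = point3d
    return np.array([fx * x / z + cx, fy * y / z + cy])

# 28mm lens with 10um square pixels -> fx = fy = 2800 pixels.
print(pinhole_project([5.0, 0.0, 5.0], focal=(2800.0, 2800.0),
                      principal_point=(200.0, 200.0)))

If that reading is right, a focal of 0.028 expressed in those units would squash the whole triangle into a sub-pixel region around the principal point, which could explain the black image (an assumption on my part, not verified).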
Apologies if some of these are noob questions; I have an OpenGL ES background, so I'm adapting to these new conventions.
thanks!
Another noob question: in this orthographic-projection context, I see that if a vertex has coords (x, y, z), its screen position is (x, y). So far nothing weird; that's orthographic projection. But I see that the screen space is not a traditional clip space, meaning it's not normalized: if I choose W, H = 400, then the screen coordinates of vertices can go from -200 to 200. Shouldn't it be normalized? Usually after projection the vertex coords are in clip space, and I've never seen a clip space not using [-1, 1] along all its dimensions, but I may be mistaken.
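(In case it helps anyone else, converting these screen coordinates to the usual [-1, 1] range is a one-liner; a sketch assuming coordinates in [-W/2, W/2] x [-H/2, H/2] as described above:)

import numpy as np

def to_ndc(screen_xy, width, height):
    # Map [-W/2, W/2] x [-H/2, H/2] to the usual [-1, 1] clip-space range.
    return screen_xy / (np.array([width, height]) / 2.0)

print(to_ndc(np.array([200.0, -200.0]), 400, 400))  # [ 1. -1.]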
I can verify that after making the changes that @julienvalentin suggested, the rasterizer is still outputting inf
nb: this is @julienvalentin; currently having issues with my other GitHub account.
@franknod by default the maximum depth is set to infinity, so this is the expected value when no triangle is 'visible' by any given pixel.
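A quick way to isolate the covered pixels is therefore to mask out the infinite depths; a minimal sketch:

import numpy as np

# Hypothetical 2x2 depth map: inf marks pixels no triangle covers.
depth_map = np.array([[np.inf, 3.0],
                      [np.inf, np.inf]])
covered = np.isfinite(depth_map)
print(covered.sum(), "covered pixel(s)")  # 1 covered pixel(s)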
Can you generate an image of the result and confirm that it matches what I sent?
Hi @julienvalentin, the image looks like this: [image attached]
Seems like it is working! Meanwhile, could the code import a mesh with textures and rasterize it?
@franknod the renderer can rasterize the mesh and provide depth and barycentric coordinates. You can then perform deferred rendering and apply pretty much anything you wish (interpolation of vertex colors, lighting, etc.). Right now we leave it to the user to apply these 'effects'. Happy to assist more if needed.
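For instance, interpolating per-vertex colors from the rasterizer outputs could look like the following sketch (the shapes are assumptions based on the snippets in this thread: bc_map of shape (H, W, 3) and triangle_ind_map of shape (H, W), with -1 marking background):

import numpy as np

def shade_vertex_colors(bc_map, triangle_ind_map, triangles, vertex_colors):
    # bc_map: (H, W, 3) barycentric coords; triangle_ind_map: (H, W) with
    # -1 for background; triangles: (T, 3) vertex indices;
    # vertex_colors: (V, 3) RGB per vertex.
    h, w = triangle_ind_map.shape
    image = np.zeros((h, w, 3))
    mask = triangle_ind_map >= 0
    tri_idx = triangle_ind_map[mask]                    # (N,)
    corner_colors = vertex_colors[triangles[tri_idx]]   # (N, 3 corners, 3 RGB)
    weights = bc_map[mask][..., np.newaxis]              # (N, 3, 1)
    image[mask] = (weights * corner_colors).sum(axis=1)  # blend the 3 corners
    return image

# Tiny demo: a (1, 2) image whose right pixel hits triangle 0.
triangles = np.array([[0, 1, 2]])
vertex_colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
tri_map = np.array([[-1, 0]])
bc = np.array([[[0, 0, 0], [0.2, 0.3, 0.5]]])
print(shade_vertex_colors(bc, tri_map, triangles, vertex_colors))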
@julienvalentin I am thinking of working with .obj files using TF Graphics; do you have any suggestions on how I should go about it?
@franknod the two topics are independent; you can use any .obj parsing lib and then send the data to TFG in the format it needs.
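For example, with the trimesh package (one option among many; model.obj is a placeholder path, and the batch dimension below is an assumption based on the earlier snippets in this thread):

import numpy as np
import trimesh

mesh = trimesh.load("model.obj")            # assuming a single-mesh .obj
vertices = np.float32(mesh.vertices)[None]  # add a batch dimension: (1, V, 3)
faces = np.int32(mesh.faces)[None]          # (1, T, 3)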
Closing this thread.
Side note: we are currently working towards an efficient GPU implementation based on OpenGL.