Closed quyanqiu closed 5 years ago
OBJ files compress the vt coordinates; look at the face section. Faces are in the following format: f v/vt/vn v/vt/vn v/vt/vn, where v, vt, and vn are indices.
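For illustration, a minimal hypothetical OBJ fragment showing this layout — note the position index and texture-coordinate index in each face corner are independent, which is what allows a vertex to carry different UVs on different faces:

```
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1
```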
@shinxg Yes. Because of the multi-edge "seams" effect, one vertex may correspond to multiple texture coordinates; the majority of 3D models use a per-wedge UV map instead of a per-vertex UV map. So my question is how to compress the UV map so that it has the same dimension as the number of vertices, to fit the rasterization operation in DIRT.
For those vertices which have different UV coordinates on different faces, you can add duplicated vertices with the corresponding UV coordinates.
For some reason, I cannot change the vertex count of the PCA model. I wonder if there is any other solution for texture rendering without using vertex colors, because vertex colors are not realistic compared to texture rendering.
@quyanqiu You need to duplicate the vertices when they have different UVs, as @shinxg stated. First load the vertex positions and UVs from the obj file into separate arrays, and also the indices from the faces.
obj_vertices = ...
obj_uvs = ...
obj_vertex_indices = ... - 1
obj_uv_indices = ... - 1
assert len(obj_vertex_indices) == len(obj_uv_indices)
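The loading step above might be sketched as follows — an untested minimal reader, assuming a triangulated OBJ whose faces are all in v/vt/vn form (the function name is my own):

```python
def load_obj_arrays(lines):
    """Minimal OBJ reader for a triangulated mesh with 'f v/vt/vn' faces.
    Returns vertices, uvs, and 0-based index lists (OBJ indices are 1-based)."""
    vertices, uvs, vertex_indices, uv_indices = [], [], [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'v':
            vertices.append([float(x) for x in parts[1:4]])
        elif parts[0] == 'vt':
            uvs.append([float(x) for x in parts[1:3]])
        elif parts[0] == 'f':
            corners = [p.split('/') for p in parts[1:4]]
            vertex_indices.append([int(c[0]) - 1 for c in corners])
            uv_indices.append([int(c[1]) - 1 for c in corners])
    return vertices, uvs, vertex_indices, uv_indices
```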
Then, build a new set of vertices, UVs, and faces, with each vertex/UV used exactly once per face, something like
expanded_vertices = []
expanded_uvs = []
expanded_faces = []
for face_index in range(len(obj_vertex_indices)):
    expanded_faces.append([len(expanded_vertices), len(expanded_vertices) + 1, len(expanded_vertices) + 2])
    expanded_vertices.extend(obj_vertices[obj_vertex_indices[face_index]])
    expanded_uvs.extend(obj_uvs[obj_uv_indices[face_index]])
assert len(expanded_vertices) == len(expanded_uvs) == len(expanded_faces) * 3
Then use the expanded_* arrays in the call to rasterise, concatenating as in the sample code.
Note I didn't run any of the above code -- there may be bugs, but the idea is there.
If you need the vertex positions to depend on some other tensor (rather than being constant), you may need to do the 'expansion' with tf.gather instead of a loop and numpy indexing.
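A sketch of that vectorised 'expansion' in NumPy, using a made-up toy mesh (with tensors, the fancy-indexing gathers below become tf.gather calls on the same flat index arrays):

```python
import numpy as np

# Toy mesh: 4 vertices, 4 UVs, 2 triangles sharing an edge (all data made up)
obj_vertices = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
obj_uvs = np.array([[0., 0], [1, 0], [1, 1], [0, 1]])
obj_vertex_indices = np.array([[0, 1, 2], [0, 2, 3]])
obj_uv_indices = np.array([[0, 1, 2], [0, 2, 3]])

# One gather per array replaces the python loop; with tensors,
# array[flat_indices] becomes tf.gather(array, flat_indices)
expanded_vertices = obj_vertices[obj_vertex_indices.reshape(-1)]    # (3F, 3)
expanded_uvs = obj_uvs[obj_uv_indices.reshape(-1)]                  # (3F, 2)
expanded_faces = np.arange(obj_vertex_indices.size).reshape(-1, 3)  # (F, 3)
```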
@pmh47 @shinxg Thanks for your responses. I think that's the solution; I will try it.
Hi~ Did you make it? I am still confused about how to do it; I tried and failed. Is there a method for this? Thanks.
def convert_2_pervertex_uv(V, F, UV, TF):
    V_new = np.zeros((F.shape[0]*3, 3))
    UV_new = np.zeros((F.shape[0]*3, 2))
    F_new = np.zeros((F.shape[0], 3))
    for i in range(0, F.shape[0]):
        for j in range(0, 3):
            V_new[3*i+j, :] = V[F[i, j], :]
            UV_new[3*i+j, :] = UV[TF[i, j], :]
            F_new[i, j] = 3*i+j
    return V_new, F_new, UV_new
V: #V×3, F: #F×3, UV: #UV×2, TF: #F×3 (texture indices for faces). V, F, UV, and TF can be obtained here.
Hi~ Thanks! But when I read in the model (SMPL, but high resolution):
And when I use the above code to convert, I get:
Traceback (most recent call last):
File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 200, in <module>
main()
File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 107, in main
[V_new, F_new, UV_new] = convert_2_pervertex_uv(V, F, UV, TF)
File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 14, in convert_2_pervertex_uv
V_new[3*i+j, :] = V[F[i, j], :]
IndexError: index 27554 is out of bounds for axis 0 with size 27554
I think this is caused by the index.
Thanks!
@shinxg Hi~ I think the problem is caused by the indexing (OBJ indices are 1-based), so I just modified the code:
V_new[3*i+j, :] = V[F[i, j]-1, :]
UV_new[3*i+j, :] = UV[TF[i, j]-1, :]
def convert_2_pervertex_uv(V, F, UV, TF):
    V_new = np.zeros((F.shape[0]*3, 3))
    UV_new = np.zeros((F.shape[0]*3, 2))
    F_new = np.zeros((F.shape[0], 3))
    for i in range(0, F.shape[0]):
        for j in range(0, 3):
            V_new[3*i+j, :] = V[F[i, j]-1, :]
            UV_new[3*i+j, :] = UV[TF[i, j]-1, :]
            F_new[i, j] = 3*i+j
    return V_new, F_new, UV_new
And I get output, but the result seems wrong:
The original input texture is:
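The index fix can be sanity-checked independently of SMPL on a toy quad with 1-based OBJ-style indices; a small self-contained sketch (toy data made up, F_new given an integer dtype so it is usable as indices):

```python
import numpy as np

def convert_2_pervertex_uv(V, F, UV, TF):
    # Same function as above, with the 1-based OBJ indices shifted to 0-based
    V_new = np.zeros((F.shape[0] * 3, 3))
    UV_new = np.zeros((F.shape[0] * 3, 2))
    F_new = np.zeros((F.shape[0], 3), dtype=int)
    for i in range(F.shape[0]):
        for j in range(3):
            V_new[3*i+j, :] = V[F[i, j] - 1, :]
            UV_new[3*i+j, :] = UV[TF[i, j] - 1, :]
            F_new[i, j] = 3*i + j
    return V_new, F_new, UV_new

# Toy quad: 4 vertices, 4 UVs, 2 triangles; indices are 1-based as in OBJ files
V = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
UV = np.array([[0., 0], [1, 0], [1, 1], [0, 1]])
F = np.array([[1, 2, 3], [1, 3, 4]])
TF = np.array([[1, 2, 3], [1, 3, 4]])
V_new, F_new, UV_new = convert_2_pervertex_uv(V, F, UV, TF)
```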
@Frank-Dz The reason is that the SMPL UV order is not compatible with OpenGL. Try the following code: UV_new[:, 1] *= -1, which means negating your v coordinate.
@quyanqiu Wow! Great! It works well for me!
Thanks! Best, Frank
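For reference, there are two common v-flip conventions, and which one is right depends on the texture wrap mode and where the texture's origin is assumed; a minimal sketch with made-up UVs:

```python
import numpy as np

UV_new = np.array([[0.25, 0.75],
                   [0.50, 0.10]])

# The suggestion above: negate the v coordinate
uv_negated = UV_new.copy()
uv_negated[:, 1] *= -1

# The other common convention: flip v about the texture midline
uv_flipped = UV_new.copy()
uv_flipped[:, 1] = 1.0 - uv_flipped[:, 1]
```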
Hi~ @quyanqiu Sorry for bothering you again. Do you know how to render an obj with a transparent background or a specific background image? I rendered some results, and all of them look like:
I know I should adjust the background_attributes, but I do not know how to set it.
pixels = dirt.rasterise_deferred(
    vertices=cube_vertices_clip,
    vertex_attributes=tf.concat([
        tf.ones_like(cube_vertices_object[:, :1]),  # mask
        cube_uvs,  # texture coordinates
        cube_normals_world  # normals
    ], axis=1),
    faces=cube_faces,
    background_attributes=(tf.ones([frame_height, frame_width, 6]) * 255),
    shader_fn=shader_fn,
    shader_additional_inputs=[texture, light_direction]
)
Thanks for any help and guidance! Best, Frank
@Frank-Dz It's very easy, bro. The render result is in RGBA format, where the A channel indicates which pixels belong to the model. You can just do the substitution with the following code:
mask = (render_result[:, :, -1] > 0)[..., np.newaxis]
overlay = render_result[:, :, :-1] * mask + (1 - mask) * background
I didn't debug this code, but the idea is there.
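That compositing idea can be sketched as a small self-contained NumPy function (assuming a float RGBA render in [0, 1], which is an assumption about the render's format, not something verified against DIRT's output here):

```python
import numpy as np

def composite_over_background(render_result, background):
    """Blend an (H, W, 4) float RGBA render over an (H, W, 3) background,
    treating any pixel with alpha > 0 as belonging to the model."""
    mask = (render_result[:, :, -1] > 0)[..., np.newaxis].astype(render_result.dtype)
    return render_result[:, :, :-1] * mask + (1.0 - mask) * background
```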
Thanks! But I am still a little confused. I hope I'm not being too much of a bother.
pixels = dirt.rasterise_deferred(
    vertices=cube_vertices_clip,
    vertex_attributes=tf.concat([
        tf.ones_like(cube_vertices_object[:, :1]),  # mask
        cube_uvs,  # texture coordinates
        cube_normals_world  # normals
    ], axis=1),
    faces=cube_faces,
    background_attributes=(tf.ones([frame_height, frame_width, 6]) * 255),
    shader_fn=shader_fn,
    shader_additional_inputs=[texture, light_direction]
)
mask = (pixels[:, :, -1] > 0)[..., np.newaxis]
mybg = tf.ones([frame_height,frame_width,3])
overlay = pixels[:, :,-1] * mask + (1 - mask) * mybg
save_pixels = tf.write_file(
    'textured.jpg',
    tf.image.encode_jpeg(tf.cast(overlay * 255, tf.uint8))
)
The render result is pixels, and its RGB values are between 0 and 1. So I do not understand how it can have 4 channels.
Can you give me more guidance?
Thank you very much!
Best, Frank
Your code is wrong here: overlay = pixels[:, :, -1] * mask + (1 - mask) * mybg. It should be: overlay = pixels[:, :, :-1] * mask + (1 - mask) * mybg.
As for why the RGB values are between 0 and 1 and why the result can have 4 channels, maybe you should learn what rasterization is first; it's a classical topic in computer graphics.
Thanks again! So I just print it out:
pixels = dirt.rasterise_deferred(
    vertices=cube_vertices_clip,
    vertex_attributes=tf.concat([
        tf.ones_like(cube_vertices_object[:, :1]),  # mask
        cube_uvs,  # texture coordinates
        cube_normals_world  # normals
    ], axis=1),
    faces=cube_faces,
    background_attributes=(tf.ones([frame_height, frame_width, 6]) * 255),
    shader_fn=shader_fn,
    shader_additional_inputs=[texture, light_direction]
)
print(pixels.shape)
The output is "(600, 600, 3)".
And after using the changed code
mask = (pixels[:, :, -1] > 0)[..., np.newaxis]
mybg = tf.ones([frame_height, frame_width, 3])
overlay = pixels[:, :, :-1] * mask + (1 - mask) * mybg
save_pixels = tf.write_file(
    'textured.jpg',
    tf.image.encode_jpeg(tf.cast(overlay * 255, tf.uint8))
)
I got the following error:
WARNING:tensorflow:From /home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/dirt/matrices.py:41: calling norm (from tensorflow.python.ops.linalg_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
(600, 600, 3)
Traceback (most recent call last):
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Index out of range using input dim 3; input has only 3 dims for 'strided_slice_3' (op: 'StridedSlice') with input shapes: [600,600,3], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT_render.py", line 213, in <module>
main()
File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT_render.py", line 201, in main
overlay = pixels[:, :, :,-1] * mask + (1 - mask) * mybg
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 654, in _slice_helper
name=name)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 820, in strided_slice
shrink_axis_mask=shrink_axis_mask)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9356, in strided_slice
shrink_axis_mask=shrink_axis_mask, name=name)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1823, in __init__
control_input_ops)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1662, in _create_c_op
raise ValueError(str(e))
ValueError: Index out of range using input dim 3; input has only 3 dims for 'strided_slice_3' (op: 'StridedSlice') with input shapes: [600,600,3], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.
Process finished with exit code 1
That's why I am confused: the output of dirt.rasterise_deferred is a 3-channel image.
Thanks again!
@quyanqiu Hi~ I know there is code that uses rasterise:
pixels = dirt.rasterise(
    vertices=cube_vertices_clip,
    faces=cube_faces,
    vertex_colors=vertex_colors_lit,
    background=tf.zeros([frame_height, frame_width, 3]) * 255,
    width=frame_width, height=frame_height, channels=3
)
print(pixels.shape)
The output is (480, 640, 3), so it seems the channel count is 3 there too. The code is in samples/simple.py.
You may need to consult the author for more detail. If you just need rendering without backward gradients, you could use another renderer such as pyrender.
Ok! Anyway, thanks!
@Frank-Dz You need to use rasterise_deferred, similar to in samples/textured.py, but with an extra parameter to shader_fn for the background image. Assuming you have a background image bg_im of size [frame_height, frame_width, 3], then:
- add bg_im to the list in the shader_additional_inputs parameter of rasterise_deferred
- add bg_im as a new parameter at the top of shader_fn
- at the end of shader_fn, change to something like pixels = (diffuse_contribution + ambient_contribution) * mask + bg_im * (1. - mask)
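The arithmetic of that last step can be sanity-checked outside TensorFlow; here is the same compositing in NumPy (diffuse_contribution, ambient_contribution, mask, and bg_im mirror the names above, and all data below is made up):

```python
import numpy as np

def shade_with_background(diffuse_contribution, ambient_contribution, mask, bg_im):
    # mask comes from the constant-1 vertex attribute, so after rasterisation
    # it is 1 where the mesh covers a pixel and 0 over the background
    return (diffuse_contribution + ambient_contribution) * mask + bg_im * (1. - mask)

diffuse = np.full((1, 2, 3), 0.6)
ambient = np.full((1, 2, 3), 0.2)
mask = np.array([[[1.], [0.]]])     # first pixel covered, second not
bg = np.zeros((1, 2, 3))
out = shade_with_background(diffuse, ambient, mask, bg)
```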
You saved my day! It works well for me! Thanks!
Hi, I read your textured.py code and found that in the rasterise_deferred function you concatenate vertex coordinates, texture coordinates, and vertex normals together. But in most scenes the number of vertices and the number of texture coordinates (vt) are not the same, which means they cannot be concatenated with tf.concat. For example, in the SMPL body model there are 6890 vertices and 7576 vt coordinates. How can this issue be solved?