Closed: Arthur151 closed this issue 6 years ago.
This depends on many factors (mesh size, rendered image size, GPU model, etc.). I don't know of a way to significantly speed up the backward computation. Have you tried running the original Chainer code released by the authors to see if there is any difference?
Yes, I have just tried the original Chainer code with a human body mesh (about 7000 vertices, 13000 faces, rendered image size 256x256), and the speed is on the same level (more or less) as yours.
I am closing this issue for now.
@Arthur151 Hi, do you render an SMPL model? I get more than 200 ms when rasterizing a silhouette, with no backward computation. I wonder if something is wrong in my code:
```python
import torch
import neural_renderer as nr

# move vertices and faces to the GPU and add a batch dimension
faces = torch.from_numpy(faces).cuda()
faces = faces[None, :, :]
ver = torch.from_numpy(ver).cuda()
ver = ver[None, :, :]

# gather per-face vertex coordinates, then rasterize a 256x256 silhouette
face_vertices = nr.vertices_to_faces(ver, faces)
silhouettes = nr.rasterize_silhouettes(face_vertices, 256, True)
```
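For reference, the `nr.vertices_to_faces` step is just an index gather and should be cheap; the cost is in the rasterization itself. A minimal NumPy sketch of that gather (the function name `gather_face_vertices` and the shapes are my own, not part of the neural_renderer API):

```python
import numpy as np

def gather_face_vertices(vertices, faces):
    """Gather per-face vertex coordinates.

    vertices: (N, 3) float array of vertex positions
    faces:    (F, 3) int array of vertex indices per triangle
    returns:  (F, 3, 3) array of triangle corner coordinates
    """
    # fancy indexing broadcasts the (F, 3) index array over the vertex axis
    return vertices[faces]

# tiny example: one quad split into two triangles
verts = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [1., 1., 0.],
                  [0., 1., 0.]])
tris = np.array([[0, 1, 2],
                 [0, 2, 3]])
print(gather_face_vertices(verts, tris).shape)  # (2, 3, 3)
```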
If I use a renderer like this one (ported to C++): https://github.com/YadiraF/PRNet/blob/master/utils/render.py it only takes a few ms. I wonder if I can speed this up.
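That kind of renderer is a plain CPU rasterizer with no gradient bookkeeping, which is part of why it can be so fast on a single small mesh. A minimal NumPy sketch of the general idea, a per-pixel inside-triangle test producing a silhouette mask (this is my own illustration, not the PRNet code itself):

```python
import numpy as np

def rasterize_silhouette(triangles, size):
    """Naive CPU silhouette rasterizer.

    triangles: (F, 3, 2) array of 2D triangle corners in pixel coordinates
    size:      output image is size x size
    returns:   (size, size) boolean mask, True where any triangle covers
    """
    ys, xs = np.mgrid[0:size, 0:size]
    px = np.stack([xs + 0.5, ys + 0.5], axis=-1)  # pixel centers
    mask = np.zeros((size, size), dtype=bool)
    for a, b, c in triangles:
        # signed edge function: cross product of (pixel - p) with (q - p)
        def edge(p, q):
            return ((px[..., 0] - p[0]) * (q[1] - p[1])
                    - (px[..., 1] - p[1]) * (q[0] - p[0]))
        e0, e1, e2 = edge(a, b), edge(b, c), edge(c, a)
        # a pixel is inside if all three edge signs agree (either winding)
        inside = (((e0 >= 0) & (e1 >= 0) & (e2 >= 0))
                  | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0)))
        mask |= inside
    return mask

# one triangle rendered into a 64x64 silhouette
tri = np.array([[[2., 2.], [60., 4.], [30., 58.]]])
sil = rasterize_silhouette(tri, 64)
```

The differentiable renderer does much more work than this per pixel (and keeps intermediates for the backward pass), so some gap is expected; the question is how large that gap has to be.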
@ypflll Have you solved this problem? I also find rendering the mesh very slow.
Thanks for your work. It costs over 0.1 s for the backward computation on a single sample. As the batch size gets larger, this significantly slows down my training. Looking forward to your reply!