Open sunjc0306 opened 3 years ago
The minimum of Eq.(5) is achieved at its stationary point, i.e. the point where the gradient of Eq.(5) with respect to p_i is zero.
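As a generic illustration of this stationary-point argument (not the paper's actual Eq.(5), whose exact form is not shown in this thread), consider minimizing a weighted quadratic energy over a point p:

```latex
E(p) = \sum_i w_i \,\| p - q_i \|^2,
\qquad
\nabla_p E = 2 \sum_i w_i (p - q_i) = 0
\;\Longrightarrow\;
p^\ast = \frac{\sum_i w_i\, q_i}{\sum_i w_i}.
```

Setting the gradient to zero turns the minimization into a closed-form (or linear-system) solution; this is the same mechanism by which Eq.(6) is obtained from Eq.(5).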
There is no variable named Tj in Eq.(5), yet it appears in Eq.(6). In addition, I have a question: what is the function named batch_knn_gpu in your library? Thank you very much.
The index i or j is a vertex index, and Tj is the transformation matrix of the j'th vertex. You should treat Eq.(5) over all vertices as a whole and compute the gradient with respect to the vertices' positions. batch_knn_gpu computes the k nearest points from a source point cloud to a target point cloud; in addition, it can process multiple groups of point clouds in parallel.
OK, thanks. Did you release your encoding code? Where can I get the code for the encoding part?
The architecture of the encoder is described in the paper; it is just an MLP (multi-layer perceptron).
Hi, thank you for sharing this work.
I followed this conversation and went back to the paper to look at the architecture of the encoder. Two questions arise:
Thanks!
Okay, I think I got it. It's analogous to keeping the same number of points but changing their features, right? Concerning the dimension: 400 is the output of T, while my second question was about the dimension of the new feature vector (the output of the first layer). I really appreciate your collaboration! Thanks again!
Sorry, I think I made a wrong statement. In this work, I treat the 9*vnum-length feature as a 1-dimensional feature and use a fully connected layer to transform it to a 400-length feature. In this work, we do not utilize the connectivity of the mesh.
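A minimal NumPy sketch of that single fully connected layer, assuming an arbitrary vertex count and a ReLU activation (the activation, weight initialization, and `vnum = 50` are assumptions; only the 9*vnum-to-400 mapping comes from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
vnum = 50                                  # number of mesh vertices (assumed)
feat = rng.standard_normal(9 * vnum)       # flattened 9*vnum feature vector

# One fully connected layer mapping 9*vnum -> 400 (dimensions from the thread)
W = rng.standard_normal((400, 9 * vnum)) * 0.01
b = np.zeros(400)
encoded = np.maximum(W @ feat + b, 0.0)    # ReLU activation (an assumption)
print(encoded.shape)                       # (400,)
```

Note that mesh connectivity never enters: the layer sees only the flattened per-vertex features, consistent with the statement above.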
Excuse me, I want to know how to derive Eq.(6) from Eq.(5) in your paper. Thank you very much.