ChrisWu1997 / PQ-NET

code for our CVPR 2020 paper "PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes"
MIT License

Retrieve part order & Hyperparameter Tuning #33

Closed CKocher closed 1 year ago

CKocher commented 2 years ago

Hi guys, I'm currently trying to figure out where in your code the sorted assembly part order (Fig. 10 in your Paper) that is calculated by the network is available for further use. Could you help me out with this? That'd help me a lot!

ChrisWu1997 commented 2 years ago

Hi,

That part is not included in this repo, and I probably couldn't find it now. Sorry for that. But it should be straightforward to implement. As the paper states, the only modification is to randomly shuffle the input part sequence. Basically, you shuffle part_feature_seq before this line (padding needs extra care).
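A minimal sketch of that shuffle, assuming part_feature_seq is a (batch, max_parts, feat_dim) padded tensor and that a per-sample part count is available (the variable names besides part_feature_seq are illustrative, not the repo's actual ones):

```python
import torch

def shuffle_parts(part_feature_seq, n_parts):
    """Randomly permute only the valid (non-padded) parts of each sample.

    part_feature_seq: (batch, max_parts, feat_dim) padded tensor
    n_parts:          (batch,) number of real parts per sample
    """
    shuffled = part_feature_seq.clone()
    for i, n in enumerate(n_parts):
        perm = torch.randperm(int(n))
        shuffled[i, :n] = part_feature_seq[i, perm]  # padding rows stay in place
    return shuffled
```

Permuting only the first `n` rows is the "padding needs extra care" part: the padded tail must keep its position so the sequence lengths and masks stay valid.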

CKocher commented 2 years ago

Thank you for your super fast response! I'll try it out immediately! Is it possible to input the data differently than with the GAN you mentioned in the Test section? That's the other part I'm currently struggling with.

ChrisWu1997 commented 2 years ago

What do you mean by differently? Can you elaborate more on that?

CKocher commented 2 years ago

Sorry, I want to use PQ-NET as part of an assembly sequence planning system in robotics, so I'd like to give it the parts of an object (for example, the parts of a lamp that I want to assemble) and let it assemble them into a complete lamp. The part order is especially important for me, since I want to use it for next-step prediction.

So far I've only managed to run the network by creating the fake vectors with the GAN that you mentioned in the README, but I thought the GAN was only for demo purposes, so that everyone can quickly test with some dummy vectors. I don't need random fake vectors, since I already have the specific parts that I want to assemble. Can I feed them into the network in some way, or do I have to use some kind of GAN generator to make the system work? Sorry if I'm completely missing something here, I'm fairly new to this field! I hope this helps you understand my problem. Thank you very much for your time!

ChrisWu1997 commented 2 years ago

OK, so if I understand it correctly, figure 10 in our paper is mostly what you need, i.e., input parts of an arbitrary order and output an organized sequence of parts. Correct?

To do that with PQ-Net, the GAN is not needed. Conceptually, you need to train a PQ-Net seq2seq autoencoder with shuffled input parts. Take a look at Figure 2(b) in our paper; all you need to do is shuffle the input part order. In terms of implementation, just make some modifications to ensure part_feature_seq is in a random part order (this line), so that the model is trained to recover the correct part order (target_seq). At inference time, run test.py to call pqnet.reconstruct(data) (this line), where you provide your parts of arbitrary order in data.
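The training setup described above boils down to pairing a randomly ordered input with a canonically ordered target. A toy sketch of building such a pair, outside the repo's actual data pipeline (the function name and tensor layout are assumptions for illustration):

```python
import torch

def make_training_pair(parts):
    """Build a (shuffled_input, ordered_target) pair for order-recovery training.

    parts: (n_parts, feat_dim) tensor in canonical (ground-truth) order
    """
    perm = torch.randperm(parts.size(0))
    # Encoder sees a random order; decoder is supervised with the canonical order,
    # so the model learns to sort the parts, as in Fig. 10 of the paper.
    return parts[perm], parts
```

The decoder's target stays fixed while the encoder input varies across epochs, which is what forces the network to learn the ordering rather than memorize positions.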

Lastly, I know another part assembly paper that may be in your interest: https://hyperplane-lab.github.io/Generative-3D-Part-Assembly/

CKocher commented 2 years ago

Thank you so much for all your super fast help and the recommended paper! I'll read / try it out right away! :)

CKocher commented 2 years ago

Hi,

sorry for reopening this issue. I'm so close to making the system run as I need it to, but there's one last little problem. Shuffling the input parts as you described above works perfectly now (thank you again for that!), but I'm currently struggling to recover which part has been sorted to which position in the sequence. I've annotated the parts in the following figure from your paper to describe the problem a bit better:

[Annotated version of Fig. 10 from the paper]

How did you identify the individual parts in the output order in Figure 10 (for example, that a certain bit of the output sequence belongs to the leg with index 4)? I already tried adding an identifier variable to the input parts so that I could recognize them in the output order, but that didn't work, and neither did analyzing the vector values to find similarities. The part I'm most interested in is the last one of the output sequence (5 in the figure above), since that is an important indicator for my assembly system.

ChrisWu1997 commented 2 years ago

Hi, glad to hear you made some progress!

As I recall, we didn't explicitly find the correspondence between the input and output sequences. The colors are only for demonstration purposes; all we did was output the parts in a consistent order (consistent across the dataset).

To find the correspondence, my first thought would be to use the cosine similarity between part vectors to establish a bipartite matching. But it seems that didn't work out, as you described, no? Can you give more details on why using similarity between vectors fails?
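A sketch of that idea: compute pairwise cosine similarities and solve the bipartite matching with the Hungarian algorithm (scipy's `linear_sum_assignment`). This is a generic matching routine, not code from the PQ-NET repo:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_parts(input_vecs, output_vecs):
    """Match each output part vector to an input part via cosine similarity.

    input_vecs, output_vecs: (n_parts, dim) arrays
    Returns out_to_in, where out_to_in[i] is the index of the input part
    assigned to output position i.
    """
    a = input_vecs / np.linalg.norm(input_vecs, axis=1, keepdims=True)
    b = output_vecs / np.linalg.norm(output_vecs, axis=1, keepdims=True)
    sim = b @ a.T                              # (n_out, n_in) cosine similarities
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    out_to_in = np.empty(len(rows), dtype=int)
    out_to_in[rows] = cols
    return out_to_in
```

Unlike a per-row argmax, the assignment guarantees a one-to-one matching, which matters when several part vectors are similar (e.g. four near-identical chair legs).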

CKocher commented 2 years ago

Hi, sorry it took me so long to reply! I think I must've made a mistake when calculating the cosine similarity, which might be fixed by now.

Just to be sure: I'm currently trying everything out on the pre-trained lamp network and data that you provided. What makes me wonder whether my results are legit is that the range of the cosine similarity is very small (given two random parts of one lamp shape, the cosine similarity is mostly between 0.93 and 0.99).

To calculate the cosine similarity between input_seq and output_seq, I had to shrink input_seq from 140 to 134 elements so that it has the same length as output_seq. I did this by cutting off the last 6 values of input_seq, since they looked a lot like padding to me. Did you use padding for input_seq?

As you can see in the picture below, the cosine-similarity calculations show that the network never changes the order of the parts (when comparing input_seq and output_seq, the parts with the same index always achieve the highest cosine similarity, so I'd guess this could be an indicator that the cosine similarity works as intended, because the net might've been trained to keep the parts in this order). I'm currently training the network with the shuffled parts from above to see what happens then.

[Screenshot: cosine-similarity results between input_seq and output_seq]

Thank you again for all your help :)

ChrisWu1997 commented 2 years ago

I think the additional dimensions (140 − 134 = 6) are the one-hot vector indicating the total number of parts. We found that including it improves the results a bit. For calculating the cosine similarity, it's OK to remove it. Overall, I think what you did is right! Looking forward to the results!
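Putting the two comments above together, a small sketch of stripping the trailing one-hot dimensions before comparing vectors. The 140/134 split and the 6-dim one-hot come from this thread; `strip_one_hot` and `cosine_matrix` are illustrative names, not repo functions:

```python
import numpy as np

ONE_HOT_DIM = 6  # 140 - 134, per this thread: one-hot encoding of the part count

def strip_one_hot(vecs, one_hot_dim=ONE_HOT_DIM):
    """Drop the trailing one-hot part-count dimensions so input and output
    part vectors have comparable lengths."""
    return vecs[..., :-one_hot_dim]

def cosine_matrix(a, b):
    """Pairwise cosine similarities between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T
```

Since the one-hot suffix is identical for every part of the same shape, leaving it in would also inflate all similarities toward 1, which may partly explain the narrow 0.93 to 0.99 range observed above.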