CuiRuikai / Partial2Complete

[ICCV 2023] P2C: Self-Supervised Point Cloud Completion from Single Partial Clouds
MIT License

Request for Unsupervised Code Details and Guidance #32

Open XDUchpn opened 21 hours ago

XDUchpn commented 21 hours ago

Hello,

Thank you for your work on P2C, and for sharing the code. You really did an excellent job!

I have some questions regarding the implementation of the unsupervised method (P2C*) mentioned in the paper, and I would like to request additional details or guidance.

Question 1: Implementation of Unsupervised Training (P2C*)

From my understanding of the code, the current training setup uses only partial point clouds in a self-supervised way. In the paper, however, P2C* is introduced as an unsupervised variant that utilizes unpaired partial and complete point clouds during training. I could not locate the specific code to load unpaired complete point clouds or to apply the Chamfer Distance (CD-L2) or similar losses for this purpose.

Could you please confirm whether the current code includes P2C* as described in the paper, or if it focuses solely on the self-supervised variant (P2C)?

Question 2: Unsupervised Training Assumptions

If P2C* is not included in the current code, I would like to ask if my assumption about implementing it is correct:

  1. Modify the data loading: Load both unpaired partial and complete point clouds for training.

  2. Integrate the CD-L2 loss: Incorporate unpaired complete point clouds in the loss calculation during training, allowing the model to learn from the distribution of complete shapes.

Could you please confirm if these steps align with your approach, or if there are additional considerations? Thank you for your time, and I appreciate any insights you can provide.

Best regards

CuiRuikai commented 20 hours ago

There is no need to modify the code to implement the unpaired training. What you need to do is just copy the complete objects from the 3D-EPN or PCN dataset into the partial object folder; the dataloader will then load both partial and complete objects.
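For instance, something along these lines would work (the folder layout and the .npy extension here are placeholders, not the repo's actual paths; adapt them to your local copy of the dataset):

```python
# Illustrative only: mirror complete shapes into the partial folder so the
# existing dataloader sees them as ordinary training objects.
# Paths and the .npy extension are assumptions about the local layout.
import shutil
from pathlib import Path

complete_dir = Path("data/EPN/complete/03001627")  # hypothetical layout
partial_dir = Path("data/EPN/partial/03001627")    # hypothetical layout

for f in complete_dir.glob("*.npy"):
    # Prefix the copies to avoid name collisions with existing partial files.
    shutil.copy(f, partial_dir / f"complete_{f.name}")
```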

XDUchpn commented 20 hours ago


Actually, I ran your code successfully for the partial self-supervised setting. Now I am focusing on the P2C* results in Section 4.3, and I have some questions regarding the unsupervised training setup for P2C* as mentioned in Section 3.4 of the paper.

  1. Paper Statement and Assumptions: In Section 3.4, the paper states:

"Since the unsupervised method utilizes unpaired partial and complete samples for training, we provide results of our P2C trained with the same data source, indicated as P2C*. The results on the 3D-EPN dataset are shown in Tab. 1, demonstrating the superiority of our method. P2C outperforms the best unpaired method [3] by 2.7 w.r.t CD-ℓ2 without any design to utilize known complete example shapes."

From this description, I assumed that in the P2C* unsupervised training setup, both unpaired partial and complete point clouds are loaded during training, with complete clouds used either as a reference distribution or as part of a loss function (e.g., Chamfer Distance L2). This design would help the model learn from the distribution of complete shapes, aligning with the typical goals of unsupervised point cloud completion.
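For reference, by CD-L2 I mean the standard symmetric squared-distance Chamfer loss; a generic sketch (not code from this repository) would be:

```python
# Generic CD-L2 sketch in plain PyTorch (O(N*M) memory; fine for small clouds).
import torch

def chamfer_l2(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """x: (B, N, 3), y: (B, M, 3) -> mean symmetric squared-distance Chamfer."""
    d = torch.cdist(x, y) ** 2  # (B, N, M) pairwise squared distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```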

  2. Code Observations: In the provided code, I observed the following:

Training Data: In the current setup, only partial point clouds are loaded and used during training, even when P2C* is indicated. Complete point clouds do not seem to be used for forward computation or loss calculation during training.

Loss Calculation: The get_loss function in the code calculates loss only based on partial point clouds, and complete clouds are not involved in backward propagation. As a result, the training process does not appear to incorporate complete point clouds, which would typically be necessary to realize the unsupervised training described in the paper.

  3. Question on Experimental Results and the Role of Complete Point Clouds in P2C*: In Table 1 of the paper, the P2C* model shows an improvement over the fully self-supervised P2C (which uses only partial inputs). This suggests that some element of the unsupervised setup provided a benefit. However, since the code does not include any mechanism for incorporating complete point clouds during training, I am unsure how the improvement was achieved. Here are my questions based on these observations:

Clarification on Unsupervised Training: Could you clarify if complete point clouds were intended to be used directly during training (e.g., as part of the loss function) in the P2C* setup? If so, should additional modifications be made to the code to implement this?

Effect of Data Source Consistency: If P2C* simply ensures that complete and partial point clouds share the same data source without direct usage of complete clouds in training, could the observed improvement be due to other factors (e.g., data distribution effects or baseline adjustments)?

Thank you for your patience!

CuiRuikai commented 19 hours ago
  1. Unlike unsupervised methods that know whether an object is complete or incomplete, there is no explicit distinction between complete and partial objects in ours. This aligns with our assumption of unknown incompleteness. Therefore, "with complete clouds used either as a reference distribution or as part of a loss function (e.g., Chamfer Distance L2)" is not correct. We have no specific design to utilize complete objects; we just mix complete and partial objects for training.

  2. As I mentioned, there is no need to modify the code to utilize complete data. To include complete objects for training, we only need to copy them into the partial object folder. During training, the dataloader then randomly loads either a partial or a complete object, and our framework learns completion by creating synthetic incompleteness and restoring it (see the sketch after this list).

  3. There is no difference between P2C and P2C* except the data source.
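To make point 2 concrete, here is a minimal sketch of what the mixed loading amounts to; the class name, folder layout, and cropping heuristic are made up for illustration and are not the repo's actual dataset code:

```python
# Illustrative only: a folder holding both partial and complete clouds is
# read by one dataset with no partial/complete label; every loaded cloud is
# cropped to create synthetic incompleteness, whatever its original state.
import numpy as np
from pathlib import Path
from torch.utils.data import Dataset

class MixedFolderDataset(Dataset):
    def __init__(self, folder: str):
        # Partial scans and copied-in complete shapes are indistinguishable.
        self.files = sorted(Path(folder).glob("*.npy"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        pts = np.load(self.files[idx]).astype(np.float32)  # (N, 3)
        # Synthetic incompleteness: remove the points nearest a random
        # anchor point, regardless of the object's original completeness.
        anchor = pts[np.random.randint(len(pts))]
        dist = np.linalg.norm(pts - anchor, axis=1)
        keep = dist > np.quantile(dist, 0.25)  # crop roughly 25% of points
        return pts[keep], pts[~keep]  # visible part, self-made "missing" part
```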

Could you clarify if complete point clouds were intended to be used directly during training (e.g., as part of the loss function) in the P2C* setup? Yes, but you only need to treat complete objects the same as partial objects.

If so, should additional modifications be made to the code to implement this? No additional modifications are needed.

If P2C* simply ensures that complete and partial point clouds share the same data source without direct usage of complete clouds in training, could the observed improvement be due to other factors (e.g., data distribution effects or baseline adjustments)? Complete and partial objects together form the data source; it is not correct to say that they merely share the same data source.

Let me explain the essential difference between our method and common unsupervised methods. Unsupervised methods know which objects are partial and which are complete, so they use the complete objects as an example distribution and train the network to transport partial objects from the partial distribution to the complete distribution. This framework is seen in [1, 2], among others. Our method in the unsupervised setting, however, does not distinguish partial from complete objects: they are all simply objects with different levels of incompleteness (some incompleteness versus zero incompleteness), so our method has no such reference-distribution concept.

Let me know if you have further concerns.

[1] Energy-based residual latent transport for unsupervised point cloud completion.
[2] Cycle4Completion: Unpaired point cloud completion using cycle transformation with missing region coding.

XDUchpn commented 19 hours ago

Thank you so much for your detailed and patient response. I now realize that my initial understanding of the unsupervised learning setup in P2C was incorrect. My misunderstanding stemmed from not thoroughly reviewing the related works you referenced. I had assumed that P2C followed a conventional unsupervised approach, where complete point clouds would serve as a reference distribution. This assumption led to an inaccurate interpretation of the intended training process in P2C*.

Your explanation clarified that, unlike traditional unsupervised methods that distinguish between complete and partial objects, your framework treats both as varying degrees of incompleteness. This novel perspective on "unknown incompleteness" aligns well with the assumptions of your approach and sheds light on the innovative methodology you introduced in P2C*.

I am grateful for your patience in addressing my questions and for helping me understand this key distinction. Your work has expanded my understanding of unsupervised point cloud completion, and I appreciate the scientific contributions you have made in this area. Thank you for your generosity in sharing both your research and insights.