zhanglijun95 opened this issue 2 months ago
Great work! I am also looking forward to Cond P-Diff!
Hi, thanks for your interest in our work. We will open-source it by the end of September.
Thank you for your confirmation! That's great.
Hi, are there any updates on the new code?
Hi, thanks for your interest in our work. We will open-source it by the end of September.
Any updates on the timeline for Cond P-Diff?
Hi, due to GPU issues, we will update by next week.
Hi, could we first get access to the parameter autoencoder and the UNet architecture used in Cond P-Diff?
Hi, I have sent all the code and the dataset via email. Please check. We will reformat our code soon.
I received it. Thank you so much!
Hi, any updates on the timeline for Cond P-Diff?
Hi, please email jinxiaolong1129@gmail.com and I will share all the details.
Hi Authors,
Thank you for your great work! It inspired me a lot, and I'm really looking forward to your code for Cond P-Diff. Could you share an estimated time for when it will be available?
Besides, I have a question about Cond P-Diff. I saw that the CV task in the paper is style image generation, and that Cond P-Diff generates parameters according to the condition, namely the style image. When you test Cond P-Diff, do you give it a style image it was trained with, or a totally new, unseen style? For example, train Cond P-Diff with 10 style-parameter pairs and test with another 5 styles.
I noticed that in the Appendix you mention the style-continuous dataset and the generalizability of Cond P-Diff to generate parameters for styles in a range not covered by the training set. What I would like to discuss is: do you think it can generate parameters for a totally unseen style? Do you have any insight about this?
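To make the question concrete, here is a minimal toy sketch of the setup I have in mind (my own illustration in PyTorch, not your code; the MLP denoiser, the dimensions, and all names are hypothetical):

```python
import torch
import torch.nn as nn

class CondParamDenoiser(nn.Module):
    """Toy denoiser over flattened network parameters, conditioned on a style embedding."""
    def __init__(self, param_dim=4096, style_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim + style_dim + 1, hidden),  # +1 for the timestep
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, noisy_params, t, style_emb):
        # t: (batch, 1) diffusion timestep normalized to [0, 1]
        return self.net(torch.cat([noisy_params, style_emb, t], dim=-1))

# Ten style-parameter training pairs (random stand-ins for real data).
params = torch.randn(10, 4096)   # flattened fine-tuned parameter vectors
styles = torch.randn(10, 512)    # matching style embeddings

model = CondParamDenoiser()
t = torch.rand(10, 1)
noise = torch.randn_like(params)
noisy = torch.sqrt(1.0 - t) * params + torch.sqrt(t) * noise  # toy noise schedule
loss = ((model(noisy, t, styles) - noise) ** 2).mean()        # predict the noise
loss.backward()

# My question in code form: after training on these 10 styles, does sampling
# conditioned on an *unseen* 11th style embedding yield a working parameter vector?
unseen_style = torch.randn(1, 512)
```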
Really appreciate your response and great work. Thank you!
Best,
Lijun