MIT-SPARK / MiDiffusion


KeyError: 'fpbpn' when training with PointNet as the image feature extractor #3

Open AnarchistKnight opened 1 month ago

AnarchistKnight commented 1 month ago

I switched from ResNet18 to PointNet, since your paper says that PointNet better captures the floor boundary. Moreover, the paper reports that DDPM+PointNet achieves a lower KL divergence than DDPM+ResNet, which suggests that PointNet may help fit the underlying probability distribution. I was curious how much PointNet helps in MiDiffusion, so I simply switched the extractor. Unfortunately, `KeyError: 'fpbpn'` occurred at line 47 of `networks\diffusion_scene_layout_mixed.py`: `room_feature = sample_params["fpbpn"]`

May I ask whether I missed a data-preprocessing step needed to train with PointNet as the image feature extractor?
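For context, here is a minimal sketch of the failure mode. The key name `"fpbpn"` comes from the traceback above; the helper `get_room_feature` and the guess that the default pipeline only produces the mask-based (`"room_layout"`) feature are my own assumptions, not code from the repository:

```python
# Hypothetical reproduction of the KeyError: the PointNet branch reads
# "fpbpn" from sample_params, but a dataset preprocessed for the
# mask-based (ResNet18) path may only contain the rasterized room mask.
sample_params = {
    "room_layout": [[0.0] * 64],  # mask-derived feature (assumed key name)
}

def get_room_feature(sample_params, key="fpbpn"):
    """Guarded lookup that raises a readable hint instead of a bare KeyError."""
    if key not in sample_params:
        raise KeyError(
            f"'{key}' missing from sample_params; the dataset was likely "
            "preprocessed without the boundary-point feature that the "
            "PointNet extractor expects."
        )
    return sample_params[key]
```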

AnarchistKnight commented 1 month ago

Also, I saw that the room mask is mapped to a 64-dimensional vector embedding. To me, saving the room boundary polygon coordinates as an \R^{64} vector seems much simpler and more direct.
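To illustrate what such an encoding could look like, here is a minimal sketch, assuming the boundary is a closed 2D polygon: resample 32 points at equal arc-length spacing along the perimeter and flatten them into a 64-dimensional vector. This is my own illustrative construction, not the encoding used by MiDiffusion:

```python
import numpy as np

def polygon_to_vec64(polygon, n_points=32):
    """Resample a closed 2D boundary polygon to n_points equally spaced
    points along its perimeter and flatten to a (2 * n_points,) vector."""
    poly = np.asarray(polygon, dtype=float)
    closed = np.vstack([poly, poly[:1]])               # close the loop
    seg = np.diff(closed, axis=0)                      # edge vectors
    seg_len = np.linalg.norm(seg, axis=1)              # edge lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at each vertex
    targets = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    pts = np.empty((n_points, 2))
    for i, t in enumerate(targets):
        j = np.searchsorted(cum, t, side="right") - 1  # edge containing t
        frac = (t - cum[j]) / seg_len[j] if seg_len[j] > 0 else 0.0
        pts[i] = closed[j] + frac * seg[j]             # interpolate along edge
    return pts.reshape(-1)  # shape (64,) for n_points=32
```

For a unit square this yields 8 evenly spaced samples per side, so the vector is invariant to how many raw vertices the polygon has.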

Xmy1120 commented 1 month ago

> Also, I saw that the room mask is mapped to a 64-dimensional vector embedding. To me, saving the room boundary polygon coordinates as an \R^{64} vector seems much simpler and more direct.

Have you solved this problem? I have the same issue.

AnarchistKnight commented 1 day ago

> > Also, I saw that the room mask is mapped to a 64-dimensional vector embedding. To me, saving the room boundary polygon coordinates as an \R^{64} vector seems much simpler and more direct.
>
> Have you solved this problem? I have the same issue.

Well, I suggest you just use ResNet18.