myc634 / UltraLiDAR_nusc_waymo

MIT License

paper emphasizes joint training #10

Open Kafka2122 opened 3 weeks ago

Kafka2122 commented 3 weeks ago

The paper emphasizes joint training of the sparse encoder and the dense VQ-VAE to optimize the codebook and improve generalization. But joint training is not done in this code, right? Is there a reason for that?

myc634 commented 3 weeks ago

Hi, since joint training is specifically for sparse-to-dense generation, we only do unconditional generation here.

Kafka2122 commented 3 weeks ago

> Hi, since joint training is specifically for sparse-to-dense generation, we only do unconditional generation here.

OK, so if unconditional generation works, shouldn't your model be able to do conditional generation as well? The paper mentions that for conditional generation we just need to provide partial codebook indices and the transformer will predict the full scene.
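For reference, the conditional-generation procedure described in the paper (fix the known codebook indices, let the transformer fill in the masked rest) can be sketched roughly as below. This is not code from this repo: `MASK`, `conditional_generate`, and `predict_fn` are all hypothetical names, and the dummy predictor stands in for the trained transformer; the iterative fill schedule is a MaskGIT-style assumption.

```python
import random

MASK = -1  # sentinel for an unknown codebook index (assumption)

def conditional_generate(codes, predict_fn, steps=4):
    """Fill MASKed entries of a partial codebook-index map.

    codes: list of ints, MASK where unknown; known entries are the
    partial-scene condition and are never overwritten.
    predict_fn(codes, i): returns a code index for position i
    (stands in for the trained transformer's prediction).
    """
    codes = list(codes)
    masked = [i for i, c in enumerate(codes) if c == MASK]
    # fill a fraction of the remaining masked positions per step
    for step in range(steps):
        if not masked:
            break
        k = max(1, len(masked) // (steps - step))
        fill, masked = masked[:k], masked[k:]
        for i in fill:
            codes[i] = predict_fn(codes, i)
    return codes

if __name__ == "__main__":
    random.seed(0)
    partial = [7, MASK, 3, MASK, MASK, 9]            # partial scene
    dummy = lambda codes, i: random.randrange(1024)  # stand-in transformer
    full = conditional_generate(partial, dummy)
    assert full[0] == 7 and full[2] == 3 and full[5] == 9  # condition kept
    assert MASK not in full                                # scene completed
```

The key point is only that conditioned tokens are pinned while masked ones are predicted, so a model trained for unconditional generation does not automatically handle this unless it was trained (or prompted) with partial inputs.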