nv-tlabs / ATISS

Code for "ATISS: Autoregressive Transformers for Indoor Scene Synthesis", NeurIPS 2021

Positional Embedding in autoregressive_transformer.py #19

Closed · coco1578 closed this 1 year ago

coco1578 commented 1 year ago

The BaseAutoregressiveTransformer class defines three positional embeddings, one per coordinate (x, y, z). But in the AutoregressiveTransformer forward function, only pe_pos_x and pe_size_x are used for all three coordinates. Is that right?

pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
pos_f_y = self.pe_pos_x(translations[:, :, 1:2])  # pe_pos_x applied to the y coordinate
pos_f_z = self.pe_pos_x(translations[:, :, 2:3])  # pe_pos_x applied to the z coordinate
pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)

size_f_x = self.pe_size_x(sizes[:, :, 0:1])
size_f_y = self.pe_size_x(sizes[:, :, 1:2])  # pe_size_x applied to the y size
size_f_z = self.pe_size_x(sizes[:, :, 2:3])  # pe_size_x applied to the z size

wamiq-reyaz commented 1 year ago

Hey @coco-archisketch,

These are fixed functions with no learned parameters, so it doesn't really matter which instance is called; they all compute the same thing.
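
A minimal sketch of why (this uses a generic sinusoidal encoding with buffered, non-trainable frequencies; the actual FixedPositionalEncoding in ATISS uses a different frequency schedule, but shares the key property of having no learnable parameters):

import torch
import torch.nn as nn

class SinusoidalEncoding(nn.Module):
    # Stand-in for ATISS's fixed per-coordinate encoding:
    # sinusoids at fixed frequencies, no learnable parameters.
    def __init__(self, proj_dims=64):
        super().__init__()
        freqs = 2.0 ** torch.arange(proj_dims // 2, dtype=torch.float32)
        self.register_buffer("freqs", freqs.view(1, 1, -1))

    def forward(self, x):
        # x: (batch, seq, 1) -> (batch, seq, proj_dims)
        return torch.cat([torch.sin(x * self.freqs),
                          torch.cos(x * self.freqs)], dim=-1)

# Two separately constructed instances compute the same function,
# so feeding the y coordinates through pe_pos_x produces exactly the
# features that pe_pos_y would.
pe_a, pe_b = SinusoidalEncoding(), SinusoidalEncoding()
y = torch.randn(2, 5, 1)
assert torch.equal(pe_a(y), pe_b(y))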

paschalidoud commented 1 year ago

Hi @coco-archisketch,

Thanks a lot for your comment. As @wamiq-reyaz pointed out, this shouldn't really matter, but you are absolutely right that this is a typo. I just committed a fix that solves it.
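
For reference, the corrected forward pass should simply route each coordinate through its own embedding module, along these lines (pe_pos_y and pe_pos_z are the per-axis embeddings mentioned above; pe_size_y and pe_size_z are assumed by analogy, so check the actual commit for the exact names):

pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
pos_f_y = self.pe_pos_y(translations[:, :, 1:2])
pos_f_z = self.pe_pos_z(translations[:, :, 2:3])
pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)

size_f_x = self.pe_size_x(sizes[:, :, 0:1])
size_f_y = self.pe_size_y(sizes[:, :, 1:2])
size_f_z = self.pe_size_z(sizes[:, :, 2:3])
size_f = torch.cat([size_f_x, size_f_y, size_f_z], dim=-1)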

Best, Despoina