Closed: rakeshshrestha31 closed this issue 6 months ago
Thanks for reporting this. I think it's a bug indeed and I have updated the code to fix it. Feel free to pull the code and close this issue.
Since the height and width of the image are the same in the current environment (48 x 48 IIRC), I don't think it will hurt the algorithm, because the conv2d parameters are free to adapt. As a sanity check, you can set a breakpoint at this point in debug mode and log the resulting image to verify that it really is (B, C, W, H).
Thank you for sharing your wonderful work! I have a question regarding the following lines in data augmentation: https://github.com/chongyi-zheng/stable_contrastive_rl/blob/cfe5e17e3c63406e06bca9a96de67a84aa5c4c62/rlkit/torch/sac/stable_contrastive_rl_trainer.py#L128-L135
It looks to me that at the end of data augmentation the tensors are still in (B, C, H, W) order, which differs from the (B, C, W, H) order of the original observation/goal. The comment mentions transposing to (B, C, W, H), but the code only reshapes and never actually transposes. Is this deliberate?
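For anyone else reading this thread, a minimal sketch of the distinction being discussed (the shapes below are hypothetical, not taken from the repo): `reshape` reinterprets the flat memory layout, while `permute` actually swaps axes, so the two produce tensors of the same shape but with different contents.

```python
import torch

# Hypothetical non-square shapes to make the difference visible.
B, C, H, W = 2, 3, 4, 5
x = torch.arange(B * C * H * W, dtype=torch.float32).reshape(B, C, H, W)

# permute swaps the spatial axes, producing a true (B, C, W, H) transpose...
transposed = x.permute(0, 1, 3, 2)

# ...while reshape only reinterprets the flat memory as a (B, C, W, H) grid.
reshaped = x.reshape(B, C, W, H)

# Same shape, different values:
assert transposed.shape == reshaped.shape == (B, C, W, H)
assert not torch.equal(transposed, reshaped)
```

With square images (H == W, as in the 48 x 48 environment mentioned above), `reshape` to (B, C, W, H) is a no-op on the values, which is why the bug is harmless there even though the comment and the code disagree.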