Open ayushi2610 opened 3 months ago
It looks like torch.cat(mask_filtered) has a different shape than all_rgbs.shape[:-1]. Which dataset are you using? Can you please double-check that?
Hi, I am using DaVinci robotic surgery endoscopy video dataset.
Sorry, I am not familiar with that dataset. In general, you may want to check the shapes of all_rays and all_rgbs: their first two dimensions should be the same.
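A minimal sketch of that consistency check, using hypothetical shapes (the variable names follow the snippet in the error, but the sizes here are invented for illustration): the concatenated, flattened mask must contain exactly as many elements as all_rgbs has pixels, otherwise .view() will raise the RuntimeError below.

```python
import torch

# Hypothetical sizes: N images of H x W pixels each.
H, W, N = 100, 100, 4
all_rgbs = torch.zeros(N, H * W, 3)   # (num_images, num_pixels, 3)
all_rays = torch.zeros(N, H * W, 6)   # rays must match rgbs on the first two dims
mask_filtered = [torch.ones(H * W, dtype=torch.bool) for _ in range(N)]

# The two checks suggested above:
assert all_rays.shape[:2] == all_rgbs.shape[:2], "rays/rgbs frame or pixel count mismatch"

flat = torch.cat(mask_filtered)
assert flat.numel() == all_rgbs.shape[:-1].numel(), (
    f"mask has {flat.numel()} elements, "
    f"all_rgbs expects {all_rgbs.shape[:-1].numel()}"
)

# Only safe once the element counts agree:
mask = flat.view(all_rgbs.shape[:-1])
```

If either assertion fires, the dataset loader is producing a different number of frames (or a different resolution) for the masks than for the RGB images.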
mask_filtered = torch.cat(mask_filtered).view(all_rgbs.shape[:-1])
RuntimeError: shape '[13080000]' is invalid for input of size 17440000

This is the error I am getting. If I set ndc_rays=True, the model runs without error, but the test and validation renders come out as plain white images. What could be the problem?
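For reference, the RuntimeError above is exactly what .view() raises when the element counts disagree; note that 17,440,000 / 13,080,000 = 4/3, which may hint that the masks cover more frames (or a larger resolution) than the RGB data. A tiny repro with the same 4:3 ratio (small stand-in numbers, not the real sizes):

```python
import torch

# 4 elements standing in for the 17,440,000-element flattened mask;
# a target shape of 3 standing in for all_rgbs.shape[:-1] (13,080,000).
flat = torch.zeros(4)
target = torch.Size([3])
try:
    flat.view(target)
except RuntimeError as e:
    # .view() refuses because 4 elements cannot fill a shape of 3.
    print(e)
```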