Open biervat opened 1 year ago
Just edit the filter_smallset function in the Dataset class:
def filter_smallset(self):
...
# "training -> train"
if self.split == "train":
...
else:
# add self.max_prim >= len(target) for val and test data
if self.max_prim >= len(target) >= self.filter_num:
...
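Since the class bodies are elided above, here is a minimal, self-contained sketch of the filtering rule being proposed: train keeps samples at or above the lower bound, while val/test additionally cap the primitive count at max_prim. The function signature and names (targets, filter_num) are assumptions for illustration, not the repo's actual API.

```python
def filter_smallset(targets, split, filter_num, max_prim):
    """Keep samples whose primitive count falls in the allowed range.

    Hypothetical standalone version of the logic in the snippet above.
    """
    kept = []
    for target in targets:
        if split == "train":
            # training only enforces the lower bound
            if len(target) >= filter_num:
                kept.append(target)
        else:
            # val/test also enforce the upper bound max_prim
            if max_prim >= len(target) >= filter_num:
                kept.append(target)
    return kept
```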
Image size = 100, max_prim = 1200, and the change below
def filter_smallset(self):
...
# "training -> train"
if self.split == "train":
...
else:
# add self.max_prim >= len(target) for val and test data
if self.max_prim >= len(target) >= self.filter_num:
...
did not rectify the CUDA error.
I think the issue might be coming from the definition of the CADDataLoader class:
def __init__(self, split='train', do_norm=True, cfg=None, max_prim=12000):
where max_prim is supplied and used as a standalone argument, rather than being read from the cfg object, which is what the external arg actually updates.
Long story short, either pass it explicitly when constructing the class:
train_dataset = CADDataLoader(split='train', do_norm=cfg.do_norm, cfg=cfg, max_prim=cfg.max_prim)
or set self.max_prim = cfg.max_prim at line 17 of dataset.py.
I still get the out-of-memory error at around 43%. I tried reducing args.max_prim with many different values, even as low as 1, but it doesn't seem to change anything; it runs out of memory at the same point every time.