I'm also having the exact same issue. Interested if anyone has an explanation of why this is happening.
I found the source of the RecursionError in my case.
In the custom.py file, I had accidentally changed the bodies of prepare_train_img(self, idx) and prepare_test_img(self, idx) from:
return self.pipeline(results)
to
self.pipeline(results)
return results
At first I thought the two were equivalent, but they are not.
For example, when returning results after the call to self.pipeline(results), the img in the results dictionary is not normalized (dtype=uint8), which is not the case with return self.pipeline(results).
It is expected that img should be normalized based on the pipeline I used:
pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
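In case it helps, here is a hypothetical, simplified mimic of why the two returns diverge (not the actual mmseg code; the real transforms differ). A test-time wrapper like MultiScaleFlipAug works on a copy of results and returns a new dict, so only the pipeline's return value holds the transformed, normalized image:

```python
import numpy as np

class FakeMultiScaleFlipAug:
    """Mimics a test-time wrapper: it transforms a *copy* of results and
    returns a new dict, leaving the caller's dict untouched."""
    def __call__(self, results):
        _results = dict(results)                  # copy, like MultiScaleFlipAug
        img = _results['img'].astype(np.float32)  # stand-in for Normalize
        _results['img'] = (img - img.mean()) / (img.std() + 1e-6)
        return _results

class Dataset:
    def __init__(self):
        self.pipeline = FakeMultiScaleFlipAug()

    def prepare_test_img_correct(self, idx):
        results = dict(img=np.zeros((4, 4, 3), dtype=np.uint8))
        return self.pipeline(results)   # transformed dict, img is float32

    def prepare_test_img_broken(self, idx):
        results = dict(img=np.zeros((4, 4, 3), dtype=np.uint8))
        self.pipeline(results)          # return value discarded
        return results                  # stale dict, img is still uint8

ds = Dataset()
print(ds.prepare_test_img_correct(0)['img'].dtype)  # float32
print(ds.prepare_test_img_broken(0)['img'].dtype)   # uint8
```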
More generally, I guess the RecursionError occurs because the dataloader got an "invalid" (for whatever reason) dictionary back from the dataset's __getitem__(self, idx) method.
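To illustrate that guess: a dataset that retries with another random index whenever a sample is judged invalid can recurse until the interpreter's limit is hit if every sample keeps coming back invalid. This is a hypothetical sketch of that retry pattern, not the actual mmseg implementation (which may use a loop instead):

```python
import random

class RetryingDataset:
    """Hypothetical dataset showing how 'invalid sample -> retry' logic
    can exhaust the recursion limit when every sample is invalid."""

    def __init__(self, samples):
        self.samples = samples

    def prepare_train_img(self, idx):
        # Imagine the pipeline rejecting the sample, e.g. because the
        # results dict is missing keys or was never properly transformed.
        return None

    def __getitem__(self, idx):
        data = self.prepare_train_img(idx)
        if data is None:
            # Recursive retry: if all samples are invalid, this recurses
            # until "RecursionError: maximum recursion depth exceeded".
            return self[random.randint(0, len(self.samples) - 1)]
        return data

    def __len__(self):
        return len(self.samples)

# ds = RetryingDataset([{"img": None}] * 10)
# ds[0]  # -> RecursionError: maximum recursion depth exceeded
```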
Hope this helps someone.
No response for a long time.
Hi, I'm encountering RecursionError: maximum recursion depth exceeded while calling a Python object when training with a custom dataset.
I applied the suggested workaround of raising the recursion limit with sys.setrecursionlimit(), but then the error turns into a Segmentation Fault (core dumped).
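For reference, the workaround I tried is along these lines (a minimal sketch; the limit value is arbitrary). As far as I understand, raising the Python limit only lets the recursion run deeper until it overflows the native C stack, which is why the RecursionError turns into a segfault instead of going away:

```python
import sys

# Raise Python's recursion limit (the default is usually 1000). This does
# not fix the underlying deep/unbounded recursion; it only delays failure.
sys.setrecursionlimit(10000)

# If the recursion is effectively unbounded, the interpreter now recurses
# past what the OS thread stack can hold and the process crashes with
# "Segmentation fault (core dumped)" instead of raising RecursionError.
```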
Thanks.