Closed: saswat0 closed this issue 1 year ago
We implemented evaluation by first extracting features from all class templates (class images) and then using these features to run detection everywhere. My hypothesis is that your dataset has too many classes for all of their features to fit in GPU memory. I think you can do one of two things: 1) split the classes into several "class batches" such that each batch fits in GPU memory; 2) disable caching of the class features and recompute everything on the fly; however, this approach might slow down detection.
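For illustration, a minimal sketch of the first approach, assuming evaluation can be factored into a feature-extraction step and a detection step. `extract_class_features`, `detect_with_class_features`, and `merge_detections` are hypothetical placeholders for the corresponding stages of the pipeline, not functions from the OS2D codebase:

```python
import torch

def detect_in_class_batches(model, images, class_templates, classes_per_batch=500):
    """Run detection over the class list in memory-sized chunks.

    `extract_class_features`, `detect_with_class_features`, and
    `merge_detections` are hypothetical placeholders for the
    corresponding steps of the evaluation pipeline.
    """
    all_detections = []
    for start in range(0, len(class_templates), classes_per_batch):
        batch = class_templates[start:start + classes_per_batch]
        with torch.no_grad():  # evaluation only; do not keep autograd buffers
            class_features = extract_class_features(model, batch)
            all_detections.append(detect_with_class_features(model, images, class_features))
        del class_features
        torch.cuda.empty_cache()  # release cached blocks before the next class batch
    return merge_detections(all_detections)
```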
Yes, you're correct. I have a lot of classes (16k classes across 0.7M images). For the first approach you suggested, is there any provision in the code to do so? For the second approach, is setting cache_images to False all that's needed?
I'm afraid neither of these is supported in the code. cache_images seems to do something different. Probably the easiest thing to do is to split the data manually for the first approach. For the second approach, you'll need to change the iterators over the data.
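As a rough sketch of what changing the iterators for the second approach could look like, assuming the data loader can be made to yield the class templates relevant to each image batch (`extract_class_features` is the same hypothetical placeholder as above):

```python
import torch

def evaluate_without_feature_cache(model, data_loader):
    """Recompute class features per batch instead of caching all of them.

    Trades evaluation speed for GPU memory; `data_loader` is assumed to
    yield (images, class_templates) pairs for each batch.
    """
    results = []
    for images, class_templates in data_loader:
        with torch.no_grad():
            class_features = extract_class_features(model, class_templates)
            results.append(model(images, class_features))
        del class_features  # features are discarded after each batch
    return results
```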
Okay. I'll give it a shot. Thanks
I have a dataset with 228454 images. Every time I try to train OS2D on this dataset, CUDA runs out of memory.
I'm already using a small batch size (4) and have set cfg.eval.scales_of_image_pyramid to [0.5, 0.625, 1, 1.6], but the error still persists.
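For reference, a hedged sketch of the memory-saving knobs touched on in this thread; cfg.eval.scales_of_image_pyramid is the option quoted above, while `run_evaluation`, `model`, and `cfg` are hypothetical stand-ins assumed to be set up as in the OS2D examples:

```python
import torch

# Fewer pyramid scales means fewer feature maps resident on the GPU at once.
cfg.eval.scales_of_image_pyramid = [0.5, 1.0]

with torch.no_grad():           # avoid keeping autograd buffers at eval time
    run_evaluation(model, cfg)  # hypothetical stand-in for the actual eval entry point
torch.cuda.empty_cache()        # return cached blocks between evaluation runs
```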
Full trace of the log: