Are you using Microsoft Windows?
No, this is on Linux. During the training process, the epoch times have been very consistent. Ideally, I would like to export the validation results after the final epoch, but I have not found a way to do that in the documentation.
Hello: I'm not observing the memory leak issue that you're seeing. If there's a memory leak, perhaps it is related to a dependency, in which case, you might try upgrading all dependencies.
That said, predict_filename is a convenience method to easily make a prediction on a single file and is not recommended when making predictions on a large number of images. You should really do the predictions in batches using one of the other methods in predictor (e.g., predict_folder, predict_generator, or predict). You can control the batch size with the predictor.batch_size parameter.
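For instance, a minimal sketch of the batched approach (the folder path is a placeholder, it assumes batch_size can be set as an attribute, and the exact return format of predict_folder may vary across ktrain versions):

predictor.batch_size = 64  # larger batches amortize per-call overhead
# One call over the whole folder instead of one call per file:
results = predictor.predict_folder('/path/to/validation_images')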
However, if all you need is to save the validation scores for your image regression problem, you can always just do this:
learner.validate()
# [('mae', 5.971834)]
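And if you want those scores written to a file rather than printed, a short sketch (relying only on the list of (metric, value) tuples shown above):

import csv
results = learner.validate()  # e.g., [('mae', 5.971834)]
with open('val_metrics.csv', 'w', newline='') as f:
    csv.writer(f).writerows(results)  # one metric per row: mae,5.971834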
I'll close this issue for now, but feel free to respond to this thread if you have further issues or questions.
I have trained a resnet50 model and am attempting to make predictions on images whose file paths are derived from a data frame. After each prediction is made, the predicted value for each image is appended to a new column. I am attempting to do this using the apply function and am running into large memory leakage during this process. By testing various validation set sizes, I have found that roughly 1 GB of memory is consumed for every 1,000 images predicted. As a result, my computer runs out of memory when validating image sets containing ~50,000 files.
Below is the relevant code:
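In essence, something like this sketch, assuming a DataFrame df with a filepath column and an ImagePredictor named predictor (all names here are hypothetical):

# One predict_filename() call per row via DataFrame.apply:
df['predicted'] = df['filepath'].apply(lambda p: predictor.predict_filename(p))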
Do you have an idea as to what might be causing the memory leak?
Thanks