forestbat opened 1 year ago
try it out: torch.cuda.empty_cache()
It has no effect.
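For context, torch.cuda.empty_cache() alone often appears to do nothing because it only returns *cached* allocator blocks to the driver; memory that live Python references still point at is untouched. A minimal sketch of the usual cleanup order, using a dummy tensor as a stand-in for the model output:

```python
import gc
import torch

# Stand-in for a model or its output; in the real case these would be
# the objects created during inference.
x = torch.zeros(256, 256)
if torch.cuda.is_available():
    x = x.cuda()

del x                              # drop the last Python reference first
gc.collect()                       # let the garbage collector run
if torch.cuda.is_available():
    torch.cuda.empty_cache()       # now cached blocks go back to the driver
```

If references to the model or intermediate tensors are still held (e.g. by a notebook cell's output history), empty_cache() cannot free them.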
try halving the input image resolution:

import cv2

# img is the BGR image loaded earlier, e.g. via cv2.imread()
# Resize the image to a smaller resolution
scale_factor = 0.5
new_width = int(img.shape[1] * scale_factor)
new_height = int(img.shape[0] * scale_factor)
image = cv2.resize(img, (new_width, new_height), interpolation=cv2.INTER_AREA)
# Convert the image from BGR to RGB color space
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
and/or running on the CPU if you have more than 12 GB of RAM; when I just tested it on the CPU to check the runtime, it consumed roughly 12.5 GB on average.
What I want to discuss is not "the program costs too much memory" but "the program can't release memory completely" — still, your advice is useful.
I'm currently encountering this problem as well. Do you have any good solutions?
Yeah, same problem. Please help to solve this issue!
The solution is to use a multiprocessing approach: move the image processing task to a separate process. Doing so isolates the task's memory usage from the main process and ensures that any memory consumed during processing is released back to the system when the process terminates, effectively preventing the memory leak.
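A minimal sketch of that approach, with a dummy worker standing in for the real image processing (the function name and workload here are placeholders, not the project's actual code):

```python
import multiprocessing as mp

def heavy_task(n, q):
    # Simulate a memory-hungry step; in the real case this would load
    # the model, process the image, and put the result on the queue.
    data = list(range(n))
    q.put(sum(data))

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=heavy_task, args=(1000, q))
    p.start()
    result = q.get()   # fetch the result before join() to avoid a queue deadlock
    p.join()           # all memory the worker allocated is released here
    print(result)
```

Because the allocation happens entirely inside the child process, the OS reclaims it when the process exits, regardless of what the allocator in the main process does.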
I ran this code in JupyterLab:
Before I run it, memory usage is about 200 MB; when it completes, it is about 3.4 GB, and even if I close the notebook or re-run the program, that memory is not released. How can I solve this problem?