likhita-8091 closed this issue 2 months ago
If you cannot reproduce this problem outside of Docker, don't you think Docker is the problem then?
I have tried many times. The same code runs on my own Windows system without any problems, but the problem appears in Docker. I am not an expert in Docker or Linux, so I still hope you can investigate. The key point is that if I don't use your library and just start the function in a plain thread, the problem does not occur; it only appears when the function is executed through your library. So it may be due to:
- the Python version
- Docker
- the Linux system
- your library
So if you run it on Linux without Docker, can you still see the problem? I'm not going to investigate this unless there is sufficient cause to believe that this is specifically caused by APScheduler, so it falls to you to eliminate other causes of the problem.
It has been eliminated. I tried starting the cal function directly with Python's threading module: there was no memory growth, and the memory was reclaimed immediately. So you should also try it yourself, adding the task through the library.
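The control experiment described above might look like the following sketch. Here `cal` is a placeholder for the reporter's actual task function (the real one is not shown in this thread), simulated with a large throwaway allocation, and `tracemalloc` stands in for watching Docker Stats:

```python
# Control experiment sketch: run the job function in a plain thread,
# without APScheduler, and check memory afterwards.
# "cal" is a hypothetical stand-in for the reporter's real task.
import threading
import tracemalloc


def cal():
    buf = bytearray(100 * 1024 * 1024)  # simulate a ~100 MB working set
    del buf  # the buffer should be reclaimed as soon as the function returns


tracemalloc.start()
t = threading.Thread(target=cal)
t.start()
t.join()

current, peak = tracemalloc.get_traced_memory()
print(f"after join: current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
```

If the library were at fault, the claim is that the same function scheduled through APScheduler would show `current` staying high instead of dropping back after the thread finishes.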
Ok, so this can be reproduced also without Flask?
Yes, I only used Flask as a trigger for one of my tasks
Ok, so could I trouble you to construct a minimal example (that is, one that doesn't involve any 3rd-party libraries that are not necessary to reproduce the problem) that just uses APScheduler to demonstrate the issue?
Hello, I'm sure your library has an effect. If I use your scheduler's add_job and the job processes 10 photos, memory grows to 500-800 MB. It's astonishing: why isn't it released? I waited all night and it was never released.
What are you talking about? Why not release what? What did you wait all night on?
My goodness, there's a problem with your library; why not solve it? As the author, you do nothing. So lazy? Everyone says the memory is not released and that there is a memory leak.
My goodness, aren't you feeling entitled? I've asked you to provide a minimum workable example, but you've done nothing. So lazy? Do you think I owe you something?
I am afraid this problem does exist. I have tried many methods to reclaim memory explicitly, but memory is still held after one job instance has executed, and it accumulates further over several instances. However, if I run the task just once without APScheduler, the memory used by that Docker container drops at once. I really hope this can be investigated. Thanks a lot, as there is really no better alternative package to use.
Maybe open a separate ticket where you provide a method to reproduce this (preferably without Docker) and as many details as you can. This particular issue is a bit loaded with excess negativity.
Things to check first
[X] I have checked that my issue does not already have a solution in the FAQ
[X] I have searched the existing issues and didn't find my bug already reported there
[X] I have checked that my bug is still present in the latest release
Version
3.9
What happened?
Please try executing this code inside a Docker container and check Docker Stats: memory increases significantly. A dynamic import is used here, and the second job lives in a separate custom.py file.
This is just a simple example. My real code calls gRPC, uses OpenCV to save images, and uses Open3D to save point clouds. After performing several photo-capture tasks, memory reaches 1 GB and is never reclaimed.
How can we reproduce the bug?
Please try executing this code inside a Docker container and check Docker Stats: memory increases significantly. A dynamic import is used here, and the second job lives in a separate custom.py file. @agronholm
custom.py