nateemma / strategies

Custom trading strategies using the freqtrade framework

Memory usage issue #11

Open ExaByt3s opened 1 year ago

ExaByt3s commented 1 year ago

Hello, this time I'm seeing a problem with memory.

In the freqtrade Discord community some users told me about a memory leak problem. I didn't pay attention at first, since I never really kept the bot on for a long time, but then I noticed that the bot consumed 60% of RAM within 1 hour, and once it reaches 100% the process stops.

I was using an old fork, but noticed a memory profiler was added in the last update, so I assume you must already know about this problem. On each iteration the bot's memory only increases; it is never reduced or released. But I can't find where the problem is.

I'm still trying to track it down; so far I've only managed to keep the bot running for 8 hours. The screenshots below show memory usage with the bot running for 1 hour, 6 hours, and 8 hours:

(screenshot: bot running for 1h)

(screenshot: bot running for 6h)

(screenshot: bot running for 8h)
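A leak like this can often be narrowed down to a specific allocation site with Python's built-in tracemalloc module: take a snapshot, run some bot iterations, take another, and diff them. A minimal sketch (the bytearray loop here just simulates a leak for illustration):

```python
import tracemalloc

def top_growth(snap_before, snap_after, limit=5):
    """Return the allocation sites that grew the most between two snapshots."""
    return snap_after.compare_to(snap_before, 'lineno')[:limit]

tracemalloc.start()
before = tracemalloc.take_snapshot()

# ... in the real bot, run one or more iterations here ...
leak = [bytearray(1024) for _ in range(1000)]  # simulated leak (~1 MB)

after = tracemalloc.take_snapshot()
for stat in top_growth(before, after):
    print(stat)  # file:line, size diff, allocation count
tracemalloc.stop()
```

Note that tracemalloc only sees allocations made through Python's allocator, so memory held inside tensorflow's C++ runtime will not show up here; but it quickly rules the Python side in or out.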

tomjrtsmith commented 1 year ago

I went a long way down the track with this and couldn't find a solution either. Phil says it doesn't happen on his M1 MacBook; I was using an Oracle Linux VM and spent a fortnight trying to debug it.

I tried your tensorflow CPU fix and then changed the OS type. In the end I set the bot to restart every 4 hrs in systemd, mmm

It's a tensorflow issue, but I'm not sure how to debug it (I ended up reading the Python manual for good measure and still couldn't figure it out).

bane5000 commented 1 year ago

Lol just create a task scheduler that kills & restarts the process when memory utilization is >=90% - as long as you're saving to the same database it should pick right back up
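For reference, a rough sketch of such a watchdog in Python. This is Linux-only (it parses VmRSS from /proc), and the command, limit, and poll interval are placeholders rather than the actual bot setup:

```python
import subprocess
import time

def rss_kb(pid: int) -> int:
    """Resident set size of a process in kB (Linux: parses /proc/<pid>/status)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def over_threshold(rss_now_kb: int, limit_kb: int, pct: float = 0.9) -> bool:
    """True when the process uses at least `pct` of the allowed memory."""
    return rss_now_kb >= pct * limit_kb

def watchdog(cmd: list, limit_kb: int, interval_s: int = 60) -> None:
    """Restart `cmd` whenever it exits or crosses the memory threshold."""
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(interval_s)
        if proc.poll() is not None or over_threshold(rss_kb(proc.pid), limit_kb):
            proc.terminate()
            proc.wait(timeout=30)
            proc = subprocess.Popen(cmd)  # freqtrade resumes from its database
```

The same effect can be had with `RuntimeMaxSec=`/`Restart=` in a systemd unit, which is what tomjrtsmith describes below.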

tomjrtsmith commented 1 year ago

I did that and it killed the strat, wouldn't work


nateemma commented 1 year ago

Thanks for looking into this - I ran strategies from every family on my (10-year-old) iMac for over a week with no increase in memory usage, so it does appear to be ubuntu-specific.

As an aside, I'm in the process of replacing all of the NNBC* strategies, since they don't seem to work well. The replacements will be NNTC* (Neural Network Trinary Classifier), which will use a single neural network model to identify buy/sell/nothing instead of the separate buy and sell models in NNBC. Not sure why, but it appears to be faster and more accurate.

Thanks,

Phil


ExaByt3s commented 1 year ago

Hello @nateemma, about the memory consumption: as @tomjrtsmith suggested, it appears to be related to tensorflow. I have looked at several issues in the tensorflow repository:

all issues: https://github.com/search?q=org%3Atensorflow+Memory+leak&type=issues, tensorflow/tensorflow#58676, tensorflow/tensorflow#50765

For the moment I have only decided to change the tensorflow version and see whether that works. If the memory problem is not resolved, I'll have to look at the classifiers and try to free the memory using tf.keras.backend.clear_session() or something like that.
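If it comes to that, one common pattern is to clear the Keras session and force a garbage-collection pass every N iterations rather than on every call. A sketch (the interval is arbitrary, tensorflow is imported lazily so the snippet also runs without it, and note that clear_session() invalidates previously built models, so they need to be reloaded or rebuilt afterwards):

```python
import gc

class PeriodicCleanup:
    """Run tf.keras.backend.clear_session() + gc.collect() every `every` calls."""

    def __init__(self, every: int = 100):
        self.every = every
        self.count = 0

    def tick(self) -> bool:
        """Call once per bot iteration; returns True when a cleanup ran."""
        self.count += 1
        if self.count % self.every != 0:
            return False
        try:
            import tensorflow as tf
            tf.keras.backend.clear_session()  # drop accumulated graph state
        except ImportError:
            pass  # tensorflow not installed; still collect Python garbage
        gc.collect()
        return True

cleanup = PeriodicCleanup(every=100)
# inside the bot's per-candle loop:
# cleanup.tick()
```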

On the other hand, I am glad that you are in constant development. I would like to try the new NNTC* strategies, but I would also like to keep NNBC. Will the first changes you make remove NNBC completely? I would like to keep that class with the two models and continue testing it.

I'm just getting used to Python and machine learning, so I'm learning along the way and I'm a little slow.

nateemma commented 1 year ago

I'm merging in the tensorflow 2.10 workaround (from issue 58676) to ClassifierKeras.py - however, since I don't see the problem in my environments, I can't tell whether it works.

I'm currently having issues pushing to github, so if you want to try it independently, add this code to either ClassifierKeras.py or your strategy:

import os

# workaround for memory leak in tensorflow 2.10
os.environ['TF_RUN_EAGER_OP_AS_FUNCTION'] = '0'

Regarding NNBC, I'll leave it there for a while
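One caveat worth noting with this workaround: the usual guidance in the tensorflow issue threads is that the variable must be set before tensorflow is first imported anywhere in the process, since TF reads it during initialization. A minimal sketch of the required ordering:

```python
import os

# Must run before the first `import tensorflow` in the process;
# once TF has initialized, changing the variable has no effect.
os.environ['TF_RUN_EAGER_OP_AS_FUNCTION'] = '0'

# import tensorflow as tf  # only import TF after the variable is set
```

So if the strategy file is imported after freqtrade has already pulled in tensorflow elsewhere, placing the assignment there may be too late.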

Thanks,

Phil


ExaByt3s commented 1 year ago

Thanks Phil. Indeed, it's a problem that usually happens on ubuntu with tensorflow. I found a possible solution; I'll leave a test running over the next few days to verify that it doesn't interfere with the predictions.

nateemma commented 1 year ago

Thanks - if it works, let me know and I'll merge it in. Also, try the NNTC versions; memory usage should be less than half of NNBC's, and they seem to be working better anyway.

Thanks,

Phil


ExaByt3s commented 1 year ago

In the end it was something silly; this way worked for me.

I use the CPU:
pip install tensorflow-cpu==2.10.0
pip install torch==1.12.1
pip install pytorch-lightning==1.8.5
pip install darts==0.21.0
pip install scikit-learn==1.1.3
pip install prettytable==3.6.0
pip install finta==1.3

Maybe we could add an option to the constructor to specify whether to use the GPU or the CPU, and specify the rest of the parameters there (initial memory size, etc.). It's just a thought.
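As a sketch of that idea (hypothetical parameter names, not the actual ClassifierKeras API), the constructor could take the device and memory settings and build the session from them; tensorflow is only imported when the session is configured, so the class itself is importable without it:

```python
class ClassifierKerasSketch:
    """Hypothetical sketch: device/memory options passed via the constructor."""

    def __init__(self, use_gpu: bool = False, memory_fraction: float = 0.4):
        self.use_gpu = use_gpu
        self.memory_fraction = memory_fraction

    def configure_session(self):
        """Build and install a TF1-compat session from the stored options."""
        import tensorflow as tf
        n_gpu = 1 if self.use_gpu else 0
        config = tf.compat.v1.ConfigProto(device_count={'GPU': n_gpu})
        config.gpu_options.allow_growth = True
        config.gpu_options.per_process_gpu_memory_fraction = self.memory_fraction
        sess = tf.compat.v1.Session(config=config)
        tf.compat.v1.keras.backend.set_session(sess)

# usage sketch:
# clf = ClassifierKerasSketch(use_gpu=False, memory_fraction=0.4)
# clf.configure_session()
```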

ClassifierKeras.py

import tensorflow as tf

# workaround for memory leak in tensorflow (needs `import os` if enabled)
# os.environ['TF_RUN_EAGER_OP_AS_FUNCTION'] = '0'

memoria = 0.4  # fraction of GPU memory to allocate
config = tf.compat.v1.ConfigProto(device_count={'GPU': 0})  # force CPU-only
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = memoria
sess = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(sess)

(screenshot)

ExaByt3s commented 1 year ago

> Thanks, if it works let me know and I'll merge it in. Also, try the NNTC versions, memory usage should be less than half of NNBC and they seem to be working better anyway

I haven't tried NNTC yet, but I will in a couple of days. In the meantime I have some questions I wanted to ask you. One is about the possibility of extending the sample data with information from informative pairs: is there a way to calculate the informative pairs and, in dataframePopulator.addindicators(), select the sampled indicators for the training, or something like that? I still have little skill with Python and little knowledge about ML, so I'm learning on the way; I'd like to find a proper way to do it.
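For what it's worth, freqtrade itself ships a merge_informative_pair() helper for this kind of merge. The core idea (shift the higher-timeframe candles to their close time to avoid lookahead, then merge them onto the base timeframe) can be sketched in plain pandas; the column names and timeframes here are illustrative, not the strategy's actual schema:

```python
import pandas as pd

def merge_informative(base: pd.DataFrame, inf: pd.DataFrame,
                      timeframe: str = "1h", suffix: str = "_1h") -> pd.DataFrame:
    """Merge higher-timeframe candles into a base-timeframe dataframe.

    Both frames need a sorted 'date' column (freqtrade dataframes have one).
    """
    inf = inf.copy()
    # shift informative timestamps to candle close: a 1h candle opened at
    # 10:00 is only fully known at 11:00, so merging at its open time
    # would leak future information into the base rows
    inf["date"] = inf["date"] + pd.Timedelta(timeframe)
    inf = inf.rename(columns={c: c + suffix for c in inf.columns if c != "date"})
    # backward asof-merge: each base row gets the latest *closed* 1h candle
    return pd.merge_asof(base, inf, on="date")

# usage sketch: 5m base candles + 1h informative candles
base = pd.DataFrame({"date": pd.date_range("2023-01-01", periods=24, freq="5min"),
                     "close": range(24)})
inf = pd.DataFrame({"date": pd.date_range("2023-01-01", periods=2, freq="1h"),
                    "close": [100.0, 200.0]})
merged = merge_informative(base, inf)
```

Indicators computed on the informative pair can then be added as extra columns before the merge, which is roughly what merge_informative_pair does for you inside freqtrade.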