LUCIT-Systems-and-Development / unicorn-binance-websocket-api

A Python SDK by LUCIT to use the Binance Websocket APIs (com+testnet, com-margin+testnet, com-isolated_margin+testnet, com-futures+testnet, com-coin_futures, us, tr, dex/chain+testnet) in a simple, fast, flexible, robust and fully-featured way.
https://unicorn-binance-websocket-api.docs.lucit.tech/

High CPU usage #132

Closed antebw closed 3 years ago

antebw commented 3 years ago

Hello, how do I avoid this printing all the time?

WARNING:root:BinanceWebSocketApiManager._frequent_checks() - High CPU usage since 5 seconds: 100.0

thank you

oliver-zehentleitner commented 3 years ago

If you have a lot of them, then usually the lib is going to crash after a while.

A. Power up your system
B. Stream less data

So far there is no flag to disable it.

https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/issues/124 https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/issues/125
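
Since the hint goes through Python's standard `logging` module (the output starts with `WARNING:root:`), a generic logging filter can mute just that message. A minimal sketch; note that muting the hint does not remove the CPU load it warns about:

```python
import logging

class DropHighCpuHint(logging.Filter):
    # Let every log record through except the "High CPU usage" hint.
    def filter(self, record: logging.LogRecord) -> bool:
        return "High CPU usage" not in record.getMessage()

# The hint is emitted via the root logger ("WARNING:root:..."),
# so the filter is attached there.
logging.getLogger().addFilter(DropHighCpuHint())
```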

antebw commented 3 years ago

I don't have many streams, just 3 of them. Is this a good solution to get data from the streams in real time?

```python
import json

import config  # the user's own settings module (KEY, SECRET, pair)
from unicorn_binance_websocket_api import BinanceWebSocketApiManager

bwam = BinanceWebSocketApiManager(exchange="binance.com-futures")
bwam.create_stream('arr', '!userData', api_key=config.KEY, api_secret=config.SECRET)
bwam.create_stream(['kline_30m'], [config.pair.lower()])
bwam.create_stream(['bookTicker'], [config.pair.lower()], None, 'book')

stream = bwam.pop_stream_data_from_stream_buffer()
book = bwam.pop_stream_data_from_stream_buffer('book')

if book:
    data = json.loads(book).get('data')
    if data:
        bid, ask = float(data['b']), float(data['a'])

if stream:
    data = json.loads(stream).get('data')
    order = json.loads(stream).get('o')
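
A note relevant to the warning itself: `pop_stream_data_from_stream_buffer()` returns `False` when the buffer is empty, so polling it in a tight loop without a pause keeps a CPU core spinning. A minimal sketch of a polling loop that sleeps briefly when both buffers are empty; the 0.01 s pause is an arbitrary value for illustration:

```python
import time

while True:
    book = bwam.pop_stream_data_from_stream_buffer('book')
    stream = bwam.pop_stream_data_from_stream_buffer()
    if book is False and stream is False:
        # Nothing buffered right now: yield the CPU instead of spinning.
        time.sleep(0.01)
        continue
    # ... process `book` and `stream` as in the snippet above ...
```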

oliver-zehentleitner commented 3 years ago

What are your system specs? Could you upload the full logfile, please?

You are doing nothing wrong! `lower()` is not needed, the lib lowercases and uppercases everything itself. Here is an example file very similar to your code snippet: https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/blob/master/example_easy_migration_from_python-binance.py

But you could save an additional step! The lib can convert the received messages into Python dicts and UnicornFy dicts for you; just take a look at the `output_default` and `output` parameters in this example: https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/blob/master/example_kline_1m_with_unicorn_fy.py
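
A minimal sketch of those two parameters (import path per the current package; stream and market names here are arbitrary):

```python
from unicorn_binance_websocket_api import BinanceWebSocketApiManager

# `output_default` sets the format for all streams of this instance ...
ubwa = BinanceWebSocketApiManager(exchange="binance.com-futures",
                                  output_default="UnicornFy")

# ... and `output` overrides it for a single stream:
ubwa.create_stream(['kline_1m'], ['btcusdt'], output="dict")

# Messages popped from the buffer are then already Python dicts,
# so the json.loads() step above is not needed.
msg = ubwa.pop_stream_data_from_stream_buffer()
```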

antebw commented 3 years ago

Thanks for the info, mate! Currently I am using AWS: 4 GB RAM, 2 vCPUs, 80 GB SSD.

oliver-zehentleitner commented 3 years ago

Strange, I am streaming this too, on a 2 vCPU server with 4 GB RAM and 20 GB SSD: https://ubwa-demo.lucit.tech

antebw commented 3 years ago

Which server do you use?

oliver-zehentleitner commented 3 years ago

https://www.hetzner.com/cloud the CX21

I have signed up for the Hetzner Referral Program, so I can offer a 20 EUR starting credit and also get a small benefit if the customer stays.

antebw commented 3 years ago

Mine is placed in Tokyo on AWS; I see the one above only has locations in Germany and Finland. Is there any decrease in performance over the network if the server is farther from the Binance exchange?

oliver-zehentleitner commented 3 years ago

I believe the connection is very good, and even a few milliseconds are not relevant for me!

antebw commented 3 years ago

Thank you for the help, mate!

oliver-zehentleitner commented 3 years ago

You're welcome!

TKR-US commented 2 years ago

Hello,

I've had this problem 3 times in 48 hours, after no problems for many months...

Problem for me:

thanks a lot!

oliver-zehentleitner commented 2 years ago

This is not an error, it's a hint from the lib... look in your task manager. It is not a problem for a short time span, but after a longer time the lib will crash because of backlogs in the underlying protocols and libraries. But those are separate errors. And this will also affect other things on your system (database, ...).

TKR-US commented 2 years ago

OK. On the server this is the only thing running: my script (one process = one pair, 1/4 of 1400 pairs, so 350 processes), and all processes are completely renewed every 8 hours.

The first line I see in syslog is: Feb 24 00:23:12 checktrades4 kernel: [63757.814507] checktrade-unic invoked oom-killer: gfp_mask=0x40cc0(GFP_KERNEL|__GFP_COMP), order=0, oom_score_adj=0

So what would be a solution?

oliver-zehentleitner commented 2 years ago

Are you sure you use subprocesses, not threading?

How do you renew?

All your suggestions are possible solutions. I don't know your code, and in the end the design of your program has to make sense, but this can be done in a couple of variants and there is no "one size fits all" solution.

I would analyze what is causing the high load; then it's possible to find a solution.

Processes are good and bad; 350 sounds like a lot to me. How much data are you passing through? I think one process per CPU core, spawning threads out of each subprocess, is more efficient.
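
A minimal sketch of that layout, assuming Python's standard `multiprocessing` and `threading` modules; the pair list, the per-pair stream buffers and the worker bodies are placeholders, not a pattern prescribed by the lib:

```python
import multiprocessing
import threading
import time

from unicorn_binance_websocket_api import BinanceWebSocketApiManager

def consume(ubwa, buffer_name):
    # One lightweight thread per stream buffer inside the process.
    while True:
        msg = ubwa.pop_stream_data_from_stream_buffer(buffer_name)
        if msg is False:
            time.sleep(0.01)  # buffer empty, yield the CPU
            continue
        ...  # store the trade

def worker(pairs):
    # One UBWA instance per process, many pairs per instance.
    ubwa = BinanceWebSocketApiManager(exchange="binance.com", output_default="UnicornFy")
    for pair in pairs:
        ubwa.create_stream(['trade'], [pair], stream_buffer_name=pair)
        threading.Thread(target=consume, args=(ubwa, pair), daemon=True).start()
    while True:
        time.sleep(60)  # keep the process (and its daemon threads) alive

if __name__ == "__main__":
    pairs = ["btcusdt", "ethusdt"]  # placeholder; the real list has ~1400 pairs
    cores = multiprocessing.cpu_count()
    for chunk in (pairs[i::cores] for i in range(cores)):
        if chunk:
            multiprocessing.Process(target=worker, args=(chunk,)).start()
```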

TKR-US commented 2 years ago

Hello, I use one process per trade pair. An extract of my code:

```python
trade_stream_id = binance_websocket_api_manager.create_stream(['trade'], ["btcusdt"], output="UnicornFy")

while True:
    msg = binance_websocket_api_manager.pop_stream_data_from_stream_buffer()
```

To renew, I launch a new process (the same program) and synchronize data between the two processes through a mechanism of mine. When the old process is sure the new process has the same information, the old process stops.

I create one process per pair to be sure to get all trade information for each pair. Do you mean it's better to have one process that catches 10 or more pairs (what maximum?) and will get the same result as 10 separate processes?

Of course, this method consumes a lot of memory. Just for this task I use a dedicated Proxmox server with 4 VMs; each VM has 30 GB RAM and 8 CPUs. Each VM consumes 12 GB of memory at run time, and up to 20/25 GB while renewing processes (the time to sync the new process and kill the old one).

oliver-zehentleitner commented 2 years ago

More important is usually what happens in the code after the messages are received.

To me, the approach sounds logical and comprehensible, but excessive in terms of resources.

You can achieve the same effect "cheaper":

If you start a single UBWA instance, use one trade stream, and simply restart it with `replace_stream()`, you have mapped the same logic as far as receiving and restarting the streams is concerned, but in a much more resource-efficient way.
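
A minimal sketch of that single-instance approach, assuming `replace_stream()` takes the old stream id plus the new channels and markets and returns the new stream id (please check the signature against the docs); the 8-hour interval mirrors the renewal cycle described above:

```python
import time

from unicorn_binance_websocket_api import BinanceWebSocketApiManager

ubwa = BinanceWebSocketApiManager(exchange="binance.com")
stream_id = ubwa.create_stream(['trade'], ["btcusdt"], output="UnicornFy")

# Consume ubwa.pop_stream_data_from_stream_buffer() in a separate thread,
# then renew the stream in place instead of respawning a whole process:
while True:
    time.sleep(8 * 60 * 60)  # the 8-hour cycle described above
    stream_id = ubwa.replace_stream(stream_id, ['trade'], ["btcusdt"])
```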

But I don't know what you do with the data later and if this is done in the same process at all.

Given the amount of markets you want to monitor, the best solution for you is probably in the middle between your current approach and my suggestion involving your cluster.

TKR-US commented 2 years ago

Hello,

The processes just get trades and store them in a big database (calculations on the trades are done by another machine / other processes). I will change my data collection method: instead of 1 process = 1 pair, I'm going to do 1 process = 15 pairs. This will reduce the amount of memory enormously; I just hope that I will still be able to receive all the data in real time.

With `replace_stream()`, I lose some (a little) data during the replacement, no?