Closed — peebeejay closed this issue 5 months ago
Hi,

Based on what I can see in the log, as to what could be the cause — well, I don't know much detail, but I would start looking at the following (quoted from the ib_insync documentation):
The advantage of reqRealTimeBars is that it behaves more robust when the connection to the IB server farms is interrupted. After the connection is restored, the bars from during the network outage will be backfilled and the live bars will resume.
reqHistoricalData + keepUpToDate will, at the moment of writing, leave the whole API inoperable after a network interruption.
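A minimal sketch of the reqRealTimeBars approach described above, assuming ib_insync and an IB Gateway listening on 127.0.0.1:4001 (the contract, host, port, and clientId below are placeholder values, not taken from this issue):

```python
# Sketch: subscribe to 5-second real-time bars via ib_insync.
# Bars missed during a network outage are backfilled into the bar
# list once the connection is restored, and live bars then resume.
try:
    from ib_insync import IB, Forex
except ImportError:          # ib_insync not installed; the callback
    IB = Forex = None        # below is still testable on its own

closes = []

def on_bar_update(bars, has_new_bar):
    # ib_insync fires this whenever the bar list updates; record the
    # close of each newly completed 5-second bar.
    if has_new_bar:
        closes.append(bars[-1].close)

def stream_bars():
    # Not called here: requires a live gateway connection.
    ib = IB()
    ib.connect("127.0.0.1", 4001, clientId=1)
    bars = ib.reqRealTimeBars(Forex("EURUSD"), 5, "MIDPOINT", useRTH=False)
    bars.updateEvent += on_bar_update
    ib.sleep(30)   # ib.sleep keeps the event loop spinning, unlike time.sleep
    ib.disconnect()
```

The callback-plus-subscription shape is what lets ib_insync deliver the backfilled bars through the same code path as the live ones.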
The One Rule:
While some of the request methods are blocking from the perspective of the user, the framework will still keep spinning in the background and handle all messages received from TWS/IBG. It is important to not block the framework from doing its work. If, for example, the user code spends much time in a calculation, or uses time.sleep() with a long delay, the framework will stop spinning, messages accumulate and things may go awry.
The one rule when working with the IB class is therefore that user code may not block for too long.
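The effect of breaking this rule can be shown with a pure-asyncio illustration (this is my own sketch, not ib_insync code): a long time.sleep() starves the event loop so no messages get processed, while an awaited sleep yields control and lets the "framework" keep spinning.

```python
import asyncio
import time

async def message_pump(counter):
    # Stand-in for the framework's message handling: ticks every 10 ms
    # whenever the event loop is allowed to run.
    while True:
        counter["ticks"] += 1
        await asyncio.sleep(0.01)

async def blocking_user_code():
    time.sleep(0.5)           # BAD: blocks the loop; the pump cannot tick

async def cooperative_user_code():
    await asyncio.sleep(0.5)  # GOOD: yields; the pump keeps ticking

async def run_with(user_code):
    counter = {"ticks": 0}
    pump = asyncio.ensure_future(message_pump(counter))
    await user_code()
    pump.cancel()
    return counter["ticks"]

if __name__ == "__main__":
    print(asyncio.run(run_with(blocking_user_code)))     # ~0 ticks
    print(asyncio.run(run_with(cooperative_user_code)))  # dozens of ticks
```

In ib_insync's synchronous API the equivalent of the awaited sleep is ib.sleep(), which is why the docs recommend it over time.sleep() in user code.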
Hope it helps.
If socat is working, then the issue is on 127.0.0.1:4001, which is the IB Gateway port listening on localhost. So IB Gateway is not listening. Why? I don't know; there is nothing in the log to diagnose that. The data connection between IB Gateway and IBKR seems to be OK — it would show up in the logs if it weren't.
Hm, from your deductions it seems that some sort of port-related misbehavior is occurring on the part of IB gateway after the nightly restart. I'm going to take a deeper look at this today & see if I can replicate locally during the day.
How are you doing the heartbeat? Are you requesting bars periodically?
I've just been using the python code in a literal sense that I posted above lol. In the past, the function called reqCurrentTime, but I felt that for now, I can rely on ib_insync to keep isConnected in sync.
Are you using async?
Currently not using async. Restarting my trading system doesn't fix the issue, so from that I infer that it's likely not due to an async blocking issue in ib_insync. 🤔
Ty for the response though, I really appreciate it. I'll report back with any interesting results as they come.
I'm removing the "bug" label, as so far we can't say this is caused by the container code base. If your investigation shows that it is, please feel free to add it again.
Describe the bug
Once again, thank you for all the fantastic work on this docker image!
I have a container running the ib-gateway-docker image that has been running for a few days. The daily restart occurs at 1AM, and last night, around 1 hour after the restart, a set of errors began to appear which I can't seem to properly understand. Essentially, what I'd like to figure out is the cause of the issue, and whether the issue is solvable here at the container level, or somewhere deeper in the stack, such as with IBC. Any help would be much appreciated!

Is the issue caused by not waiting for the

Forking :::4001 onto 0.0.0.0:4003 > trading mode live

line to appear?

Ostensibly, the error message was triggered by a 1-min heartbeat related request that I use for uptime monitoring. The python on the client side that makes this request looks something like this:
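(The original snippet is not reproduced in this thread excerpt. A hypothetical reconstruction of such a heartbeat, assuming ib_insync and the reqCurrentTime/isConnected approach mentioned later in the thread — the function names, port, and timing values below are my own, not the issue author's:)

```python
import datetime as dt

def is_stale(last_ok, now, max_age_s=120):
    # Pure helper: the connection is considered unhealthy if the last
    # successful heartbeat is older than max_age_s seconds.
    return (now - last_ok).total_seconds() > max_age_s

def heartbeat(ib):
    # Assumes `ib` is an ib_insync.IB instance; isConnected() is a cheap
    # local check, reqCurrentTime() forces a round-trip to the gateway.
    if not ib.isConnected():
        return False
    ib.reqCurrentTime()
    return True

def monitor(ib):
    # Not called here: requires a live gateway connection.
    last_ok = dt.datetime.now()
    while True:
        if heartbeat(ib):
            last_ok = dt.datetime.now()
        if is_stale(last_ok, dt.datetime.now()):
            print("heartbeat stale; connection to IB Gateway looks unhealthy")
        ib.sleep(60)   # ib.sleep, not time.sleep, to keep the loop spinning
```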
The logs from the docker container look something like this:
To Reproduce
docker run command: docker compose up

docker-compose.yml (as rendered by docker compose config):
Expected
I expect these errors not to occur and the connection to be more stable. I would also expect the logs to say more about this particular issue.
Container logs
See above
Versions
Please complete the following information:
OS: Ubuntu 20.04.6 LTS
Docker: Docker version 24.0.5, build 24.0.5-0ubuntu1~20.04.1
Image tag (docker image inspect ghcr.io/gnzsnz/ib-gateway:tag): stable
Image digest (docker images --digests): sha256:5b45bc84ad3e2ff363723592cb5df118d17ace67dcb676ffc6a3a9405da08968