erdewit / ib_insync

Python sync/async framework for Interactive Brokers API
BSD 2-Clause "Simplified" License

Connection timeout to live gateway 1023+ while using 2FA #638

Closed · nemozny closed this 1 year ago

nemozny commented 1 year ago

Update before I actually posted this: the connection works with the stable gateway 1019, but not with any of the "latest" versions listed below, so I have updated the title. I also needed to pass clientId=0 (or try removing it):

from ib_insync import *
util.logToConsole('DEBUG')
ib = IB()
ib.connect("127.0.0.1", port=7777, clientId=0)

1019 also produced the "This event loop is already running" error, but managed to connect anyway. I can reliably reproduce the behavior below on 1023, 1024 and 1025.
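
For a plain script (not a notebook), a minimal sketch using the async API directly, which sidesteps the nested-event-loop problem entirely; the port and clientId values are the ones from the snippet above:

import asyncio
from ib_insync import IB

async def main():
    ib = IB()
    # connectAsync() is the coroutine behind connect(); awaiting it in
    # our own event loop avoids "This event loop is already running".
    await ib.connectAsync('127.0.0.1', port=7777, clientId=0)
    print('connected:', ib.isConnected())
    ib.disconnect()

asyncio.run(main())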

------ Original text follows

I have been using ib_insync with the paper gateway for a few years, pulling live market data through the paper account, with no issues whatsoever. Now I want to hook ib_insync up to my live gateway to do some position monitoring, but I am constantly getting timeout errors.

I run the gateway through IBC: I fire it up, enter my 2FA code, and then try to connect via Python.

I didn't want to comment on your earlier thread https://github.com/erdewit/ib_insync/issues/386 , but I do have an unused "sister" account (not a sub-account). I created it a long time ago and have never used it.

I am on ib-insync==0.9.86, running Debian on a Raspberry Pi, but I have the same issue on regular Intel hardware too. I have tried gateway 1023, 1024 and now 1025. The API is not read-only, updates are downloaded on connect, etc.

2023-09-11 14:46:50,309 ib_insync.client INFO Connecting to 127.0.0.1:7777 with clientId 1...
2023-09-11 14:46:50,322 ib_insync.client INFO Connected
2023-09-11 14:46:50,498 ib_insync.client DEBUG <<< 176,20230911 14:46:49 GMT
2023-09-11 14:46:50,503 ib_insync.client DEBUG >>> 71,2,1,
2023-09-11 14:46:50,506 ib_insync.client INFO Logged on to server version 176
2023-09-11 14:46:50,565 ib_insync.client DEBUG <<< 15,1,U700000
2023-09-11 14:46:50,608 ib_insync.client DEBUG <<< 9,1,1
2023-09-11 14:46:50,613 ib_insync.client DEBUG <<< 4,2,-1,2104,Market data farm connection is OK:usfarm,
2023-09-11 14:46:50,617 ib_insync.wrapper INFO Warning 2104, reqId -1: Market data farm connection is OK:usfarm
2023-09-11 14:46:50,619 ib_insync.client DEBUG <<< 4,2,-1,2106,HMDS data farm connection is OK:ushmds,
2023-09-11 14:46:50,623 ib_insync.wrapper INFO Warning 2106, reqId -1: HMDS data farm connection is OK:ushmds
2023-09-11 14:46:50,626 ib_insync.client DEBUG <<< 4,2,-1,2158,Sec-def data farm connection is OK:secdefnj,
2023-09-11 14:46:50,629 ib_insync.wrapper INFO Warning 2158, reqId -1: Sec-def data farm connection is OK:secdefnj
2023-09-11 14:46:50,633 ib_insync.client INFO API connection ready
2023-09-11 14:46:50,637 ib_insync.client DEBUG >>> 61,1
2023-09-11 14:46:50,645 ib_insync.client DEBUG >>> 5,1
2023-09-11 14:46:50,648 ib_insync.client DEBUG >>> 99,0
2023-09-11 14:46:50,658 ib_insync.client DEBUG >>> 6,2,1,U700000
2023-09-11 14:46:50,670 ib_insync.client DEBUG >>> 76,1,1,U700000,,0
---- then there are a gazillion updates, and it ends with this -----
2023-09-11 14:46:52,120 ib_insync.client DEBUG <<< 73,1,1,U700000,,Cryptocurrency,,BASE
2023-09-11 14:46:52,123 ib_insync.client DEBUG <<< 74,1,1
2023-09-11 14:46:54,706 ib_insync.ib ERROR completed orders request timed out
2023-09-11 14:46:54,709 ib_insync.client DEBUG >>> 7,3,2,0,,,,,,
2023-09-11 14:46:58,716 ib_insync.ib INFO Disconnecting from 127.0.0.1:7777, 115 B sent in 8 messages, 18.5 kB received in 367 messages, session time 10.2 s.
2023-09-11 14:46:58,719 ib_insync.client INFO Disconnecting
----- there was an asyncio timeout error here, before I upgraded to 1025 ---------------
2023-09-11 14:46:58,724 ib_insync.client INFO Disconnected.

I have changed my account ID in the logs.

The gateway itself logs the same as you see above (ending with "7,3,2,0....."); "client connected" blinks green there, and then it disconnects.


Let me know if you need anything from me.

Here is the gateway log, level = Info:

2023-09-11 14:56:10.812 [BX] INFO  [JTS-EServerSocket-121] - [1:176:176:1:0:0:0:DET] [3;0;null;null;null;;null;null]
2023-09-11 14:56:10.821 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:DET] Start validating message:
2023-09-11 14:56:10.822 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:INFO] The accountCode is ignored for single account customers
2023-09-11 14:56:10.822 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:DET] Finished validating message:
2023-09-11 14:56:10.823 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:DET] Started processing message:
2023-09-11 14:56:10.823 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:INFO] Requesting trade reports.
2023-09-11 14:56:10.825 [BX] INFO  [JTS-EMsgPacer-126] - [1:176:176:1:0:0:0:DET] Finished processing message:
2023-09-11 14:56:14.542 [BX] INFO  [JTS-Cleaner-36] - market data wrapper, cleaning maps
2023-09-11 14:56:14.543 [BX] INFO  [JTS-Cleaner-36] - Start orphan ecs cleaning (force=false). Total orphan ecs to remove: 1
2023-09-11 14:56:14.556 [BX] INFO  [JTS-OrphanEcsCleaner-127] - Stop orphan ecs cleaning (force=false). Cleaned 0 of 1 orphan ecs.
2023-09-11 14:56:14.808 [BX] INFO  [JTS-EServerSocket-121] - [1:176:176:1:0:0:0:ERR] Socket connection for client{1} has closed. Reason: Connection terminated
2023-09-11 14:56:14.810 [BX] INFO  [JTS-EServerSocket-121] - [1:176:176:1:0:0:0:INFO] Stopped processing incoming messages for client{1}.
2023-09-11 14:56:14.810 [BX] INFO  [JTS-EServerSocket-121] - [1:176:176:1:0:0:0:INFO] Close call made for client{1} socket connection.
2023-09-11 14:56:14.812 [BX] INFO  [JTS-EServerSocketNotifier-122] - Terminating async queue thread
2023-09-11 14:56:14.816 [BX] INFO  [JTS-EServerSocket-121] - Cleaning up [serverId: 1]...
2023-09-11 14:56:14.816 [BX] INFO  [JTS-EServerSocket-121] - Cleaning up [serverId: 1]...
2023-09-11 14:56:14.816 [BX] INFO  [JTS-EServerSocket-121] - Cleaning up [serverId: 1]...

I don't think this is a port blocking issue.

root@rpi:/etc# lsof -i :7777
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    35755 nemozny   66u  IPv6 118618      0t0  TCP *:7777 (LISTEN)
root@rpi:/etc# netstat -atupn |grep 7777
tcp        0      0 127.0.0.1:37656         127.0.0.1:7777          TIME_WAIT   -
tcp6       0      0 :::7777                 :::*                    LISTEN      35755/java
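
As a sanity check independent of the API handshake, a minimal sketch confirming from Python that the gateway port accepts TCP connections at all:

import socket

try:
    # Plain TCP connect to the gateway socket; no API handshake involved.
    with socket.create_connection(('127.0.0.1', 7777), timeout=5):
        print('port 7777 accepts TCP connections')
except OSError as e:
    print('port 7777 unreachable:', e)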

Huh, I hadn't noticed this before, but the connection now starts with:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[2], line 4
      2 util.logToConsole('DEBUG')
      3 ib = IB()
----> 4 ib.connect("127.0.0.1", port=7777)
--------------- text omitted

File /usr/lib/python3.11/asyncio/base_events.py:588, in BaseEventLoop._check_running(self)
    586 def _check_running(self):
    587     if self.is_running():
--> 588         raise RuntimeError('This event loop is already running')
    589     if events._get_running_loop() is not None:
    590         raise RuntimeError(
    591             'Cannot run the event loop while another loop is running')

RuntimeError: This event loop is already running
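
The Cell In[2] frame suggests this runs under IPython/Jupyter, where an event loop is already running. A sketch of the usual workaround in that case, using ib_insync's own helper:

from ib_insync import IB, util

# util.startLoop() patches the already-running notebook loop (via
# nest_asyncio) so the blocking connect() call can be used.
util.startLoop()

ib = IB()
ib.connect('127.0.0.1', port=7777, clientId=0)
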
nemozny commented 1 year ago

I cannot reproduce this reliably. Now the 1025 gateway is working, alongside 1019. This is driving me crazy. I will give this some more testing and close the issue if it proves erratic.

nemozny commented 1 year ago

OK, I am experiencing this error again, incidentally after hours, so that might be related. The connection worked fine during the day, but not outside RTH. I have tracked the problem down to this in ib.py, line 1786:

            # the request for executions must come after all orders are in
            await asyncio.wait_for(self.reqExecutionsAsync(), timeout)

This is what causes the timeout that kills the connection. Commenting out this line fixes the problem.
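
An alternative to editing the installed file: the timeout variable in that wait_for() appears to be the one passed to connect(), so a larger connect timeout gives the executions request more time instead of killing the session. A sketch under that assumption:

from ib_insync import IB

ib = IB()
# timeout is in seconds and is reused for the startup requests wrapped
# in asyncio.wait_for(), including the executions request quoted above.
ib.connect('127.0.0.1', port=7777, clientId=1, timeout=60)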

But why does ib.py in your GitHub repo look different?

            # the request for executions must come after all orders are in
            try:
                await asyncio.wait_for(self.reqExecutionsAsync(), timeout)
            except asyncio.TimeoutError:
                msg = 'executions request timed out'
                errors.append(msg)
                self._logger.error(msg)

My package comes from pip. Am I missing something? Was the file changed after the last pip release on Jul 2, 2023? I need to check whether your bleeding-edge GitHub repo fixes my problem. A quick patch showed that it did not.
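
(For reference: a pip release only contains the code up to its tag, so unreleased fixes can be tried with a direct install from the repo, e.g. pip install --upgrade git+https://github.com/erdewit/ib_insync.git .)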

erdewit commented 1 year ago

This is a regression in TWS that is supposed to be fixed in recent versions; I can't reproduce it with TWS 1025. The code in the git repo somewhat bypasses the problem when connecting, but it's not a fix. It's up to IB to fix it on their end.

Also see: https://groups.io/g/twsapi/message/51354 https://groups.io/g/insync/topic/99997016#9153

nemozny commented 1 year ago

Thanks for the info!

I was experiencing the error just now on 1025, and during RTH, contrary to my previous belief. Your workaround in this repo worked, thanks!

Today is 2023-09-18 hour 14, connecting port 7777
executions request timed out
OrderId is: 3
Connected. Current orderId is: 3
faroukBakari commented 8 months ago

You need to manually set the timeout, like this: ib.connect('127.0.0.1', 7497, clientId=1, timeout=1000)
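
Spelled out as a runnable sketch (port 7497 as in the comment above; adjust to your gateway's API port). Per the connect() docstring, timeout is in seconds, and timeout=0 disables the connect timeout entirely:

from ib_insync import IB

ib = IB()
# A generous timeout keeps the slow startup requests (completed orders,
# executions) from aborting the session.
ib.connect('127.0.0.1', 7497, clientId=1, timeout=1000)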