Open vocsong opened 1 month ago
Some other related TWS logs:
2024-10-18 21:12:54.373 [QY] INFO [JTS-CCPDispatcherS2-41] - Error: no preferred EC for conid 730443243 no sec defs returned forSecDef reqId=PreferredReqByConid237
2024-10-18 21:12:54.373 [QY] INFO [JTS-Async-35] - CM IB PORT Uxxxxxxx./1/147565047: Getting unknown EC instance for conid 730443243...
2024-10-18 21:12:54.373 [QY] WARN [JTS-Async-35] - Requesting unknown Ec for conid 730443243
2024-10-18 21:12:54.373 [QY] ERROR [AWT-EventQueue-0] - Instrumented AsyncScheduler action stats in OLT min: 0, max: 0, avg: 0 size: 0
2024-10-18 21:12:54.373 [QY] ERROR [JTS-Async-35] - Creating unknown contract for conid 730443243
2024-10-18 21:12:54.373 [QY] ERROR [JTS-Async-35] - Requesting unknown DefContent for conid 730443243
2024-10-18 21:12:54.374 [QY] INFO [JTS-CCPDispatcherS2-41] - Error: no preferred EC for conid 730443310 no sec defs returned forSecDef reqId=PreferredReqByConid238
2024-10-18 21:12:54.374 [QY] INFO [JTS-Async-35] - CM IB PORT Uxxxxxxx./1/147565047: Getting unknown EC instance for conid 730443310...
2024-10-18 21:12:54.374 [QY] WARN [JTS-Async-35] - Requesting unknown Ec for conid 730443310
2024-10-18 21:12:54.374 [QY] ERROR [JTS-Async-35] - Creating unknown contract for conid 730443310
2024-10-18 21:12:54.374 [QY] ERROR [JTS-Async-35] - Requesting unknown DefContent for conid 730443310
2024-10-18 21:12:54.375 [QY] INFO [JTS-CCPDispatcherS2-41] - Error: no preferred EC for conid 728290422 no sec defs returned forSecDef reqId=PreferredReqByConid240
2024-10-18 21:12:54.375 [QY] INFO [JTS-Async-35] - CM IB PORT Uxxxxxxx./1/147565047: Getting unknown EC instance for conid 728290422...
2024-10-18 21:12:54.375 [QY] WARN [JTS-Async-35] - Requesting unknown Ec for conid 728290422
2024-10-18 21:12:54.375 [QY] ERROR [JTS-Async-35] - Creating unknown contract for conid 728290422
2024-10-18 21:12:54.375 [QY] ERROR [JTS-Async-35] - Requesting unknown DefContent for conid 728290422
2024-10-18 21:12:54.376 [QY] INFO [JTS-CCPDispatcherS2-41] - Error: no preferred EC for conid 730443341 no sec defs returned forSecDef reqId=PreferredReqByConid239
These happened in huge chunks; I'm only copying a snippet here.
That does sound odd.
Generally, "no security definition found" is what IBKR says after a contract id has expired and you just can't look it up anymore.
If the logs are showing error messages of Error: Cannot find market rule for conid, then it looks more like an IBKR data problem somewhere. Maybe also log the full contract object so we can see which ones are failing the lookup.
You can also try running an updated Gateway version and configuring it with at least 4096 MB of RAM under Settings: https://investors.interactivebrokers.com/en/index.php?f=16454 — it's technically just a headless TWS, but TWS itself has never worked great for me anyway.
Thanks for the quick response. I did some more testing: StartupFetchNONE connects, while StartupFetchALL hits the same issue. I saw this fetching was merged back in June; can you help me understand the potential implications of not fetching?
A bit more isolation: I only get the issue when I fetch executions.
ib.connect('127.0.0.1', 7496, clientId=777, fetchFields=StartupFetch.EXECUTIONS, raiseSyncErrors=True )
ConnectionError: ['executions request timed out']
I only get the issue when I fetch executions
oh, I see what you're saying.
If it works with NONE, then it must mean your IBKR historical data somehow can't be loaded (expired contracts again?) — but was the original problem above also happening with NONE?
There are bugs in TWS/Gateway where sometimes, if it gets bad connection attempts or data loading attempts, the entire gateway just stops responding until a full exit/restart (but if you restart and the next connect attempt generates another bad session, it would continue breaking, I guess).
Yes, fetchFields is an advanced feature, and if you don't need it you can leave it undefined. The purpose is just to speed up startup time if you don't want the connect call to load all account data on startup (fetching executions is usually slow and can be deferred and run manually later).
So one good usage is requesting "load ALL data but not EXECUTIONS":
fetchFields=ib_async.StartupFetchALL & ~ib_async.StartupFetch.EXECUTIONS,
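For example, a minimal connect call using that mask (untested sketch; host, port, and clientId are placeholder values):

import ib_async

ib = ib_async.IB()
# Load everything at connect time except the execution history
ib.connect(
    '127.0.0.1', 7496, clientId=777,
    fetchFields=ib_async.StartupFetchALL & ~ib_async.StartupFetch.EXECUTIONS,
)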
If you wanted to figure out which fields are breaking for you, you could run through different combinations... StartupFetchALL is just the union of all the possible account data fetched on startup:
StartupFetchALL = (
    StartupFetch.POSITIONS
    | StartupFetch.ORDERS_OPEN
    | StartupFetch.ORDERS_COMPLETE
    | StartupFetch.ACCOUNT_UPDATES
    | StartupFetch.SUB_ACCOUNT_UPDATES
    | StartupFetch.EXECUTIONS
)
The logic for how all those fields are read is here: https://github.com/ib-api-reloaded/ib_async/blob/38cf54a66a4daefbd3fd1d7476381f0d178a8198/ib_async/ib.py#L2030-L2070 — it shows which APIs get called for each data request, but you can also run each of those API data fetches yourself for more debugging too (if only 1 of the 6 is failing or the order required isn't working for some reason).
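If it helps, a rough way to bisect which startup fetch is hanging is to reconnect once per flag with raiseSyncErrors=True and see which one raises (untested sketch; port and clientId are placeholders):

from ib_async import IB, StartupFetch

flags = [
    StartupFetch.POSITIONS,
    StartupFetch.ORDERS_OPEN,
    StartupFetch.ORDERS_COMPLETE,
    StartupFetch.ACCOUNT_UPDATES,
    StartupFetch.SUB_ACCOUNT_UPDATES,
    StartupFetch.EXECUTIONS,
]

for flag in flags:
    ib = IB()
    try:
        # raiseSyncErrors turns a silent startup timeout into a ConnectionError
        ib.connect('127.0.0.1', 7496, clientId=777, fetchFields=flag,
                   raiseSyncErrors=True)
        print(flag, 'ok')
    except ConnectionError as err:
        print(flag, 'failed:', err)
    finally:
        ib.disconnect()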
Interesting, that's something new and actually insightful.
So I identified that executions are the problem. First of all, I was previously using ib_insync, which doesn't have these new fetch controls. The interesting benefit here is that the try/except (line 2065) actually caught my error instead of crashing, so this 'feature' definitely helped.
At least it's properly caught, but without the executions data I'm not sure what the potential implications are. Will test more and update soon.
Follow-up:
So I fetched without executions and then tried to request executions separately; it seems to run forever and never end.
I'm still somewhat guessing this has nothing to do with ib_async/ib_insync; it very much seems to me like some execution data is somehow corrupted on the server side.
My question then is more about how we can handle this more gracefully. Or does anyone have a clue how this could happen?
Regarding memory, I have 6000 MB set for TWS with 64 GB of system RAM.
I'm currently exploring whether I can filter the execution request, because when I checked the conids in the TWS errors, they don't seem to match the trades and orders I did for the day, which is even weirder.
If IBKR decides not to reply to a request, the APIs can just hang indefinitely like that sometimes (it's why the regular calls have timeouts attached in different places; the client needs to time out if the server never returns).
You can also try checking the gateway API log for the exact request sent?
Here's an example of a request for all executions with an empty reply (I currently have no executions). In this case, the request id for the command is the 11327 part:
22:57:48:068 (sync 22:57:47:507) <- 7-3-11327-0-------
22:57:48:071 (sync 22:57:47:510) -> ---55-1-11327-
Just run reqExecutions() while also watching the gateway log for updates. Since you aren't getting a reply, it should return nothing after the first <- 7-3 call, but if it does return something and the request id values don't match, then there could be a bug in the client processing somewhere.
You can also try starting your client with more protocol detail output in API debug mode with ib_async.util.logToConsole(logging.DEBUG)
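For example, enable it before connecting so the handshake and every request/reply pair gets printed:

import logging
import ib_async

# Wire-level API logging; call this before ib.connect()
ib_async.util.logToConsole(logging.DEBUG)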
If the request is going out exactly as above (just with a different request id) but no reply ever happens, then you're probably right your data is broken somehow inside IBKR's database for your account. Also, IBKR only saves about 1 to 2 days of request history per account, so if your problem does clear up every 2 days, maybe you are hitting limits in their system somewhere (doesn't explain the contract id problems though).
And I guess a final test (just to prove it is their data problem and not a problem with our API implementation) could be trying to fetch execution history using their official library, in either the Latest or Beta Python versions, from https://interactivebrokers.github.io/
So I tried to interpret the API logs and did some conid searches in them.
API Log
00:13:04:137 -> ---m7-8-734616654-SPX-OPT-20241021-5885-C-100-CBOE-USD-SPXW
TWS Log
[JTS-CCPDispatcherS24-68] - [0:178:178:1:0:61:3:WARN] EC is not known for 734616654
I notice it seems to be related to yesterday's conids, not today's. It seems like at some point 'today', some of yesterday's executions were 'removed'? But the execution fetch still had some lingering conids it attempted to pull and couldn't find. That's what I can interpret from the two logs so far.
Then I tried the ExecutionFilter by time and slowly shifted the time forward:
ExecutionFilter(time=dt_utc)
I somehow narrowed it down to a particular trade; the moment I fetch executions covering that trade, the issue occurs.
Trade(contract=Bag(conId=28812380, symbol='SPX', right='?', exchange='SMART', currency='USD', localSymbol='28812380', tradingClass='COMB', comboLegsDescrip='731522867|1,734263992|-1', comboLegs=[ComboLeg(conId=731522867, ratio=1, action='BUY', exchange='SMART', openClose=0, shortSaleSlot=0, designatedLocation='', exemptCode=-1), ComboLeg(conId=734263992, ratio=1, action='SELL', exchange='SMART', openClose=0, shortSaleSlot=0, designatedLocation='', exemptCode=-1)]), order=Order(permId=2119720566, action='BUY', orderType='LMT', lmtPrice=-2.85, auxPrice=0.0, tif='GTC', ocaType=3, orderRef='entry-P-5855.0-5785.0', displaySize=2147483647, rule80A='0', openClose='', volatilityType=0, deltaNeutralOrderType='None', referencePriceType=0, account='UXXXXXXX', clearingIntent='IB', cashQty=0.0, dontUseAutoPriceForHedge=True, autoCancelDate='20250401 04:15:00 SGT', filledQuantity=2.0, refFuturesConId=2147483647, shareholder='Not an insider or substantial shareholder'), orderStatus=OrderStatus(orderId=0, status='Filled', filled=0.0, remaining=0.0, avgFillPrice=0.0, permId=0, parentId=0, lastFillPrice=0.0, clientId=0, whyHeld='', mktCapPrice=0.0), fills=[], log=[], advancedError=''),
Also sharing my trades for reference.
So if I set the filter time to 2:02:00, I hit it.
Working backwards, that's the first Bag/COMB order. Since the ExecutionFilter time is 'onwards', I can't fetch anything earlier.
ExecutionFilter(time=dt_utc, side='SELL', exchange="SMART")
I also tried adding side and exchange filters, which managed to pull earlier than 2:02:00, but that combo order is excluded because it's a BUY order, not a SELL. That combo spread is supposed to be a SELL bull put as seen in the screenshot, but it's showing BOT at a negative 2.9 price... it seems like the COMB execution is missing? That's my guess. Since the ExecutionFilter is quite limited, I can't exclude or filter by tradingClass or conid.
I'm not sure how all of this relates; just throwing out more data points. My question then is: if I use CBOE (since I'm on SPX only) instead of SMART, will that help? Can I use the CBOE exchange on a Bag/Combo?
More insights would be appreciated.
A quick update:
I set secType="OPT" on the ExecutionFilter and it avoided the COMBO as I'd hoped.
I think it's an indirect effect of how ib_async does all this passive fetching/requesting of data: if any bad data gets pulled, the socket gets stuck forever and leads to a timeout, and the troubleshooting is full of unknowns.
I'm not sure how this could be handled better from the ib_async perspective; maybe this thread itself provides lots of data points for future reference. Feel free to discuss if there are any ideas for improving this.
One small thing, not sure if it's a concern: there's no exception raised on the execution timeout, it merely printed "executions request timed out". Maybe it should raise the exception so it can be handled on the application side?
I set secType="OPT" on the ExecutionFilter and it avoided the COMBO as I'd hoped.
Interesting! If only the automatic execution fetching causes problems, you can just disable it with the fetchFields feature. The API doesn't need the account execution data unless you want to read it for local reporting.
Maybe we could add an extra parameter to enable a custom ExecutionFilter on startup too, if only one kind of data is breaking the API at startup.
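In the meantime, the manual version of that workaround would look roughly like this (untested sketch; the filter values just mirror what worked for you above, and port/clientId are placeholders):

from ib_async import IB, ExecutionFilter, StartupFetch, StartupFetchALL

ib = IB()
# Skip the automatic execution fetch at connect time...
ib.connect('127.0.0.1', 7496, clientId=777,
           fetchFields=StartupFetchALL & ~StartupFetch.EXECUTIONS)

# ...then request executions yourself with a narrower filter that avoids
# the problematic BAG/COMB rows
fills = ib.reqExecutions(ExecutionFilter(secType='OPT', exchange='SMART'))
print(len(fills), 'fills')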
Maybe it should raise the exception so it can be handled on the application side?
There is the extra option raiseSyncErrors, which raises an error if the data loading fails. It basically says "if any errors are returned, raise ConnectionError(list of errors)", but this exception only happens on manual opt-in (because even if one of the account data fetches fails, it doesn't mean the entire connection is broken).
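i.e. opting in and catching it yourself looks roughly like this (untested; values are placeholders):

from ib_async import IB, StartupFetchALL

ib = IB()
try:
    ib.connect('127.0.0.1', 7496, clientId=777,
               fetchFields=StartupFetchALL, raiseSyncErrors=True)
except ConnectionError as err:
    # err carries the list of failed startup requests,
    # e.g. ['executions request timed out']; the socket itself may still be usable
    print('startup sync failed:', err)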
I kind of need executions to better track commissions/fees, which logically the COMBO doesn't have; the individual legs are the more precise data. IBKR has awful management of spreads, specifically referring to the combo of sell/buy with positive/negative values (iykyk).
Yeah, I missed raiseSyncErrors. But that kind of leads to the suggestion of an ExecutionFilter on the initial fetch; I like that. It also makes me think: should the connect be deemed 'failed' if some data fetch raised an exception?
I'm not entirely familiar with the ib_async codebase, but should we self.connectedEvent.emit() before even fetching all the data? After all, the idea of 'connect' is just to connect; all this fetching is initialization. Handling it that way would mean the connect is still successful regardless of data-fetching exceptions, which would then allow the application to handle it more properly?
Just my thoughts.
The timeout on reqExecutions has been around for at least a year now and I hit it frequently. From the old ib_insync group:
Re: Random timeout while connecting
From: [Gonzalo Saenz]
Date: Wed, 04 Oct 2023 12:57:11 PDT

I think I found where it's failing:

File ~/.local/lib/python3.11/site-packages/ib_insync/ib.py:1782, in IB.connectAsync(self, host, port, clientId, timeout, readonly, account)
   1779     self._logger.error(f'{name} request timed out')
   1781     # the request for executions must come after all orders are in
-> 1782     await asyncio.wait_for(self.reqExecutionsAsync(), timeout)
   1784     # final check if socket is still ready
   1785     if not self.client.isReady():
The reqExecutions call is raising the timeout. But I don’t know why or how to fix it.
Re: Random timeout while connecting
From: nkulki@hot....com
Date: Thu, 05 Oct 2023 20:18:40 PDT

Please run the code below. It uses the official API. You will see that this code will hang because the method execDetailsEnd is not fired, so ib_insync gives a timeout. The issue is with TWS. I have not been able to reproduce this bug reliably, but if you could file a bug with IB I would greatly appreciate it. We need more people to report this issue so that they can look at it more seriously.
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.common import OrderId
from ibapi.contract import Contract
from ibapi.execution import Execution, ExecutionFilter

port = 7497

class TestApp(EClient, EWrapper):
    def __init__(self):
        EClient.__init__(self, self)

    def nextValidId(self, orderId: OrderId):
        # Fires once the connection is ready; request all executions
        exec_filter = ExecutionFilter()
        self.reqExecutions(12345, exec_filter)

    def execDetails(self, reqId: int, contract: Contract, execution: Execution):
        print(reqId, contract, execution)

    def execDetailsEnd(self, reqId: int):
        # If this callback never fires, the request hung on the TWS side
        print("execDetailsEnd.", reqId)
        self.disconnect()

app = TestApp()
app.connect("127.0.0.1", port, 1007)
app.run()
I worked around it by using fetchFields=StartupFetchNONE, then making any needed calls myself and failing if reqExecutions fails. I don't remember if restarting the gateway fixes it, but I think it does.
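Roughly like this (untested sketch; timeout and connection values are just examples):

import asyncio
from ib_async import IB, StartupFetchNONE

ib = IB()
ib.connect('127.0.0.1', 7496, clientId=1, fetchFields=StartupFetchNONE)

# Make the calls we actually need ourselves, with our own timeout so a hung
# reqExecutions fails fast instead of stalling the whole startup
try:
    fills = ib.run(asyncio.wait_for(ib.reqExecutionsAsync(), timeout=10))
except asyncio.TimeoutError:
    raise SystemExit('reqExecutions hung; try restarting the gateway')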
I kind of need executions to better track commissions/fees
Oh — this may help: the startup loading only fetches the recent history, but you always get live updates of new execution status changes.
You can watch these events for various state changes when orders get filled:
- Fill() objects
- Trade() objects
- Position() objects
- CommissionReport() objects (includes PnL for closing trades)
- PortfolioItem() objects

which logically the COMBO doesn't have; the individual legs are the more precise data
Exactly. Especially with PnL calculations on bags/spreads where the events only report PnL per leg, but you have to look at the full execution history for full spread PnL or run the math individually.
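For example, a small listener that tracks fills and commissions as they come in (untested sketch; the handlers receive the standard (trade, fill[, report]) event arguments, and connection values are placeholders):

from ib_async import IB

ib = IB()
ib.connect('127.0.0.1', 7496, clientId=777)

def onFill(trade, fill):
    # Fires once per execution as it happens (per leg for combos)
    print('fill:', fill.contract.localSymbol, fill.execution.shares, fill.execution.price)

def onCommission(trade, fill, report):
    # Commission/fees (and realized PnL on closing legs) arrive here
    print('commission:', report.commission, 'realizedPNL:', report.realizedPNL)

ib.execDetailsEvent += onFill
ib.commissionReportEvent += onCommission
ib.run()  # keep the event loop alive to receive updates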
should we self.connectedEvent.emit() before even fetching all the data?
One of the design goals of the library is to always synchronize user positions and account details, so connectedEvent also means all your account data is loaded. I think it's safe to assume the connection can't be trusted until connectAsync() fully returns.
The timeout on reqExecutions has been around for at least a year now and I hit it frequently. From the old ib_insync group
Thanks a lot for finding those details. I've also seen places where the gateway will accept a request then just never reply, but it all lives in their java blob app thing so we can't figure it out.
A different example of the gateway not replying I can reproduce: I tried adding support for the new Event Contracts instrument and I can't get them to preview or trade (the gateway receives the request with the contract id in the regular format, no reply ever comes back), but I can get quote Tickers for them. The only place the official IBKR API docs mention Event Contracts is using their newer public web API endpoint not using the gateway, so if IBKR starts only supporting new features on a different API, we will have to either add adapters/workarounds or eventually move support to the different API too.
The web API already supports more features that the TWS API can't do.
But it also appears their web API isn't designed with any real-time feedback mechanism (no streaming websocket account updates, though it does have websocket quote streaming), so you have to just refresh the /orders endpoint every 5 seconds waiting for updates, I guess?
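A rough idea of what that polling loop looks like (hypothetical sketch; the base URL and the /iserver/account/orders path are assumptions for a locally-running Client Portal gateway, so check their docs for your setup):

import time
import requests

BASE = 'https://localhost:5000/v1/api'

while True:
    # No streaming account/order channel, so poll for changes
    resp = requests.get(f'{BASE}/iserver/account/orders', verify=False)
    for order in resp.json().get('orders', []):
        print(order.get('orderId'), order.get('status'))
    time.sleep(5)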
We could make a new package to support the web API with good defaults and automatic/transparent state management like the current library, but it would be a full time job to build it out with the same level of automation and completeness (if anybody wants to sponsor building it for a year).
Recently I'm encountering some weird connectivity errors that I've never seen before in the past 2-3 years of running my bot. I highly suspect it's a TWS-side issue but I can't really figure it out.
Some background:
- My bot runs many 0DTE SPX trades during regular trading hours
- I have several friends running my bot, all running fine
Some added tests I did to give more context:
- I tried running on my friend's PC (his home internet); the issue is exactly the same
- I tried using a different username in my IBKR account; it worked fine
- The next day I tried the original username again and it worked fine
- At end of day, 4pm, after my bot did its usual auto-restart, the exact same issue occurred again
- Once it occurs, I will not be able to connect to TWS with this username again (probably until 1-2 days later)

Any help would be deeply appreciated. Thanks.
Hello, I hope you have resolved your issue. I was facing the same connection fail error, but maybe for a different reason. It looks like the problem you were/are facing is due to performing an unsupported operation that resulted in locking up your session via clientId 777. To bypass the lock, maybe try changing your clientId and basically start a new session. I use "clientId=random.randint(1, 10000)" to get a new clientId. Hopefully this helps. gl
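For example (host/port are placeholders):

import random
from ib_async import IB

ib = IB()
# A fresh clientId per attempt so a previously locked-up session can't block the reconnect
ib.connect('127.0.0.1', 7496, clientId=random.randint(1, 10000))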
I've also run into this reqExecutions issue and mentioned it here: https://github.com/ib-api-reloaded/ib_async/issues/54. I have yet to find a reliable solution other than not fetching them, as mentioned here. I opened a ticket with IBKR support to no immediate avail. I did a little more testing, and on versions 10.32+ both settings of "Download execution reports" (Today or All Available) now hang.
What is really worrisome is that this is a long-time regression apparently introduced alongside the ability to specify all or today's execution reports and it doesn't seem like IBKR is able or willing to address it. You can no longer download TWS versions that predate the introduction of this bug.
Update: I was able to get reqExecutions returning again using the fix mentioned here to set "Send instrument-specific attributes for dual-mode API client in: UTC format".
When I first discovered this I thought it was some sort of timezone issue related to daylight savings/UTC offsets changing, and that when America/New_York went back to UTC-5 for daylight savings this wouldn't be necessary; however, that has happened and it seems that with the setting at "Instrument timezone" (the default), it still doesn't work.
I would encourage others to report this issue to IBKR support as it's existed for years and who knows when the workaround will break—being able to get your executions seems pretty important! 😅
At this point I had already kind of figured out it's some corrupted data somehow tied to my username when connecting. So I did some investigating of my own and tried to narrow down the code to replicate the issue in a Jupyter notebook.
Here are the errors:
The above error doesn't help much; it's just a timeout error. I dug further into the ib_insync/ib_async code and saw the issue happening here in client.py > connectAsync():
await asyncio.wait_for(ib.client.apiStart, timeout)
I even contacted IBKR technical support but they're not helping in any way, giving me random excuses like ib_insync being incompatible or their TWS API having recent changes, which are totally untrue.
So I retrieved the TWS log with detailed diagnostic logging; this chunk always happens whenever I connect:
From what I have so far (above), it seems like something went missing in my account somehow, causing subsequent connects to time out.
My current workaround if this ever happens is to keep switching the username every other day, subscribing to market data on both usernames, which is not ideal.
I'm just kind of documenting it here to see if anyone else has encountered anything similar, or if anyone has any idea what else I can try to resolve this, or if anyone who understands what's going on could help explain and advise. Or is there something I can do on the ib_async side to prevent triggering the dummy error?
Any help would be deeply appreciated. Thanks.