FTI76 opened this issue 1 year ago
Logger: homeassistant.components.homekit.type_thermostats
Source: components/homekit/type_thermostats.py:599
Integration: HomeKit (documentation, issues)
First occurred: 7:35:06 AM (12 occurrences)
Last logged: 7:51:08 AM
Cannot map hvac target mode: heat to homekit as only {0: <HVACMode.OFF: 'off'>, 2: <HVACMode.COOL: 'cool'>} modes are supported
I deleted the integration and set it up fresh with cache enabled and entered all of the IP addresses. All 3 units are connected and show "heating", but two of them are still missing the heat mode icon, so something is still off.
I have been seeing this issue myself for the past couple of days, so it will get fixed once I have time to dig into it. Anyone who's seeing this and wants to help out, use the Interactive Use instructions to capture good & bad status dictionaries from an indoor unit, so we can see what's going on.
OK, I did a little exploration this evening. The errant indoor units are returning:
{'_api_error': 'serializer_error'}
for requests they used to process just fine. This is the error you get when the response would be too big.
Some background: the Kumo local API is a big JSON dictionary. You send the part of the dictionary you want as a 'c' command and get back an 'r' response with everything under that point filled in. For example, the sensors query is {"c":{"sensors":{}}}.
If you ask for {"c":{}}, it always returns that serializer error. Which is too bad, because otherwise it would tell us everything the indoor unit has to offer, and we would not have had to deduce the existence of (say) the mhk2 portion of the dictionary.
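To make the request/response shape concrete, here's a minimal Python sketch. The queries are the ones described above; the reply shapes in the comments are assumptions based on how the 'r' responses appear elsewhere in this thread, and actually issuing a request also needs the hashed ?m= URL parameter that pykumo computes, which is omitted here.

import json

# The local API takes a "c" (command) dictionary naming the part of the
# tree you want, and answers with an "r" (response) dictionary mirroring
# that shape with the values filled in.
sensors_query = {"c": {"sensors": {}}}
# assumed reply shape: {"r": {"sensors": {"0": {...}, "1": {...}, ...}}}

everything_query = {"c": {}}
# observed reply: {"_api_error": "serializer_error"}

print(json.dumps(sensors_query))     # {"c": {"sensors": {}}}
print(json.dumps(everything_query))  # {"c": {}}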
But now, asking for all the sensors is (sometimes) too much. This one's not too bad, because I've seen in the past that there are 4 possible sensors (only one of my indoor units has any sensors), so I can just request each of {"c":{"sensors":{"0":{}}}}, and likewise "1", "2", and "3", and get back all the info.
But I'm also seeing this issue right now on the "profile" query, which is just a list of attributes. So it seems we'll need to fetch the attributes we care about individually, in separate API calls. This is doable, but not on a weeknight, so it'll have to wait for the weekend.
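Roughly, that per-attribute fetch could look like the sketch below. send_local_query is a hypothetical stand-in for pykumo's real request code, and the attribute names are taken from the pykumo warnings quoted later in this thread.

# Sketch of the workaround: fetch profile attributes one at a time instead
# of asking for the whole "profile" subtree at once.
PROFILE_ATTRS = [
    "numberOfFanSpeeds", "hasFanSpeedAuto", "hasVaneSwing", "hasModeDry",
    "hasModeHeat", "hasModeVent", "hasModeAuto", "hasVaneDir",
]

def fetch_profile(send_local_query, address: str) -> dict:
    profile = {}
    for attr in PROFILE_ATTRS:
        query = {"c": {"indoorUnit": {"profile": {attr: {}}}}}
        response = send_local_query(address, query)
        if "_api_error" in response:
            continue  # e.g. {'_api_error': 'serializer_error'}; skip this one
        value = (response.get("r", {}).get("indoorUnit", {})
                 .get("profile", {}).get(attr))
        if value is not None:
            profile[attr] = value
    return profile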
What's weird is that sometimes it's fine. Right now one of my 3 indoor units is showing the problem, but the other 2 are fine; yesterday one of the other units was having trouble.
FYI this has nothing to do with Home Assistant version. This issue is actually in the pykumo library and seems to be caused by something Mitsubishi changed. So it's possible redoing some of the network snooping between indoor unit and the app would (a) reveal if the app is now doing something different, and (b) possibly reveal some new functionality they've added that we could exploit.
Yes, I'm seeing the exact same thing. Right now heat mode is back on all 3 of my units; it's completely random when it changes. I don't go into the Kumo Cloud app that often on my iPhone, but it definitely looks like they added things to the app; there seem to be more menus and options than I previously remember. Thanks for maintaining this plugin, it's been excellent up to this point.
I've published a new beta v0.3.5-beta which should resolve this issue. Given the severity I wanted to get a fix out quickly, so I've only done basic testing. Please test and reply here with results. If all goes well I'll make a new production release soon.
Thanks, I have 3.5 Beta installed. I will monitor and let you know how it's looking. Appreciate the quick turnaround.
Things look overall somewhat better but I'm still seeing issues. I also started seeing a lot of '__no_memory' responses from 2 of my 3 indoor units. I power-cycled the system and that seems to have stopped that from happening.
I am going to try always querying the individual attributes and see if that helps. I'll let that soak on my own system for a while and push another update if it seems improved.
So, indications are this is something Mitsubishi broke. Let's hope they fix it, and let's hope my workarounds are good enough to improve reliability.
v0.3.5-beta2 is doing better for me overnight.
I just installed 3.5 beta2. The plugin seems to take longer than usual to initialize after an HA restart, but it's running and all 3 units look good. I'll let you know if I run into any issues.
I'm running the new beta and my problematic unit still seems to go up and down. I'm seeing the same issue from the kumo app. Not sure there is much to do until Mitsubishi fixes it from their end.
Power cycling (at the breaker) seems to have helped my system, at least for today. Before I was getting a lot of "__no_memory" errors in the logs, and I haven't seen one of those today.
My theory (based on nothing but having been writing software since 1984 :-) ) is that there's some issue occurring that results in the "serializer_error", and when this happens there's also a memory leak on the adapter, which later triggers the "__no_memory" condition. That's why I switched pykumo to querying individual values only, since whatever triggers the serializer error happens with some regularity when getting the larger responses. Power cycling the adapter would, of course, restore any leaked memory.
So I'd also advise staying out of the KumoCloud app, which probably still uses the larger requests.
If local cache is enabled does the plugin talk directly to the units and doesn't need anything from Kumo Cloud for operation?
Yes.
I would remove the configuration and re-configure it with cache mode enabled if it's not set up this way. Just make sure the kumo wifi adapter(s) are set up with DHCP reservations beforehand.
Thanks for the troubleshooting help. I can confirm that turning the breaker off seems to have fixed the connectivity issue (albeit I did it in conjunction with removing and reinstalling the kumo custom component). Connectivity seems really slow now compared to what it used to be; I'm seeing relatively large delays between sending a command, it being executed, and the state being updated in Kumo. Are you guys seeing this too?
I've been toying with the idea of trying to firewall the kumo wifi adapters from the WAN. Do you guys think that would help or even be worthwhile?
Slower is expected. It's doing ~47 requests per refresh, where before it did ~5. If your WiFi is marginal it will only magnify the effect.
I think firewalling would totally prevent you from using the Kumo Cloud app. It would also prevent you from getting any hypothetical fix for this issue. It's not going to help the current situation, though.
1 out of my 5 units went offline this morning. I'd like to think I don't have a wifi issue... Unifi reports everything is solid and I have no issues with other devices.
I should revise my earlier comment by saying that I had toyed with the idea of taking my adapters off the WAN because I couldn't think of any good that keeping them connected to the internet would do. I don't use the Kumo app at all, as I found the Home Assistant integration much more responsive. It sounds like if I had done it, it would have saved me from these headaches.
Which firmware version are folks seeing this issue with? (Kumo app: Settings > System Setup > Installer Settings > (site) > (zone) > firmware version shows in the lower right corner of the page)
I'm currently on 02.06.05 and will be on the lookout for the behavior change. Currently not yet seeing the issue.
I suspect this is highly variable with what equipment you have. Mine is 00.04.21.
I'm still amazed that the indoor unit model is not available through the API (nor WiFi adapter model).
I think exposing the firmware version as an attribute (it is available via the local API) would be a good idea.
Interesting. The only reference I've found to my firmware rev is a Mitsubishi FAQ about a fix introduced to support commissioning on Android 12 devices, apparently dated April 2022, so I've likely had version 02.06.05 since the system was installed in December.
I'm using the PAC-USWHS002-WF-2 interfaces on all 4 of my units, which are a mixture of the MLZ-KP ceiling cassettes, an SEZ ducted, and an MSZ-GL wall mount.
Mine is 00.04.21 for what that's worth.
Interesting, so it seems like it's possible the older PAC-USWHS002-WF-1 interfaces are showing this issue and the WF-2 ones are not. Maybe I should put in a config option -- I doubt I'll have time before the weekend, though.
I have the older PAC-USWHS002-WF-1 adapter with firmware 00.04.21. It briefly worked again for about an hour after updating to beta2 a few days ago, but has been unavailable since.
It briefly worked again for about an hour
Did you try power-cycling your system (i.e. throw the breaker, wait 10 seconds, turn it back on)?
The theory is that there's a memory leak on the adapter that's triggered in certain error conditions, which the beta2 tries to avoid.
I did, yes. I did it again just now for good measure and it's still unavailable.
You may have something else going on. What are the error messages in your logs?
Never mind, I'm just an idiot. I got a new router the other day and did DHCP reservations for all my smart devices shortly after installing beta2. I noticed in the logs that it was trying to poll the old IP address.
I reinstalled the integration and it's working again now. Sorry about that!
I'm still getting tons of errors. Any thoughts on these:
2023-03-04 22:39:22.216 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get numberOfFanSpeeds from b'{"c":{"indoorUnit":{"profile":{"numberOfFanSpeeds":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:22.405 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasFanSpeedAuto from b'{"c":{"indoorUnit":{"profile":{"hasFanSpeedAuto":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:22.619 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasVaneSwing from b'{"c":{"indoorUnit":{"profile":{"hasVaneSwing":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:23.043 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasModeDry from b'{"c":{"indoorUnit":{"profile":{"hasModeDry":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:23.245 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasModeHeat from b'{"c":{"indoorUnit":{"profile":{"hasModeHeat":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:23.437 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasModeVent from b'{"c":{"indoorUnit":{"profile":{"hasModeVent":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:23.767 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasModeAuto from b'{"c":{"indoorUnit":{"profile":{"hasModeAuto":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:39:23.849 WARNING (SyncWorker_9) [pykumo.py_kumo] Kids room: Did not get hasVaneDir from b'{"c":{"indoorUnit":{"profile":{"hasVaneDir":{}}}}}': {'_api_error': 'serializer_error'}
2023-03-04 22:43:18.571 WARNING (SyncWorker_6) [pykumo.py_kumo] Kids room: Error retrieving profile from {'r': {}}: 'indoorUnit'
2023-03-04 22:44:08.252 WARNING (MainThread) [homeassistant.helpers.entity] Update of climate.living_room is taking over 10 seconds
Serializer error is the message that took us down this path in the first place. I don't know -- only guessing -- why we see these errors at all, but experimentation showed that fetching smaller pieces seemed to help. Clearly for you that's not the case.
I am not seeing these errors (though my logs don't cover very long because my HA restarted overnight).
Does it self-recover at some point? If so, how long do these episodes last?
There's one more improvement regarding sensors that I'm looking at (skip calls for sensors that aren't there to improve overall speed) but not sure if that would help you or not.
So I've got 5 of these things. Over the past week, I've seen pretty much all of them pop in and out, so it does seem to self recover... But it can take a while. The breaker reset seems to always work to fix the issue though.
The somewhat saving grace to this is that it is failing in the Kumo app as well. If this is a firmware issue, they should be able to see that there is a widespread problem that needs to be addressed. Once they fix this, I think I'm going to block the devices from talking to the internet. It really feels like no good can come of it at this point.
I'd advise staying out of Kumo Cloud app, on the assumption it's making the very queries that cause the memory leak on the adapter. But yes, I agree, hopefully Mitsubishi notices this and fixes it. At that point maybe we can go back to the bigger queries.
I have just pushed a beta3 that has a single change: it bails out early if it can't get sensor data. No need to try to fetch all 4 sensors for units that have only 1 (or 0) sensors. This should speed things up a little since fetching info for all 4 sensor slots is 24 queries.
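For anyone curious, here's a rough sketch of that early bail-out (not the actual pykumo code): loop over the sensor slots in order and stop at the first slot that errors or comes back empty. send_local_query is again a hypothetical stand-in for pykumo's real request code, and the reply is assumed to mirror the query under "r".

# Sketch: don't spend more queries on sensor slots once one comes back empty.
def fetch_sensors(send_local_query, address: str, max_slots: int = 4) -> list:
    sensors = []
    for slot in range(max_slots):
        query = {"c": {"sensors": {str(slot): {}}}}
        response = send_local_query(address, query)
        data = response.get("r", {}).get("sensors", {}).get(str(slot))
        if "_api_error" in response or not data:
            break  # no sensor in this slot; skip the remaining slots
        sensors.append(data)
    return sensors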
If this does OK I think I'll promote this to stable this evening. I don't think there's much more workaround I can do here.
OK, I've released production v0.3.5, no changes from beta3. I'm keeping this issue open for tracking.
I'm getting some errors since updating to 0.3.5 and my heat pump will change to unknown status for one minute, but seems to recover on its own every time.
One of the errors is a little strange... It seems like the integration is trying to connect to a random device on my network.
Any idea why it's trying to connect to that IP address? I'm not sure if this might just be a remnant of the DHCP reservation issue I had the other day, but I don't think that's the same IP it had.
The only way it can be trying to talk to an IP address is if it's in your kumo_cache.json.
For those following along, I've released a new pykumo library that attempts some limited retries on errors. Anyone who'd like to try it and knows what the following means:
pip install pykumo==v0.3.4
please give it a try and report back.
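To illustrate what "limited retries" could mean (this is not pykumo's actual implementation), a wrapper along these lines re-issues a query a couple of times when the adapter answers with an API error. send_local_query remains a hypothetical stand-in for pykumo's real request code.

import time

# Illustration only: retry a local query a few times when the adapter
# answers with an API error such as 'serializer_error'.
def query_with_retries(send_local_query, address: str, query: dict,
                       retries: int = 2, delay: float = 0.5) -> dict:
    response = send_local_query(address, query)
    for _ in range(retries):
        if "_api_error" not in response:
            break
        time.sleep(delay)  # give the adapter a moment before retrying
        response = send_local_query(address, query)
    return response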
When do you think the updates will be pushed to release? I’ve also had this issue. Updated to latest official release and flipped power on my unit. So far it’s been working for a few days.
I am still trying out some combinations of ideas to see what lets me run the longest without having to power-cycle. That said, I hope to push a new version of the integration tomorrow.
I've had zero issues with 0.3.5
I have pushed a new version of hass-kumo to pick up the pykumo retry changes. For me this has been better -- specifically, less likely to randomly lose heat mode -- for one of my indoor units. Power cycling does seem to still be needed if a unit goes totally Unavailable and stays that way.
I just updated. Hope it’ll hold. Had to restart again just this morning.
Do you think Mitsubishi is even aware of this issue?
I think if Mitsubishi were aware they might have done something by now.
Cynically, they might be moving to a cloud-first approach. I did a packet capture of traffic from the KumoCloud WebApp and it's not even attempting local communication. The Kumo Cloud app, at least on Android, is still doing local communication -- though with capture software turned on, the app warns about being "offline" and that "functionality may be limited".
One idea I had is to implement a dual mode where pykumo is capable of using either the cloud or local APIs. It could use the cloud APIs for status updates (falling back to local if needed) to minimize the number of local API calls and try to avoid triggering the issue, but prefer local for control, for responsiveness. I definitely don't have spare cycles to work on this at the moment, though.
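A very rough sketch of that idea follows. Every class and method here is hypothetical (none of it exists in pykumo today); it just illustrates the prefer-cloud-for-status, prefer-local-for-control split.

# Sketch of the proposed dual-mode behaviour; placeholders only.
class DualModeUnit:
    def __init__(self, local_unit, cloud_unit):
        self._local = local_unit
        self._cloud = cloud_unit

    def get_status(self) -> dict:
        try:
            return self._cloud.fetch_status()   # fewer local API calls
        except Exception:
            return self._local.fetch_status()   # fall back to the local API

    def set_mode(self, mode: str) -> bool:
        try:
            return self._local.send_mode(mode)  # local control is snappier
        except Exception:
            return self._cloud.send_mode(mode)  # cloud as a backup path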
Assuming I have no need of the cloud app, what do you think would happen if I blocked the WiFi module from accessing the cloud at my router? So, still accessible internally, but for all intents and purposes 'offline' outside of my LAN. Do you think that would have any effect on the reliability of this integration?
Things seem a bit more stable for me the last few weeks, not sure if I am imagining things.
I’ve had to completely reset the system multiple times in the last few weeks. I’ve now gotten to the point where my 5-split system will only show the state of 4 splits, with the 5th always booting to “unavailable”.
I decided to migrate away from 3x WF-1 units (FW 00.04.21) over to the v2 PAC-USWHS002-WF-2 (FW 02.06.12). So far I'm observing much better results. The primary driver for the upgrade was this issue, plus the ability to turn off the LEDs to prevent young children from seeing "shadows" :)
Contributing what I'm seeing. I've had a horrible wifi connection ever since my air handler unit was installed in the fall of 2023. However, my mini-split unit has a very stable connection.
My Unifi & HA both show the internal air handler unit constantly bouncing in & out:
I've moved the wifi receiver outside the cabinet, so it's not blocked by metal. When it has a signal, it's a very strong signal.
Running through the Interactive Use instructions, I see a ConnectTimeoutError and am unable to get status on the troublesome unit:
>>> kumos = account.make_pykumos()
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=96836f8f063336649e4987f85ea11fb0ff6d19235a35ec87d269568eae2fa5b0 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee5490>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=cb05637b96d459800b19bd715492224c2f1b1fa1769a51e534385157338dec46 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee7320>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get mode from b'{"c":{"indoorUnit":{"status":{"mode":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=e0e94036d60ce564e0df2e773d7ed686d0a7522d2c6126d78bee995d14a3dfa9 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f15190>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get standby from b'{"c":{"indoorUnit":{"status":{"standby":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=ad51b96fff7aa388cfbfd58069bd7aeca0ea3556b2b65bbd8a013cae62e0d725 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee6180>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get spHeat from b'{"c":{"indoorUnit":{"status":{"spHeat":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=1ac0134e70b0595b2d65bcd96860f6eb7f53f5a57a7cfd2b9783b688f0a4d129 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee7ce0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get spCool from b'{"c":{"indoorUnit":{"status":{"spCool":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=318a4a9ae2a4079232295dd735a96404e5d7a6350b2ba998bd5da43811dad9d1 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f150a0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get roomTemp from b'{"c":{"indoorUnit":{"status":{"roomTemp":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=018bd20dfba9a5b5ca230dd9c4ec97a941441e894d7207a7a0433af494508e02 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee7ad0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get fanSpeed from b'{"c":{"indoorUnit":{"status":{"fanSpeed":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=8fbad697d2aa8556c323750a7cd51593f9003b5975476910e6b0f07cee1758a3 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee5f10>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get vaneDir from b'{"c":{"indoorUnit":{"status":{"vaneDir":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=deece0e68da7a6bccee378ed491b9ff738c03a32de3837f7a8623ef218bbcfa9 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee69f0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get filterDirty from b'{"c":{"indoorUnit":{"status":{"filterDirty":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=00805bf2343936333f610adc47cc3b88309c46065fced9d071fd19a30ab073f3 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee70b0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get defrost from b'{"c":{"indoorUnit":{"status":{"defrost":{}}}}}': {}
Main House: Error retrieving status from {}: 'r'
>>> unit = kumos['Upstairs']
>>> print(kumos)
{'Main House': <pykumo.py_kumo.PyKumo object at 0x104de7800>, 'Upstairs': <pykumo.py_kumo.PyKumo object at 0x104b8c620>}
>>> unit.update_status()
True
>>> pp.pprint(unit.__dict__)
{'_address': '10.88.1.101',
'_last_reboot': None,
'_last_status_update': 298756.582687708,
'_mhk2': {'status': {'indoorHumid': None,
'outdoorHumid': None,
'outdoorTemp': None}},
'_name': 'Upstairs',
'_profile': {'extendedTemps': True,
'hasDefrost': True,
'hasFanSpeedAuto': True,
'hasHotAdjust': True,
'hasInitialSettings': False,
'hasModeAuto': False,
'hasModeDry': False,
'hasModeHeat': True,
'hasModeTest': False,
'hasModeVent': True,
'hasStandby': True,
'hasVaneDir': True,
'hasVaneSwing': True,
'maximumSetPoints': {'auto': 31, 'cool': 31, 'heat': 31},
'minimumSetPoints': {'auto': 16, 'cool': 16, 'heat': 10},
'numberOfFanSpeeds': 5,
'runState': 'normal',
'usesSetPointInDryMode': True,
'wifiRSSI': -54},
'_security': {...},
'_sensors': [],
'_serial': '2334P008J100095F',
'_status': {'activeThermistor': 'unset',
'defrost': False,
'fanSpeed': 'auto',
'filterDirty': False,
'hotAdjust': False,
'humidTest': 0,
'mode': 'cool',
'roomTemp': 19,
'runTest': 0,
'spCool': 19,
'spHeat': 20,
'standby': False,
'tempSource': 'unset',
'vaneDir': 'vertical'},
'_timeouts': (1.2, 8.0)}
>>> unit = kumos['Main House']
>>> unit.update_status()
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=96836f8f063336649e4987f85ea11fb0ff6d19235a35ec87d269568eae2fa5b0 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105ee68d0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=cb05637b96d459800b19bd715492224c2f1b1fa1769a51e534385157338dec46 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f44440>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get mode from b'{"c":{"indoorUnit":{"status":{"mode":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=e0e94036d60ce564e0df2e773d7ed686d0a7522d2c6126d78bee995d14a3dfa9 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f45520>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get standby from b'{"c":{"indoorUnit":{"status":{"standby":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=ad51b96fff7aa388cfbfd58069bd7aeca0ea3556b2b65bbd8a013cae62e0d725 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f17d40>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get spHeat from b'{"c":{"indoorUnit":{"status":{"spHeat":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=1ac0134e70b0595b2d65bcd96860f6eb7f53f5a57a7cfd2b9783b688f0a4d129 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f450d0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get spCool from b'{"c":{"indoorUnit":{"status":{"spCool":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=318a4a9ae2a4079232295dd735a96404e5d7a6350b2ba998bd5da43811dad9d1 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f46450>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get roomTemp from b'{"c":{"indoorUnit":{"status":{"roomTemp":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=018bd20dfba9a5b5ca230dd9c4ec97a941441e894d7207a7a0433af494508e02 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f17890>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get fanSpeed from b'{"c":{"indoorUnit":{"status":{"fanSpeed":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=8fbad697d2aa8556c323750a7cd51593f9003b5975476910e6b0f07cee1758a3 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f45fd0>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get vaneDir from b'{"c":{"indoorUnit":{"status":{"vaneDir":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=deece0e68da7a6bccee378ed491b9ff738c03a32de3837f7a8623ef218bbcfa9 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f44a70>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get filterDirty from b'{"c":{"indoorUnit":{"status":{"filterDirty":{}}}}}': {}
Timeout issuing request http://10.88.1.100/api: HTTPConnectionPool(host='10.88.1.100', port=80): Max retries exceeded with url: /api?m=00805bf2343936333f610adc47cc3b88309c46065fced9d071fd19a30ab073f3 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x105f17860>, 'Connection to 10.88.1.100 timed out. (connect timeout=1.2)'))
Main House: Did not get defrost from b'{"c":{"indoorUnit":{"status":{"defrost":{}}}}}': {}
Main House: Error retrieving status from {}: 'r'
False
>>> pp.pprint(unit.__dict__)
{'_address': '10.88.1.100',
'_last_reboot': None,
'_last_status_update': 298588.480950083,
'_name': 'Main House',
'_profile': {},
'_security': {...},
'_sensors': [],
'_serial': '2334P008J100089F',
'_status': {},
'_timeouts': (1.2, 8.0)}
Hey, this plugin has been rock solid since I set it up in January. I know Kumo Cloud is flaky, but things have become very unreliable. I see the following, so I'm not sure if it is part of the issue.
The problem is that the head units either go "unavailable" or the heat mode disappears completely from HA, so when the units are set to fire, the heat isn't working.
Logger: homeassistant.helpers.frame
Source: helpers/frame.py:77
First occurred: 7:34:59 AM (1 occurrences)
Last logged: 7:34:59 AM
Detected integration that called async_setup_platforms instead of awaiting async_forward_entry_setups; this will fail in version 2023.3. Please report issue to the custom integration author for kumo using this method at custom_components/kumo/__init__.py, line 102: hass.config_entries.async_setup_platforms(entry, PLATFORMS)
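For reference, the change Home Assistant is asking for is to await the newer forwarding call in the integration's setup. A simplified sketch (the real async_setup_entry in custom_components/kumo/__init__.py does more than this, and the platform list here is assumed):

from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant

PLATFORMS = ["climate"]  # platforms the integration forwards (assumed here)

async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
    # Old, deprecated call that triggers the warning above:
    #     hass.config_entries.async_setup_platforms(entry, PLATFORMS)
    # New form, which must be awaited:
    await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
    return True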