Open screamingbaby opened 2 years ago
Hallo, I can confirm exactly the behavior you describe. I have about 15 Tuya devices, used for external window covers, and approximately 150 other devices in my house (mostly eWeLink). Two days ago we had a power loss, and after the grid came back online all my Tuya (Tuya-Local) devices are gray, the LocalTuya integration is orange, and all Tuya entities have a red mark in the list. The Smart Life app works normally after the new IP leases, and in HA a message popped up with a new discovery: the official Tuya integration wanting to discover all the "new" Tuya devices.
I guess I will have to set "ignore" for the official Tuya integration and reset the LocalTuya integration again. Then I guess I will have to make a permanent IP list like you described.
The reason I chose LocalTuya instead of the official integration was the long delay in status updates for my covers, which made it impossible to set up automations for them (the status was "unknown" the whole time).
LocalTuya worked quite fine until this power loss.
I am curious why LocalTuya can't work with dynamic IPs, or did I miss something when setting it up for the first time?
thanks for any hints. Willy
It's not the IP address; LocalTuya is smart enough to find the new IP of a device by its deviceid using the network discovery broadcast.
All of my Tuya devices actually change IP after a prolonged power outage (long enough that the last leased IP expired), and LocalTuya managed to recover from this. I know this because the IP addresses in the diagnostic logs differ between the old state and the new one.
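For anyone curious how that recovery can work in principle, here is a minimal sketch (not LocalTuya's actual code) of re-keying a device's stored IP by its deviceid when a discovery broadcast arrives. The payload shape and the `update_device_ip` helper are simplified assumptions; real Tuya broadcasts arrive over UDP and are encrypted on the newer port, which is omitted here.

```python
import json


def update_device_ip(devices, broadcast_payload):
    """Update a device's stored IP from a discovery broadcast.

    devices: dict mapping gwId (deviceid) -> config dict with an 'ip' key.
    broadcast_payload: decrypted JSON text from a discovery packet, assumed
    here to look like {"gwId": "...", "ip": "..."} (simplified).
    Returns True if a known device's IP actually changed.
    """
    info = json.loads(broadcast_payload)
    dev = devices.get(info.get("gwId"))
    if dev is not None and dev.get("ip") != info.get("ip"):
        # Same deviceid, new DHCP lease: follow the device to its new address.
        dev["ip"] = info["ip"]
        return True
    return False
```

The key point is that the deviceid, not the IP, is the stable identity, so a lease change alone should not break the connection.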
OK then, so why did it stop working? It shouldn't be a conflict with the (cloud) Tuya integration; I have read elsewhere that both integrations were confirmed to work simultaneously, with two sets of entities created.
So why, after this grid power loss and recovery, did LocalTuya stop working while the official (cloud) Tuya integration started to appear (trying to discover new devices)?
Thx. Willy
If it helps, I have found this log entry:
```
Logger: homeassistant.components.websocket_api.http.connection
Source: config_entries.py:1052
Integration: Home Assistant WebSocket API (documentation, issues)
First occurred: 08:33:41 (2 occurrences)
Last logged: 08:33:55

[2586400008] The config entry localtuya (localtuya) with entry_id xxx cannot be unloaded because it is not in a recoverable state (ConfigEntryState.FAILED_UNLOAD)
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 200, in handle_call_service
    await hass.services.async_call(
  File "/usr/src/homeassistant/homeassistant/core.py", line 1738, in async_call
    task.result()
  File "/usr/src/homeassistant/homeassistant/core.py", line 1775, in _execute_service
    await cast(Callable[[ServiceCall], Awaitable[None]], handler.job.target)(
  File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 746, in admin_handler
    await result
  File "/config/custom_components/localtuya/__init__.py", line 84, in _handle_reload
    await asyncio.gather(*reload_tasks)
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 1068, in async_reload
    unload_result = await self.async_unload(entry_id)
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 1052, in async_unload
    raise OperationNotAllowed(
homeassistant.config_entries.OperationNotAllowed: The config entry localtuya (localtuya) with entry_id xxx cannot be unloaded because it is not in a recoverable state (ConfigEntryState.FAILED_UNLOAD)
```
That doesn't really help either. At first glance you're having difficulty running the LocalTuya integration itself; you're not yet at the part where LocalTuya connects to your IoT devices. It would be much better to enable a debug log so you can see which exact part you're having an issue with; check the wiki for more information.
Thanks for the fast reply, and sorry for bothering here. I have just solved this with the following steps: 1) updated HA to the latest version (previously I had last week's version, so I didn't suspect this might be the problem), 2) reinstalled a few covers via the standard LocalTuya install/update procedure.
Now it works again. I only hope this will not occur too often. Thanks to all for the previous help.
Willy. PS: for me this issue might be closed (not sure about screamingbaby).
After a power loss to HA and my house in general, LocalTuya starts, but everything is greyed out and it lingers in a semi-broken state. The log fills rapidly with "cannot connect" errors like the ones below.
This error originated from a custom integration.
```
Logger: custom_components.localtuya.common
Source: custom_components/localtuya/common.py:219
Integration: LocalTuya (documentation, issues)
First occurred: 6:27:43 PM (73 occurrences)
Last logged: 6:27:57 PM

[eb0...zsj] Connect to 192.168.1.49 failed
[eb4...0hc] Connect to 192.168.1.80 failed
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/locks.py", line 390, in acquire
    await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/config/custom_components/localtuya/common.py", line 202, in _make_connection
    status = await self._interface.status()
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 507, in status
    status = await self.exchange(STATUS)
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 486, in exchange
    msg = await self.dispatcher.wait_for(seqno)
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 259, in wait_for
    await asyncio.wait_for(self.listeners[seqno].acquire(), timeout=timeout)
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/config/custom_components/localtuya/common.py", line 219, in _make_connection
    status = await self.interface.status()
AttributeError: 'NoneType' object has no attribute 'status'
```
I am not the smartest guy at this stuff, but I am fairly good at networking. After running some Wireshark captures and digging, I believe the cause of my issue is DHCP leases changing, combined with the way LocalTuya handles offline devices. Here is what I think is happening: when power fails, everything renews its DHCP lease. At any given time I have a selection of lights that are offline because the physical switch was used. When power is restored, the devices that were in an off state do not request DHCP right away, so an offline device whose entry has not been updated in LocalTuya can end up occupying the same old address as a device that has just renewed.

For example, 'garage light left' had the address 192.168.1.54 prior to the power loss and was offline at the time. 'Porch light 3' had the address 192.168.1.23 prior to the power loss and was online when power was restored. 'Porch light 3' renews its lease at power-on, once the DHCP service is back, and pulls the address 192.168.1.54, which is pushed to LocalTuya via broadcast. Because 'garage light left' is offline, there is no renewal and thus no broadcast. So LocalTuya now has 192.168.1.54 as the IP for 'porch light 3', while the same address is still stored for the offline 'garage light left'. As best I can tell, that conflict keeps the integration from starting.

This didn't manifest until I had a lot of devices on my network. I have roughly 200 devices on a /24 network, so on renewal of all the addresses, conflicts with offline devices become likely. I am going to set reservations for the lights this time to see if things behave better after a power loss.
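The scenario above can be checked mechanically. Here is a hypothetical sketch (the `configured`/`discovered` maps and the function name are mine, not LocalTuya's) that flags a stored address for a silent device colliding with an address another device has just claimed via broadcast:

```python
def find_ip_conflicts(configured, discovered):
    """Detect stale stored IPs colliding with freshly leased ones.

    configured: gwId -> IP as stored by the integration's config entries.
    discovered: gwId -> IP seen in recent discovery broadcasts.
    Returns (offline_gwId, online_gwId) pairs that share one address.
    """
    conflicts = []
    for off_id, off_ip in configured.items():
        if off_id in discovered:
            continue  # device is broadcasting, so its stored IP is current
        # Device is silent: its stored IP may have been reassigned by DHCP.
        for on_id, on_ip in discovered.items():
            if on_ip == off_ip:
                conflicts.append((off_id, on_id))
    return conflicts
```

Running this against the example in the comment (garage light offline on .54, porch light renewed onto .54) would report exactly one conflict pair.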
Can anyone confirm that behavior? Or can anyone confirm that localtuya would indeed explode if that conflict exists?
I can confirm the same behavior, so reserving those LocalTuya IPs should avoid that problem entirely, correct?
Not really sure if it is needed; in the end I didn't reserve permanent IPs. The procedure I described above solved this for me without reserving them. W.
Just enable debug logging for LocalTuya and you will see what really happens here. A static IP will not fix it, because LocalTuya waits for the device to appear via discovery before attempting to reconnect.
If I'm not mistaken, LocalTuya will reattempt the connection to a Tuya device 3 times; if that still fails, it stops trying and waits for the device to reappear on the network via the Tuya device broadcast. When discovery sees the deviceid reappear, it attempts to reconnect to the device, using its newly assigned IP address.
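If the behavior is as described, the control flow could be sketched roughly like this. `MAX_RETRIES`, `connect`, and `wait_for_broadcast` are hypothetical stand-ins, not LocalTuya's actual API; this is only a sketch of the retry-then-wait pattern:

```python
import asyncio

MAX_RETRIES = 3  # attempt count described above (assumption)


async def connect_with_fallback(connect, wait_for_broadcast):
    """Try connecting a few times, then fall back to discovery.

    connect: coroutine function that attempts the device connection,
    raising OSError on failure.
    wait_for_broadcast: coroutine function that resolves once a discovery
    broadcast carrying the device's deviceid is seen again.
    """
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await connect()
        except OSError:
            await asyncio.sleep(0.01 * attempt)  # short backoff between tries
    # Give up polling: reconnect only once the device reappears on the
    # network, at which point discovery also carries its current IP.
    await wait_for_broadcast()
    return await connect()
```

This explains why a device that stays offline (physical switch off) never triggers a reconnect: the broadcast it would need to send never arrives.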
thx for explaining :-)
OK, thank you for the reply. So do you have any other idea what might be the reason for this issue? Thx. Willy
Maybe this will help. After a device firmware update, my 2 sockets were grayed out in HA and I found this log for each:
```
2022-10-18 08:08:39.669 ERROR (MainThread) [custom_components.localtuya.common] [bfc...zmn] Connect to 192.168.1.58 failed
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/locks.py", line 390, in acquire
    await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/config/custom_components/localtuya/common.py", line 170, in _make_connection
    status = await self._interface.status()
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 481, in status
    status = await self.exchange(STATUS)
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 460, in exchange
    msg = await self.dispatcher.wait_for(seqno)
  File "/config/custom_components/localtuya/pytuya/__init__.py", line 247, in wait_for
    await asyncio.wait_for(self.listeners[seqno].acquire(), timeout=timeout)
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
```
The devices are working normally with Smart Home, and they don't work with the official Tuya integration either. So I guess it is not a LocalTuya issue but more a Tuya issue in general.
My two devices respond to ping. All my devices have static DHCP configuration.