homecb opened this issue 2 years ago
I have the same issue, and I know of at least 10 other people with the same problem.
I have the same error in my log
Logger: homeassistant
Source: custom_components/localtuya/common.py:117
Integration: LocalTuya integration (documentation, issues)
First occurred: 08:57:17 (156 occurrences)
Last logged: 09:10:12

Error doing job: Exception in callback _SelectorDatagramTransport._read_ready()
Same here:
Logger: homeassistant
Source: custom_components/localtuya/common.py:117
Integration: LocalTuya integration ([documentation](https://github.com/rospogrigio/localtuya/), [issues](https://github.com/rospogrigio/localtuya/issues))
First occurred: 10:31:33 (1572 occurrences)
Last logged: 12:42:36
```
Error doing job: Exception in callback _SelectorDatagramTransport._read_ready()
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 1026, in _read_ready
    self._protocol.datagram_received(data, addr)
  File "/config/custom_components/localtuya/discovery.py", line 70, in datagram_received
    self.device_found(decoded)
  File "/config/custom_components/localtuya/discovery.py", line 79, in device_found
    self._callback(device)
  File "/config/custom_components/localtuya/__init__.py", line 105, in _device_discovered
    entry = async_config_entry_by_device_id(hass, device_id)
  File "/config/custom_components/localtuya/common.py", line 117, in async_config_entry_by_device_id
    if device_id in entry.data[CONF_DEVICES]:
KeyError: 'devices'
```
I'm getting same error, about 27,000 occurrences in 12 hours.
Exactly the same over here, 3500 in about 5 hours so far
Same problem here. I had to rollback a recent backup. Hope you can solve it soon...
Me too. Just updated, and a crazy number of log errors is accumulating.
Does anyone know how I can rollback to the previous version to get my devices back until this is resolved? I'd like to avoid restoring my entire HA from a backup.
I had the same problem -- had to remove LocalTuya altogether and add everything back. It's working for me now.
> Same problem here. I had to rollback a recent backup. Hope you can solve it soon...
Update: Tried to update to V4 again and everything worked fine without any errors or device conflicts.
Removing localtuya is not really an option here; having to re-add 40+ devices and all their DPS is a pain.... With the errors in the logs, everything in localtuya works as expected, though.
Fortunately I only have 4 lights. Removed integration, restarted HA then re-added integration. After some trial and error with DPs I eventually got them working again. However, I'm still getting these log errors (one every 10s or so) and the 'device already configured' errors too.
I'm using the following to filter out these errors until this issue is resolved, they were making my log almost as big as my database.
```yaml
logger:
  filters:
    homeassistant:
      - '_SelectorDatagramTransport'
```
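For anyone curious how these filters behave: Home Assistant treats each filter entry as a regular expression matched against the formatted log message, and matching records are suppressed. A minimal Python sketch of that matching (the `FILTERS` list and `is_filtered` helper below are illustrative stand-ins, not Home Assistant internals):

```python
import re

# Each filter entry from the YAML is compiled as a regex and searched
# against the log message; a match means the record is dropped.
FILTERS = [re.compile("_SelectorDatagramTransport")]

def is_filtered(message: str) -> bool:
    """Return True if any filter pattern matches, i.e. the line is suppressed."""
    return any(pattern.search(message) for pattern in FILTERS)

print(is_filtered(
    "Error doing job: Exception in callback "
    "_SelectorDatagramTransport._read_ready()"
))  # True: this line would be hidden from the log
print(is_filtered("Setup of domain localtuya took 1.2 seconds"))  # False
```

Note the pattern is searched anywhere in the message, so a fragment like `_SelectorDatagramTransport` is enough to catch the whole error line.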
> I'm using the following to filter out these errors until this issue is resolved, they were making my log almost as big as my database.
>
> ```yaml
> logger:
>   filters:
>     homeassistant:
>       - '_SelectorDatagramTransport'
> ```
Amazing! I didn't know you could do that. Thanks for sharing.
Me too and I cannot go through removing and setting it up again, there are too many devices.
I'm seeing the same issue on my side.
Everything appears to be working - despite the numerous errors filling up the logs.
I'll add the log filter in.
Same issue here (log errors, but working ok)
What is happening here is:
```python
def async_config_entry_by_device_id(hass, device_id):
    """Look up config entry by device id."""
    current_entries = hass.config_entries.async_entries(DOMAIN)
    for entry in current_entries:
        if device_id in entry.data[CONF_DEVICES]:
            return entry
    return None
```
`hass.config_entries.async_entries(DOMAIN)` is likely returning entries in a state that this code doesn't expect.
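If the goal were just to stop the log spam, a defensive variant of the lookup could skip malformed entries by using `dict.get`. This is only a sketch, not the shipped code; `FakeEntry` is an illustrative stand-in for Home Assistant's `ConfigEntry`:

```python
from dataclasses import dataclass, field

CONF_DEVICES = "devices"  # the same key the integration indexes into

@dataclass
class FakeEntry:
    """Illustrative stand-in for a Home Assistant ConfigEntry."""
    data: dict = field(default_factory=dict)

def config_entry_by_device_id(entries, device_id):
    """Look up a config entry by device id, tolerating entries whose
    data dict lacks the 'devices' key (the shape that raises KeyError)."""
    for entry in entries:
        # .get() returns {} instead of raising KeyError: 'devices'
        if device_id in entry.data.get(CONF_DEVICES, {}):
            return entry
    return None

good = FakeEntry(data={CONF_DEVICES: {"bf5abc": {"friendly_name": "lamp"}}})
bad = FakeEntry(data={"region": "eu"})  # malformed: no "devices" key

assert config_entry_by_device_id([bad, good], "bf5abc") is good
assert config_entry_by_device_id([bad], "bf5abc") is None
```

That only hides the symptom, of course; it doesn't explain how a malformed entry got there in the first place.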
If I had to guess, this is something like:

1) `async def async_migrate_entry(hass, config_entry: ConfigEntry)` not having run successfully to populate:

```python
new_data[CONF_DEVICES] = {
    config_entry.data[CONF_DEVICE_ID]: config_entry.data.copy()
}
```
or 2) Something has created entries in `hass.config_entries.async_entries('localtuya')` that are malformed (missing the data dict), i.e. this never got called when creating:

```python
async def _create_entry(self, user_input):
    """Register new entry."""
    # if self._async_current_entries():
    #     return self.async_abort(reason="already_configured")
    await self.async_set_unique_id(user_input.get(CONF_USER_ID))
    user_input[CONF_DEVICES] = {}
```
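The second hypothesis can be checked directly against a diagnostics download. A small sketch that flags entry data dicts missing the `devices` key (the `entries` list below is made-up sample data shaped like the diagnostics, not real output):

```python
# Made-up sample data shaped like ConfigEntry.data from a diagnostics download.
entries = [
    {"region": "eu", "username": "localtuya", "devices": {"bf5abc": {}}},
    {"region": "eu", "username": "localtuya"},  # malformed: no "devices" key
]

def malformed(entries):
    """Indices of entries that would trigger KeyError: 'devices'."""
    return [i for i, data in enumerate(entries) if "devices" not in data]

print(malformed(entries))  # [1]: only the second entry is malformed
```

Any non-empty result would point at an entry that was created or migrated without the `devices` dict.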
1) When you download the diagnostics, what do you see?
I do not experience the same issue, and can see data structures like:

```
"data": {
    "region": "eu",
    "client_id": "....",
    "client_secret": "...",
    "user_id": "....",
    "username": "localtuya",
    "devices": {  # I suspect many of you will not have this key.
        "bf5****": {
```
2) Do any of you see (like I do) warnings about `Platform localtuya does not generate unique IDs.`? I'm wondering if that is causing an exception to be raised before `_create_entry` gets to the right spots, or something similar.
Logger: homeassistant.components.sensor
Source: helpers/entity_platform.py:620
Integration: Sensor ([documentation](https://www.home-assistant.io/integrations/sensor), [issues](https://github.com/home-assistant/home-assistant/issues?q=is%3Aissue+is%3Aopen+label%3A%22integration%3A+sensor%22))
First occurred: 1:58:54 PM (8 occurrences)
Last logged: 1:58:58 PM
```
Platform localtuya does not generate unique IDs. ID local_bf5******_20 already exists - ignoring sensor.localtuya_fridge_voltage
Platform localtuya does not generate unique IDs. ID local_bf5******_19 already exists - ignoring sensor.fridge_current_consumption
```
> I do not experience the same issue, and can see data structures like:
>
> ```
> "data": {
>     "region": "eu",
>     "client_id": "....",
>     "client_secret": "...",
>     "user_id": "....",
>     "username": "localtuya",
>     "devices": {  # I suspect many of you will not have this key.
>         "bf5****": {
> ```
I see the data structure and I do have the devices key, within that there are 3 devices listed that correspond to the 3 devices I am using with localtuya.
> Do any of you see (like I do) warnings about `Platform localtuya does not generate unique IDs.`? Wondering if that is causing an exception to be raised before `_create_entry` gets to the right spots or something similar.
I am not seeing warnings about localtuya not generating unique IDs. I believe I saw that warning some time in the past, perhaps just prior to LocalTuya version 4.0 coming out. When 4.0 came out, I ended up removing all my YAML and adding my 3 devices back manually in the GUI, since the automatic conversion during the 4.0 upgrade did not work for me. I have not seen any 'not generating unique IDs' warnings since before all that happened.
Thanks @prndlrm. Can I get you to carefully check the error in your logs? I found I was experiencing something similar but different in #968, where the error was KeyError: (a specific key).
While this one does seem to be about devices not being set, the fact that "devices" is present in your diagnostics means I wouldn't expect the lookup of that key to fail, unless some process adds/removes it at a different time to when you see the error.
Hi @CloCkWeRX, I turned off my log filter to capture the error:
```
2022-07-25 07:04:42 ERROR (MainThread) [homeassistant] Error doing job: Exception in callback _SelectorDatagramTransport._read_ready()
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.10/asyncio/selector_events.py", line 1027, in _read_ready
    self._protocol.datagram_received(data, addr)
  File "/config/custom_components/localtuya/discovery.py", line 70, in datagram_received
    self.device_found(decoded)
  File "/config/custom_components/localtuya/discovery.py", line 79, in device_found
    self._callback(device)
  File "/config/custom_components/localtuya/__init__.py", line 105, in _device_discovered
    entry = async_config_entry_by_device_id(hass, device_id)
  File "/config/custom_components/localtuya/common.py", line 117, in async_config_entry_by_device_id
    if device_id in entry.data[CONF_DEVICES]:
KeyError: 'devices'
```
@CloCkWeRX I don't have any "Platform localtuya does not generate unique IDs" messages in my logs.
My diagnostics data field looks like this:
"data": {
"region": "eu",
"client_id": "...",
"client_secret": "...",
"user_id": "...",
"username": "localtuya",
"no_cloud": true,
"devices": {
"bf2c22e644ab85cxxxxxug": {
"friendly_name": "Interrupteurs escaliers et garage",
"local_key": "26d...930",
"host": "192.168.1.xxx",
"device_id": "xxxx",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "xxxx",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"7 (value: 0)",
"8 (value: 0)",
"16 (value: True)"
],
"entities": [
{
"id": 1,
"friendly_name": "Lumi\u00e8res escalier",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "Lumi\u00e8res garage",
"platform": "switch"
}
]
},
"bfa7bxxxxiypj": {
"friendly_name": "Interrupteurs bricolage",
"local_key": "c4a...d70",
"host": "xxxx",
"device_id": "bfa7bc8xxxxfdiypj",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "tazmxxxxeszbb",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"7 (value: 0)",
"8 (value: 0)",
"16 (value: True)"
],
"entities": [
{
"id": 1,
"friendly_name": "Lumi\u00e8res bricolage 1",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "Lumi\u00e8res bricolage 2",
"platform": "switch"
}
]
},
"005xxxx87": {
"friendly_name": "Lumi\u00e8res terrasse sam",
"local_key": "d13...f39",
"host": "192.168xxxx",
"device_id": "0052xxxxe8a2187",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "keyxxxxy43jp",
"dps_strings": [
"1 (value: False)",
"9 (value: 0)"
],
"entities": [
{
"id": 1,
"friendly_name": "Lumi\u00e8re terrase sam",
"platform": "switch"
}
]
},
"322xxxxe0f9": {
"friendly_name": "Lumi\u00e8res entr\u00e9e et terrasse",
"local_key": "8a8...289",
"host": "192.168xxxx",
"device_id": "322xxxxce8ae0f9",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "key5xxxx43jp",
"dps_strings": [
"1 (value: True)",
"2 (value: True)",
"9 (value: 0)",
"10 (value: 0)"
],
"entities": [
{
"id": 1,
"friendly_name": "Lumi\u00e8res entr\u00e9e",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "Lumi\u00e8res terrasse",
"platform": "switch"
}
]
},
"3220xxxxda6f": {
"friendly_name": "Interrupteur escalier mezzanine",
"local_key": "d37...29a",
"host": "192xxxx",
"device_id": "32xxxx8ada6f",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "key5xxxxjp",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"9 (value: 0)",
"10 (value: 0)"
],
"entities": [
{
"id": 1,
"friendly_name": "Lumi\u00e8re escalier mezzanine",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "Bonjour Bonne nuit",
"platform": "switch"
}
]
},
"3220xxxxe29d": {
"friendly_name": "Interrupteurs lumi\u00e8res cuisines",
"local_key": "ca5...038",
"host": "192.168xxxx",
"device_id": "3220xxxxae29d",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "keyxxxx43jp",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"9 (value: 0)",
"10 (value: 0)"
],
"entities": [
{
"id": 1,
"friendly_name": "LEDs cuisine",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "Spots cuisine",
"platform": "switch"
}
]
},
"32206xxxx02": {
"friendly_name": "Interrupteurs lumi\u00e8res sam",
"local_key": "c96...7a1",
"host": "192.xxxx",
"device_id": "3220xxxx202",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "keyxxxx3jp",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"9 (value: 0)",
"10 (value: 0)"
],
"entities": [
{
"id": 1,
"friendly_name": "Spots sam",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "LEDs sam",
"platform": "switch"
}
]
},
"bfc55xxxxf7uqo": {
"friendly_name": "VMC",
"local_key": "ff0...0bb",
"host": "192.168.1.10",
"device_id": "bfc5xxxx5f7uqo",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "key7xxxxa3x9",
"dps_strings": [
"1 (value: False)",
"2 (value: False)",
"7 (value: 0)",
"8 (value: 0)",
"14 (value: memory)",
"17 (value: )",
"18 (value: )"
],
"entities": [
{
"id": 1,
"friendly_name": "VMC lent",
"platform": "switch"
},
{
"id": 2,
"friendly_name": "VMC Rapide",
"platform": "switch"
}
]
},
"bf03xxxxf3colwc": {
"friendly_name": "Volet roulant sam",
"local_key": "2c6...ca9",
"host": "192.168.1.32",
"device_id": "bf031xxxxcolwc",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "lidxxxxiqw",
"dps_strings": [
"1 (value: stop)",
"2 (value: 100)",
"8 (value: forward)",
"10 (value: 28)"
],
"entities": [
{
"id": 1,
"friendly_name": "Volet roulant sam",
"positioning_mode": "timed",
"position_inverted": false,
"span_time": 28.0,
"commands_set": "open_close_stop",
"platform": "cover"
},
{
"id": 2,
"friendly_name": "Volet sam position",
"min_value": 0.0,
"max_value": 100000.0,
"platform": "number"
},
{
"id": 8,
"friendly_name": "Volet sam forward",
"state_on": "True",
"state_off": "False",
"platform": "binary_sensor"
},
{
"id": 10,
"friendly_name": "Volet sam tempo",
"min_value": 0.0,
"max_value": 100000.0,
"platform": "number"
}
]
},
"bfd0xxxxe5lpes": {
"friendly_name": "Temp\u00e9rature salon",
"local_key": "b14...4bb",
"host": "192.xxxx",
"device_id": "bfd0exxxxe5lpes",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "7akxxxxxdsib",
"dps_strings": [
"1 (value: 211)",
"2 (value: 37)"
],
"entities": [
{
"id": 1,
"friendly_name": "Temp\u00e9rature salon",
"device_class": "temperature",
"unit_of_measurement": "\u00b0C",
"scaling": 0.1,
"platform": "sensor"
},
{
"id": 2,
"friendly_name": "Humidit\u00e9e salon",
"device_class": "humidity",
"unit_of_measurement": "%",
"scaling": 1.0,
"platform": "sensor"
}
]
},
"bfbxxxxdctiz": {
"friendly_name": "Temp\u00e9rature entr\u00e9e",
"local_key": "348...fcc",
"host": "192.168.1.22",
"device_id": "bfbaxxxx7dctiz",
"protocol_version": "3.3",
"scan_interval": 30,
"product_key": "7axxxxkdsib",
"dps_strings": [
"1 (value: 181)",
"2 (value: 43)"
],
"entities": [
{
"id": 1,
"friendly_name": "Temp\u00e9rateur entr\u00e9e",
"device_class": "temperature",
"unit_of_measurement": "\u00b0C",
"scaling": 0.1,
"platform": "sensor"
},
{
"id": 2,
"friendly_name": "Humidit\u00e9e entr\u00e9e",
"device_class": "humidity",
"unit_of_measurement": "%",
"scaling": 1.0,
"platform": "sensor"
}
]
},
"bfe4xxxx6tb": {
"friendly_name": "Prise radiateur mezzanine",
"host": "192xxxx7",
"local_key": "673...348",
"protocol_version": "3.3",
"entities": [
{
"friendly_name": "Prise radiateur mezzanine",
"id": 1,
"platform": "switch"
},
{
"friendly_name": "Prise radiateur mezzanine power",
"unit_of_measurement": "W",
"device_class": "power",
"scaling": 0.1,
"id": 19,
"platform": "sensor"
}
],
"device_id": "bfe407813ac1646fxxxx5236tb",
"dps_strings": [
"1 (value: True)",
"9 (value: 0)",
"17 (value: 2)",
"18 (value: 8811)",
"19 (value: 22040)",
"20 (value: 2458)",
"21 (value: 1)",
"22 (value: 657)",
"23 (value: 32483)",
"24 (value: 19617)",
"25 (value: 2150)",
"26 (value: 0)",
"38 (value: memory)",
"40 (value: relay)",
"41 (value: False)",
"42 (value: )",
"43 (value: )",
"44 (value: )"
],
"product_key": "keyjuxxxxyhan"
},
"bfcc80xxxxrzmn": {
"friendly_name": "Prise t\u00e9l\u00e9",
"host": "192.168.1.58",
"local_key": "399...e1a",
"protocol_version": "3.3",
"entities": [
{
"friendly_name": "Prise t\u00e9l\u00e9",
"id": 1,
"platform": "switch"
},
{
"friendly_name": "Prise t\u00e9l\u00e9 power",
"unit_of_measurement": "W",
"device_class": "power",
"scaling": 0.1,
"id": 19,
"platform": "sensor"
}
],
"device_id": "bfcc8xxxxbrzmn",
"dps_strings": [
"1 (value: True)",
"9 (value: 0)",
"17 (value: 17)",
"18 (value: 199)",
"19 (value: 349)",
"20 (value: 2500)",
"21 (value: 1)",
"22 (value: 657)",
"23 (value: 32258)",
"24 (value: 19666)",
"25 (value: 2150)",
"26 (value: 0)",
"38 (value: memory)",
"40 (value: relay)",
"41 (value: False)",
"42 (value: )",
"43 (value: )",
"44 (value: )"
],
"product_key": "kexxxxyhan"
}
},
"updated_at": "1659291336009",
"cloud_devices": {}
}
I don't see any weird entries.
I hope this helps.
I am experiencing the same issue, and for many months now.
I'm getting this too. Any fix other than reinstalling and setting everything up from scratch?
```
2023-01-01 14:57:56.018 ERROR (MainThread) [homeassistant] Error doing job: Exception in callback _SelectorDatagramTransport._read_ready()
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.10/asyncio/selector_events.py", line 1027, in _read_ready
    self._protocol.datagram_received(data, addr)
  File "/config/custom_components/localtuya/discovery.py", line 70, in datagram_received
    self.device_found(decoded)
  File "/config/custom_components/localtuya/discovery.py", line 79, in device_found
    self._callback(device)
  File "/config/custom_components/localtuya/__init__.py", line 105, in _device_discovered
    entry = async_config_entry_by_device_id(hass, device_id)
  File "/config/custom_components/localtuya/common.py", line 125, in async_config_entry_by_device_id
    if device_id in entry.data[CONF_DEVICES]:
KeyError: 'devices'
```
I'm running HA 2022.12.8, HACS 1.29.0 and LocalTuya 4.1.1.
Errors are appearing in the logs multiple times every minute.
I don't think there's a fix. I just have the error filtered out of my logs (as suggested earlier in this thread) and forgot all about it.
Maybe I'm going to finally learn Python. 😆
I've had this error since I added a Lidl Zigbee gateway. When I disconnect the Zigbee gateway, the error stops.
I have the same error, has anyone found a solution?
I have the same error and also some whining about doing bad things to Ingress.
I had added a new bulb to replace a failed one, and then mistakenly removed the one I had just added to LocalTuya. I tried every remove-and-add-back solution I could find, but each time it brought me to the screen to add DPID 20 for the bulb and would not let me enter the required values. So, luckily, last night was backup night, and tonight is restore night.
The problem

After upgrading to 4.0, I started seeing the following error in the logs:

```
Error doing job: Exception in callback _SelectorDatagramTransport._read_ready()
```

Home Assistant traceback/logs