madchap opened this issue 1 year ago
Same here with 3 ZIGBEE_DOOR_AND_WINDOW_SENSOR
I have the same issue with a temperature sensor connected to a TH10, in local mode.
I hadn't noticed this until today, when I put my Home Assistant Raspberry Pi on a mini UPS that keeps it powered on all the time. I have multiple power outages per day in my country, and I put the Pi on a UPS so that I didn't have to wait each time for the Pi to boot before regaining control of my Sonoff lights and devices when power is restored. Now the problem is that when power is restored and my Sonoff devices power up and come back online, the integration does not inform Home Assistant that their state changed from unavailable to on. Previously, without the UPS, the Pi would power on alongside the Sonoff devices and presumably reload the integration on boot, so everything would refresh on each Pi boot. Now that my Pi stays on 24/7, the failure of the Sonoff integration to reconnect is obvious.
After 2 days I can say that my problem is actually quite different, although the error is the same:
"exception": "Traceback (most recent call last):\n File \"/config/custom_components/sonoff/core/ewelink/cloud.py\", line 318, in connect\n raise Exception(resp)\nException: {'error': 406, 'apikey': 'b1552c13-f6d1-4993-b7b8-fe81f07407a0', 'sequence': '1675840624863', 'actionName': 'userOnline'}\n",
"count": 3,
"first_occurred": 1675840624.8961823
}
],
"device": {
"uiid": 3026,
"params": {
"bindInfos": "***",
"subDevId": "87ae9028004b12003026",
"parentid": "1001703423",
"lock": 0,
"trigTime": "1675837586829",
"battery": 100,
"supportPowConfig": 1
},
"model": "ZIGBEE_DOOR_AND_WINDOW_SENSOR",
"online": true,
"localtype": null,
"host": null,
"deviceid": "a48004640f"
}
The issue occurs only after a restart; restarting again, it works fine.
This is actually something I encounter too. Many power outages caused a lot of issues, so I put in a UPS as well, and I am seeing the sensor unavailable until I reload the integration.
ZBBridge works only via cloud. There may be a problem with automatic reconnection to the cloud when the cloud returns errors. I will check this point.
I think I have the same problem since upgrading to v3.4.0 yesterday. Approximately 1 hour after restarting the integration, the entities suddenly become unavailable (I have an RF Bridge 433 and a remote), and I found this in the debug log (the device goes offline at the end):
2023-02-08 08:51:59 [D] SysInfo: {'installation_type': 'Home Assistant OS', 'version': '2023.2.3', 'dev': False, 'hassio': True, 'virtualenv': False, 'python_version': '3.10.7', 'docker': True, 'arch': 'x86_64', 'timezone': 'Europe/Paris', 'os_name': 'Linux', 'os_version': '5.15.90', 'user': 'root', 'supervisor': '2023.01.1', 'host_os': 'Home Assistant OS 9.5', 'docker_version': '20.10.22', 'chassis': 'vm', 'sonoff_version': '3.4.0 (5406fa7)'}
2023-02-08 08:51:59 [D] 1 devices loaded from Cloud
2023-02-08 08:51:59 [D] 1000f22239 UIID 0028 | {'version': 8, 'sledOnline': 'on', 'init': 1, 'fwVersion': '3.5.2', 'rssi': -70, 'setState': 'arm', 'rfList': [{'rfChl': 0, 'rfVal': '1E0000FA0302929148'}, {'rfChl': 1, 'rfVal': '1DF6010E02F892914C'}, {'rfChl': 2, 'rfVal': '1E0A00FA0302929144'}, {'rfChl': 3, 'rfVal': '1E1401040302929149'}, {'rfChl': 4, 'rfVal': '1DEC011802EE929142'}, {'rfChl': 5, 'rfVal': '1E1E01040302929145'}, {'rfChl': 6, 'rfVal': '1E6401040302929141'}, {'rfChl': 7, 'rfVal': '1E50010E02F8929143'}, {'rfChl': 8, 'rfVal': '1E32010E02F8DF1518'}, {'rfChl': 9, 'rfVal': '1E5A010E02EEDF151C'}, {'rfChl': 10, 'rfVal': '1E2801040302DF1511'}, {'rfChl': 11, 'rfVal': '1E8C010E02F8DF1513'}, {'rfChl': 12, 'rfVal': '1E64010E02F8DF1514'}, {'rfChl': 13, 'rfVal': '1E78010E02F8DF1519'}], 'only_device': {'ota': 'success'}, 'cmd': 'trigger', 'rfChl': 8, 'rfTrig0': '2022-05-04T18:37:05.000Z', 'rfTrig4': '2023-01-27T14:06:10.000Z', 'rfTrig1': '2022-09-07T20:07:32.000Z', 'rfTrig3': '2022-11-06T14:19:06.000Z', 'rfTrig2': '2022-12-02T18:56:51.000Z', 'rfTrig5': '2023-01-25T18:38:13.000Z', 'rfTrig7': '2023-01-23T08:07:48.000Z', 'rfTrig6': '2021-09-24T16:39:56.000Z', 'rfTrig8': '2023-02-07T21:11:34.000Z', 'rfTrig9': '2023-02-07T21:11:27.000Z', 'rfTrig10': '2022-10-11T05:52:41.000Z', 'rfTrig11': '2022-04-03T12:40:42.000Z', 'rfTrig12': '2023-02-07T23:31:06.000Z', 'rfTrig13': '2023-02-07T23:31:08.000Z'}
2023-02-08 08:51:59 [D] LOCAL mode start
2023-02-08 08:51:59 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig13': '2023-02-07T23:31:08.000Z'} | 21
2023-02-08 08:52:02 [D] Add 3 entities
2023-02-08 08:52:24 [D] CLOUD None => None
2023-02-08 08:52:24 [D] 1 devices loaded from Cloud
2023-02-08 08:52:24 [D] 1000f22239 UIID 0028 | {'version': 8, 'sledOnline': 'on', 'init': 1, 'fwVersion': '3.5.2', 'rssi': -70, 'setState': 'arm', 'rfList': [{'rfChl': 0, 'rfVal': '1E0000FA0302929148'}, {'rfChl': 1, 'rfVal': '1DF6010E02F892914C'}, {'rfChl': 2, 'rfVal': '1E0A00FA0302929144'}, {'rfChl': 3, 'rfVal': '1E1401040302929149'}, {'rfChl': 4, 'rfVal': '1DEC011802EE929142'}, {'rfChl': 5, 'rfVal': '1E1E01040302929145'}, {'rfChl': 6, 'rfVal': '1E6401040302929141'}, {'rfChl': 7, 'rfVal': '1E50010E02F8929143'}, {'rfChl': 8, 'rfVal': '1E32010E02F8DF1518'}, {'rfChl': 9, 'rfVal': '1E5A010E02EEDF151C'}, {'rfChl': 10, 'rfVal': '1E2801040302DF1511'}, {'rfChl': 11, 'rfVal': '1E8C010E02F8DF1513'}, {'rfChl': 12, 'rfVal': '1E64010E02F8DF1514'}, {'rfChl': 13, 'rfVal': '1E78010E02F8DF1519'}], 'only_device': {'ota': 'success'}, 'cmd': 'trigger', 'rfChl': 8, 'rfTrig0': '2022-05-04T18:37:05.000Z', 'rfTrig4': '2023-01-27T14:06:10.000Z', 'rfTrig1': '2022-09-07T20:07:32.000Z', 'rfTrig3': '2022-11-06T14:19:06.000Z', 'rfTrig2': '2022-12-02T18:56:51.000Z', 'rfTrig5': '2023-01-25T18:38:13.000Z', 'rfTrig7': '2023-01-23T08:07:48.000Z', 'rfTrig6': '2021-09-24T16:39:56.000Z', 'rfTrig8': '2023-02-07T21:11:34.000Z', 'rfTrig9': '2023-02-07T21:11:27.000Z', 'rfTrig10': '2022-10-11T05:52:41.000Z', 'rfTrig11': '2022-04-03T12:40:42.000Z', 'rfTrig12': '2023-02-07T23:31:06.000Z', 'rfTrig13': '2023-02-07T23:31:08.000Z'}
2023-02-08 08:52:24 [D] LOCAL mode start
2023-02-08 08:52:24 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig13': '2023-02-07T23:31:08.000Z'} | 21
2023-02-08 08:52:27 [D] Add 3 entities
2023-02-08 08:53:29 [D] 1000f22239 => Local4 | {'cmd': 'transmit', 'rfChl': 0} <= {'seq': 22, 'sequence': '1675842809000', 'error': 0}
2023-02-08 09:08:31 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:32.000Z'} | 22
2023-02-08 09:08:34 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:23:31 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:37:35 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:51:39 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 10:10:32 [D] 1000f22239 <= Local0 | {'online': False} |
2023-02-08 10:10:43 [D] 1000f22239 => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-08 10:10:43 [D] 1000f22239 !! Local4 | Device offline
and in the config-entry file, the IP in the host field is replaced by "null" (the IP is correct after a restart):
"model": "RFBridge433",
"online": true,
"localtype": "rf",
"host": null
I have to reload the integration, but 1 hour later the problem occurs again.
(I also did a firmware upgrade of the bridge yesterday, but it had no impact on the problem.)
@AlexxIT if you tell me what debug statements to put in cloud.py or elsewhere, I'll update the scripts manually to help out.
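In the meantime, verbose logging for the whole integration can usually be turned on from configuration.yaml without touching the code. This is standard Home Assistant logger configuration; the logger name below assumes the component follows the usual custom_components convention:

```yaml
# configuration.yaml
logger:
  default: warning
  logs:
    # assumption: the integration logs under the conventional name
    custom_components.sonoff: debug
```

After a restart, the integration's debug output lands in the normal Home Assistant log.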
I have the same issue. Before the latest upgrade the Sonoff integration was rock solid, but now I have to restart it all the time to reconnect to all my devices. I only run in local mode.
I also noticed the problem when the ventilation stopped turning on. A restart helps, but not for long. I tried both local and cloud modes. Devices keep dropping out!
I have it configured in local mode, and the devices always go unresponsive after 10-15 minutes until I toggle a device in the app and reload. It also seems to matter whether you recently used the eWeLink phone app.
It's hella weird
Hi @AlexxIT, do you have any news about this strange problem? Or have you had a chance to check the errors? Just to know whether the problem affects everyone and you suggest waiting before installing the latest release, or whether we can go ahead since only some installations are affected. wbr
I have the same issue after updating to 3.4.0. I rolled back to 3.3.1, and it works fine.
In my case, the RE5V1C relay started dropping out after the update, and spontaneously at that. I tried both auto and local mode. The time until it drops ranges from 10 minutes to 1.5 hours.
After reinstalling the integration, the relay disappeared entirely in every mode...
2023-02-12 10:36:17 [D] SysInfo: {'installation_type': 'Home Assistant Supervised', 'version': '2023.2.3', 'dev': False, 'hassio': True, 'virtualenv': False, 'python_version': '3.10.7', 'docker': True, 'arch': 'x86_64', 'timezone': 'Europe/Moscow', 'os_name': 'Linux', 'os_version': '5.10.0-19-amd64', 'user': 'root', 'supervisor': '2023.01.1', 'host_os': 'Debian GNU/Linux 11 (bullseye)', 'docker_version': '20.10.21', 'chassis': 'desktop', 'sonoff_version': '3.4.0 (5406fa7)'}
2023-02-12 10:36:17 [D] 1 devices loaded from Cloud
2023-02-12 10:36:17 [D] LOCAL mode start
2023-02-12 10:36:17 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:36:33 [D] CLOUD None => None
2023-02-12 10:36:33 [D] 1 devices loaded from Cloud
2023-02-12 10:36:33 [D] LOCAL mode start
2023-02-12 10:36:33 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:00 [D] CLOUD None => None
2023-02-12 10:37:00 [D] 1 devices loaded from Cloud
2023-02-12 10:37:00 [D] AUTO mode start
2023-02-12 10:37:00 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:01 [D] CLOUD None => True
2023-02-12 10:37:08 [D] CLOUD True => None
2023-02-12 10:37:09 [D] 1 devices loaded from Cloud
2023-02-12 10:37:09 [D] AUTO mode start
2023-02-12 10:37:09 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:09 [D] CLOUD None => True
2023-02-12 10:39:54 [D] CLOUD True => None
2023-02-12 10:39:54 [D] 1 devices loaded from Cloud
2023-02-12 10:39:54 [D] AUTO mode start
2023-02-12 10:39:54 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:39:55 [D] CLOUD None => True
2023-02-12 10:41:20 [D] CLOUD True => None
2023-02-12 10:41:21 [D] 1 devices loaded from Cloud
2023-02-12 10:41:21 [D] AUTO mode start
2023-02-12 10:41:21 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:41:21 [D] CLOUD None => True
2023-02-12 10:44:04 [D] CLOUD True => None
2023-02-12 10:44:04 [D] 1 devices loaded from Cloud
2023-02-12 10:44:04 [D] CLOUD mode start
2023-02-12 10:44:05 [D] CLOUD None => True
2023-02-12 10:44:15 [D] CLOUD True => None
2023-02-12 10:44:16 [D] 1 devices loaded from Cloud
2023-02-12 10:44:16 [D] CLOUD mode start
2023-02-12 10:44:16 [D] CLOUD None => True
2023-02-12 10:46:15 [D] CLOUD True => None
2023-02-12 10:46:53 [D] 1 devices loaded from Cloud
2023-02-12 10:46:53 [D] AUTO mode start
2023-02-12 10:46:53 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:46:54 [D] CLOUD None => True
2023-02-12 10:47:19 [D] CLOUD True => None
2023-02-12 10:47:20 [D] 1 devices loaded from Cloud
2023-02-12 10:47:20 [D] AUTO mode start
2023-02-12 10:47:20 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:47:21 [D] CLOUD None => True
2023-02-12 10:47:30 [D] CLOUD True => None
2023-02-12 10:47:31 [D] 1 devices loaded from Cloud
2023-02-12 10:47:31 [D] AUTO mode start
2023-02-12 10:47:31 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:47:31 [D] CLOUD None => True
2023-02-12 10:48:19 [D] CLOUD True => None
2023-02-12 10:48:19 [D] 1 devices loaded from Cloud
2023-02-12 10:48:19 [D] AUTO mode start
2023-02-12 10:48:19 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:48:20 [D] CLOUD None => True
2023-02-12 10:48:44 [D] CLOUD True => None
2023-02-12 10:48:44 [D] 1 devices loaded from Cloud
2023-02-12 10:48:44 [D] CLOUD mode start
2023-02-12 10:48:45 [D] CLOUD None => True
2023-02-12 10:50:04 [D] CLOUD True => None
2023-02-12 10:50:05 [D] 1 devices loaded from Cloud
2023-02-12 10:50:05 [D] CLOUD mode start
2023-02-12 10:50:06 [D] CLOUD None => True
2023-02-12 10:50:23 [D] CLOUD True => None
2023-02-12 10:50:23 [D] 1 devices loaded from Cloud
2023-02-12 10:50:23 [D] LOCAL mode start
2023-02-12 10:50:23 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:50:29 [D] CLOUD None => None
2023-02-12 10:50:29 [D] 1 devices loaded from Cloud
2023-02-12 10:50:29 [D] LOCAL mode start
2023-02-12 10:50:29 [D] 10012ae408 !! skip setup for encrypted device
I have the same issue after updating to 3.4.0. I rolled back to 3.3.1, and it works fine.
How do I roll back correctly so as not to break anything?
I didn't have this issue with the previous version. I mean, not in the last few months (when I started using the whole Sonoff setup, I experienced it a few times). It is unusable at the moment; the heating fails at least once every day.
I use one 4CH and one 4CH Pro, in local or auto mode.
Here are a few debug logs of such an unfortunate event:
2023-02-11 19:59:20 [D] 10015bc32f => Local4 | {'switches': [{'outlet': 0, 'switch': 'on'}]} <= {'seq': 5924, 'sequence': '1676141960001', 'error': 0}
2023-02-11 19:59:20 [D] 10015bc32f <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 5924
2023-02-11 19:59:20 [D] 10015aae9b => Local4 | {'switches': [{'outlet': 1, 'switch': 'on'}, {'outlet': 3, 'switch': 'on'}, {'outlet': 2, 'switch': 'on'}]} <= {'seq': 2606, 'sequence': '1676141960000', 'error': 0}
2023-02-11 19:59:20 [D] 10015aae9b <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 2606
2023-02-11 20:18:12 [D] 10015bc32f <= Local0 | {'online': False} |
2023-02-11 20:18:12 [D] 10015aae9b <= Local0 | {'online': False} |
2023-02-11 20:18:23 [D] 10015bc32f => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-11 20:18:23 [D] 10015bc32f !! Local4 | Device offline
2023-02-11 20:18:23 [D] 10015aae9b => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-11 20:18:23 [D] 10015aae9b !! Local4 | Device offline
2023-02-11 20:24:07 [D] CLOUD None => None
2023-02-11 20:24:07 [D] 2 devices loaded from Cache
2023-02-11 20:24:07 [D] 10015aae9b UIID 0004 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'fwVersion': '3.5.1', 'rssi': -56, 'init': 1, 'lock': 0, 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'off', 'outlet': 3}]}
2023-02-11 20:24:07 [D] 10015bc32f UIID 0004 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'fwVersion': '3.5.1', 'rssi': -57, 'init': 1, 'lock': 0, 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]}
2023-02-11 20:24:07 [D] AUTO mode start
2023-02-11 20:24:10 [D] 10015bc32f <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 5924
2023-02-11 20:24:10 [D] 10015aae9b <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 2606
2023-02-11 20:24:10 [D] Add 12 entities
Considering rolling back one minor version at least.
OMG! The issue is still with me after downgrading to 3.3.1.
My devices run the factory firmware. It was upgraded the other day to 3.6, but the issue first appeared with SonoffLAN 3.4 while the devices were still on the same (3.5.1) firmware version.
It is getting worse, as I see...
:(
Just a small question: how many of the affected users use fixed IP addresses for their Sonoff devices?
I just fixed the IPs for the 4CH-R3 devices in the router, and no unavailability issue has occurred since then. It hasn't been long, so it's way too early to say this was the solution, but I'm interested in your configuration.
I don't fix my IP addresses
I got disconnected with version 3.3.1 and fixed IP addresses too. The eWeLink app doesn't complain at all, but it doesn't look like an integration version issue to me.
manually downgrading to 3.3.0 seems to have fixed most issues for me so far (auto mode with mDNS working)
Hi,
using HACS to downgrade to 3.3.1 fixed the issue here for more than 48 hours, but now devices are unavailable again.
Cheers,
Simone
Mine have been becoming unavailable every 2 hours since the Home Assistant 2023.2.5 update.
Hi @AlexxIT, do you have any news about this strange problem? Or have you had a chance to check the errors? Just to know whether the problem affects everyone and you suggest waiting before installing the latest release, or whether we can go ahead since only some installations are affected. wbr
@AlexxIT sorry for my second question, but just to know whether it is better to wait for a fix :-) Thanks a lot for your effort on this. :-)
After I deleted the integration (deletion didn't actually remove it, just the configuration), the device data got refreshed, and now I have the home-selection step. After I selected the home, I get no going-offline errors.
Devices don't always work in auto mode and go out of sync with the cloud state (uncontrollable from HA while working in the eWeLink app...), but it's OK using LAN mode, so I think this solved the current issue for me (that was yesterday afternoon, so fingers crossed).
I removed the integration and added it again. After a few hours, all devices seem to be working fine. Even the missing option to select the home appeared. So far so good...
I had the same issue with my Zigbee thermometer. Using the homeassistant.reload_config_entry service, I am able to force an update every 10 minutes. I need to check in a couple of days whether it fixed it; usually mine goes down after 48 hours.
alias: "Update Office Thermometer"
description: Every 10 mins
trigger:
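The automation above is cut off. For reference, a minimal complete version of such a periodic reload might look like the sketch below; the entry_id value is hypothetical and has to be replaced with the id of your own SonoffLAN config entry (visible in .storage/core.config_entries):

```yaml
alias: "Update Office Thermometer"
description: "Every 10 mins"
trigger:
  - platform: time_pattern
    minutes: "/10"
action:
  - service: homeassistant.reload_config_entry
    data:
      entry_id: 0123456789abcdef  # hypothetical: your SonoffLAN entry id
mode: single
```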
I'm desperate now... This bug makes the whole system unusable. It is out of sync right now: it shows one switch on while it's off... The eWeLink status is OK. Sometimes it just doesn't perform the required operation, leaves the heater on/off, doesn't show operations in the log...
I try to reload the config on a schedule, but I have no idea what to try next.
Just download v3.3.0; that will avoid the problems for now until it gets fixed. Mine is working OK now.
Just download v3.3.0; that will avoid the problems for now until it gets fixed. Mine is working OK now.
That's what fixed it for me
Just download v3.3.0; that will avoid the problems for now until it gets fixed. Mine is working OK now.
Thanks, I'll try it. Again... I already thought I'd done it once (HACS: reinstall with a selected version), but it looks like I have to
I pressed download on version 3.3.0: no changes, no headache.
good for you. :)
it was different for me.
Reverting back to 3.3.0 seems to be positive for me too... it's been a few hours now that I haven't had to reload, and things seem to still work (my use case is around auto/cloud, not local).
I also moved to 3.3.0. Everything works.
Also got a false offline with 3.3.0. It never happened before I tried 3.4.
Three days now and all is well. I removed the integration with all its entities, reinstalled it, and that's it.
The issue hasn't happened to me for 3 days now since changing to 3.3.0, thanks 👍
Edit: I turned off my automation to restart the config entry, and the issue happens again even with 3.3.0. I would recommend adding my automation above if you still experience the issue.
Still on 3.3.0; since my last comment, no issue. Just updated HA to 2023.3.1 a few hours ago… and the problem is back. 100% correlation.
I am not sure why it would make a difference, honestly, but I updated to 3.4.0, restarted HA, downgraded back to 3.3.0, restarted HA, and it seems it just held through the night.
Hey all, since this issue doesn't appear to be being worked on, a great workaround is to create an automation that monitors a Sonoff device and restarts the service automatically. It's not a perfect solution, but hopefully it should make the devices work most of the time. https://github.com/AlexxIT/SonoffLAN/issues/1138#issuecomment-1478580171
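The linked workaround is roughly of the following shape: watch one Sonoff entity and reload its config entry once it has been unavailable for a while. The entity id below is hypothetical; homeassistant.reload_config_entry accepts a target entity and reloads the config entry that entity belongs to:

```yaml
alias: "Reload SonoffLAN when a device drops"
trigger:
  - platform: state
    entity_id: switch.sonoff_1000f22239  # hypothetical: any entity from the integration
    to: "unavailable"
    for: "00:05:00"  # avoid reloading on brief blips
action:
  - service: homeassistant.reload_config_entry
    target:
      entity_id: switch.sonoff_1000f22239
mode: single
```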
@madchap you need to collect more logs. A few seconds are not very helpful.
@bax137 you have this problem https://github.com/AlexxIT/SonoffLAN/issues/1126
@roscio1975 some problems with the cloud servers; the integration can't do anything about it. https://coolkit-technologies.github.io/eWeLink-API/#/en/APICenterV2?id=general-error-codes
@heidricha you have auto mode, but the cloud doesn't work for you for some reason. Anyway, v3.5.0 should help in this situation.
The issue is still there!! Currently on 3.5.3 (hass.io integration). I have to reload the integration from time to time. Sometimes it works for 3 days, then stops; I reload, it stops again 4 hours later, I reload, ...
If there's a way I can help, let me know. I have the same issue, noted with an SNZB-03 (motion sensor): after a while it stays in "no detection" mode forever.
The cloud can constantly drop your connection when you use one account in two places: two Hass instances, or Hass and some other software.
I think the best suggestion is what I've already read here (or somewhere else): if you already have devices set up in one account, create a second eWeLink account and share the devices with it.
Hello,
After the new release, hopes were up that it would fix my reload issues, but it didn't :-( So I am opening this issue to try to provide the right amount of detail, even though it seems I don't have much to give at this point.
Relates to https://github.com/AlexxIT/SonoffLAN/issues/1072
General info
I am running the following on a Raspberry Pi 3.
This issue arose all of a sudden some time ago; the issue linked above was the one I saw already opened. None of the devices in the impacted list below had any firmware upgrade between the time it worked and the time it didn't anymore. However, Home Assistant updates happened fairly regularly, and sadly, I didn't write down the HA versions :-/
Devices impacted by this issue, all on the latest firmware:
Subpar workaround
My current workaround, which is not perfect as it impedes some other automations, is the following automation to reload the integration every 2 minutes. Very often, the integration ceases to update within that time period:
With the following shell script
Update to 3.4
I updated this morning to 3.4 and restarted HA. I disabled my workaround, but the same symptoms appeared immediately.
The diagnostics for the integration do not indicate anything special, except what seems to be a one-time error upon HA restart after the update. I'll still put it here; it might be that my weak Raspberry Pi could not cope with the restart load and processing the response... There are no further occurrences of it, and if the integration is in "cloud" mode only, it fetches the devices from the cloud with no error (even in auto mode, see below... but I wanted to make sure it didn't get them locally by forcing the mode).
Auto mode
I enabled debug mode for the integration. A typical restart shows the following, with no error:
As I've just tested it, there appears to be no error in the HA log nor in the debug log of the integration now... :-/ It all just fails silently.
Cloud mode
Same symptoms as auto mode.
Local mode
I guess the first thing that looks weird to me is that, upon enabling this mode, it still loads from the cloud anyway, before "LOCAL mode start".
Outside of that, the devices show as "unavailable".
If there is anything more I can do to provide information, please let me know how, and I'll definitely get to it.
Cheers.