Closed — lucafwlavoro closed this issue 1 year ago
Hello, did you find a solution? I have the same issue with the Meross msg100.
Same issue here - msg100 stopped working after the 2023 update.
Hello, I'm not sure whether these issues are related to each other, but I've just tested on HA 2023.1 and meross_lan is still working the same way. Are your device keys correctly set in every device configuration? If your devices are still paired to the Meross app, you have to use the 'cloud retrieve' feature in order to recover the correct key from your Meross account. If your devices were working before with a blank device key, keep in mind that newer firmwares are closing the key-algorithm hole which meross_lan was using to bypass the need for the key, so it might be that your (Meross) devices got updated and are no longer working with the hack.
The 'cloud retrieve' procedure might fail in some cases (there are issues reporting that), sometimes depending on password complexity or length. Nevertheless, you cannot operate the device without a correct key.
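For context on why the key matters: Meross LAN messages carry a signed header, where the signature is the MD5 of the messageId, the device key, and the timestamp concatenated together. This is a minimal sketch of that signing scheme as commonly documented for these devices; the `build_header` helper and the `from` value are illustrative, not meross_lan's actual code:

```python
import hashlib
import time
import uuid

def build_header(namespace: str, method: str, key: str) -> dict:
    """Build a Meross-style message header.

    The device accepts the message only if
    sign == MD5(messageId + key + timestamp) — hence the need for the
    correct device key (or, on old firmwares, the key-reply hack).
    """
    message_id = uuid.uuid4().hex
    timestamp = int(time.time())
    sign = hashlib.md5(f"{message_id}{key}{timestamp}".encode()).hexdigest()
    return {
        "messageId": message_id,
        "namespace": namespace,
        "method": method,
        "payloadVersion": 1,
        "from": "/app/meross_lan",  # illustrative sender path
        "timestamp": timestamp,
        "sign": sign,
    }

header = build_header("Appliance.System.All", "GET", "my-device-key")
```

With a wrong or blank key, the computed `sign` no longer matches what the device expects, and it replies with a "sign error" instead of the requested state.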
Yes same problem here. I just retrieved my API key again just in case but the MTS100 radiator thermostats briefly show correct battery level and then become unavailable.
My error log (xxx = my hub's unique UUID):
2023-01-16 16:16:01.008 WARNING (MainThread) [custom_components.meross_lan] MerossDevice(xxx) error in async_http_request: Unterminated string starting at: line 1 column 4045 (char 4044)
2023-01-16 16:16:06.228 WARNING (MainThread) [custom_components.meross_lan] MerossDevice(xxx) protocol error: namespace = 'Appliance.Hub.SubDevice.Beep' payload = '{"error": {"code": 5000}}'
2023-01-16 16:16:06.995 WARNING (MainThread) [custom_components.meross_lan] MerossDevice(xxx) error in async_http_request: [Errno 104] Connection reset by peer
2023-01-16 16:16:13.519 WARNING (MainThread) [custom_components.meross_lan] MerossDevice(xxx) error in async_http_request: TimeoutError
Did Meross update firmware recently perhaps?
Hello @whiteduck22, the 'Unterminated string starting at: line 1 column 4045 (char 4044)' is very scary... it looks like the source component files are corrupted, since this error does not normally happen and there are no 4045-character lines in the source code :)
Unless the message is related to a received payload which got corrupted somehow... By the look of it, this log was collected while the device was tracing itself (since the 'Appliance.Hub.SubDevice.Beep' message is never used in meross_lan except for tracing). Can you share that trace? You should find it under the custom_components/meross_lan/traces directory.
Yes, oddly I was just doing that - here you go:
2023/01/16 - 16:22:04 TX http GET Appliance.System.All {"all": {}}
2023/01/16 - 16:22:04 RX http GETACK Appliance.System.All {"all": {"system": {"hardware": {"type": "msh300", "subType": "un", "version": "4.0.0", "chipType": "mt7686", "uuid": "################################", "macAddress": "#################"}, "firmware": {"version": "4.1.35", "compileTime": "2021/04/30 11:02:02 GMT +08:00", "wifiMac": "#################", "innerIp": "##############", "server": "####################", "port": "###", "userId": "######"}, "time": {"timestamp": 1673886124, "timezone": "Europe/London", "timeRule": [[1603587600, 0, 0], [1616893200, 3600, 1], [1635642000, 0, 0], [1648342800, 3600, 1], [1667091600, 0, 0], [1679792400, 3600, 1], [1698541200, 0, 0], [1711846800, 3600, 1], [1729990800, 0, 0], [1743296400, 3600, 1], [1761440400, 0, 0], [1774746000, 3600, 1], [1792890000, 0, 0], [1806195600, 3600, 1], [1824944400, 0, 0], [1837645200, 3600, 1], [1856394000, 0, 0], [1869094800, 3600, 1], [1887843600, 0, 0], [1901149200, 3600, 1]]}, "online": {"status": 1}}, "digest": {"hub": {"hubId": 3910387657, "mode": 0, "subdevice": [{"id": "0100B63B", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886063, "mts100v3": {"mode": 3}}, {"id": "0100B5EC", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885986, "mts100v3": {"mode": 3}}, {"id": "01005F1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886028, "mts100v3": {"mode": 3}}, {"id": "010069D7", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886003, "mts100v3": {"mode": 3}}, {"id": "01008826", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885965, "mts100v3": {"mode": 3}}, {"id": "01006E56", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885953, "mts100v3": {"mode": 3}}, {"id": "01006E1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886088, "mts100v3": {"mode": 3}}, {"id": "01005E26", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886097, 
"mts100v3": {"mode": 3}}, {"id": "01000B36", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886046, "mts100v3": {"mode": 3}}, {"id": "010003EF", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885998, "mts100v3": {"mode": 3}}, {"id": "010059EA", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886093, "mts100v3": {"mode": 4}}, {"id": "01006D6A", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886014, "mts100v3": {"mode": 3}}, {"id": "01001B86", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885967, "mts100v3": {"mode": 3}}]}}}}
2023/01/16 - 16:22:04 auto LOG DEBUG MerossDevice(xxx) back online!
2023/01/16 - 16:22:04 TX http GET Appliance.Hub.Mts100.All {"all": []}
2023/01/16 - 16:22:04 TX http GET Appliance.System.Runtime {"runtime": {}}
2023/01/16 - 16:22:04 TX http GET Appliance.System.DNDMode {"DNDMode": {}}
2023/01/16 - 16:22:04 RX http GETACK Appliance.System.Runtime {"runtime": {"signal": 83}}
2023/01/16 - 16:22:04 auto LOG INFO MerossDevice(xxx) client connection attempt(0) error in async_http_request: Unterminated string starting at: line 1 column 4045 (char 4044)
2023/01/16 - 16:22:04 TX http GET Appliance.Hub.Mts100.All {"all": []}
2023/01/16 - 16:22:05 RX http GETACK Appliance.System.DNDMode {"DNDMode": {"mode": 0}}
2023/01/16 - 16:22:06 auto LOG INFO MerossDevice(xxx) client connection attempt(1) error in async_http_request: Unterminated string starting at: line 1 column 4045 (char 4044)
2023/01/16 - 16:22:06 TX http GET Appliance.Hub.Mts100.All {"all": []}
2023/01/16 - 16:22:07 auto LOG INFO MerossDevice(xxx) client connection attempt(2) error in async_http_request: Unterminated string starting at: line 1 column 4045 (char 4044)
2023/01/16 - 16:22:07 auto LOG DEBUG MerossDevice(xxx) going offline!
2023/01/16 - 16:22:34 TX http GET Appliance.System.All {"all": {}}
2023/01/16 - 16:22:34 RX http GETACK Appliance.System.All {"all": {"system": {"hardware": {"type": "msh300", "subType": "un", "version": "4.0.0", "chipType": "mt7686", "uuid": "################################", "macAddress": "#################"}, "firmware": {"version": "4.1.35", "compileTime": "2021/04/30 11:02:02 GMT +08:00", "wifiMac": "#################", "innerIp": "##############", "server": "####################", "port": "###", "userId": "######"}, "time": {"timestamp": 1673886154, "timezone": "Europe/London", "timeRule": [[1603587600, 0, 0], [1616893200, 3600, 1], [1635642000, 0, 0], [1648342800, 3600, 1], [1667091600, 0, 0], [1679792400, 3600, 1], [1698541200, 0, 0], [1711846800, 3600, 1], [1729990800, 0, 0], [1743296400, 3600, 1], [1761440400, 0, 0], [1774746000, 3600, 1], [1792890000, 0, 0], [1806195600, 3600, 1], [1824944400, 0, 0], [1837645200, 3600, 1], [1856394000, 0, 0], [1869094800, 3600, 1], [1887843600, 0, 0], [1901149200, 3600, 1]]}, "online": {"status": 1}}, "digest": {"hub": {"hubId": 3910387657, "mode": 0, "subdevice": [{"id": "0100B63B", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886063, "mts100v3": {"mode": 3}}, {"id": "0100B5EC", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885986, "mts100v3": {"mode": 3}}, {"id": "01005F1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886028, "mts100v3": {"mode": 3}}, {"id": "010069D7", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886003, "mts100v3": {"mode": 3}}, {"id": "01008826", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886146, "mts100v3": {"mode": 3}}, {"id": "01006E56", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886134, "mts100v3": {"mode": 3}}, {"id": "01006E1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886088, "mts100v3": {"mode": 3}}, {"id": "01005E26", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886097, 
"mts100v3": {"mode": 3}}, {"id": "01000B36", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886046, "mts100v3": {"mode": 3}}, {"id": "010003EF", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885998, "mts100v3": {"mode": 3}}, {"id": "010059EA", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886093, "mts100v3": {"mode": 4}}, {"id": "01006D6A", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886014, "mts100v3": {"mode": 3}}, {"id": "01001B86", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886148, "mts100v3": {"mode": 3}}]}}}}
Missed a few lines from the start:
2023/01/16 - 16:22:04 TX http GET Appliance.System.All {"all": {}}
2023/01/16 - 16:22:04 RX http GETACK Appliance.System.All {"all": {"system": {"hardware": {"type": "msh300", "subType": "un", "version": "4.0.0", "chipType": "mt7686", "uuid": "################################", "macAddress": "#################"}, "firmware": {"version": "4.1.35", "compileTime": "2021/04/30 11:02:02 GMT +08:00", "wifiMac": "#################", "innerIp": "##############", "server": "####################", "port": "###", "userId": "######"}, "time": {"timestamp": 1673886124, "timezone": "Europe/London", "timeRule": [[1603587600, 0, 0], [1616893200, 3600, 1], [1635642000, 0, 0], [1648342800, 3600, 1], [1667091600, 0, 0], [1679792400, 3600, 1], [1698541200, 0, 0], [1711846800, 3600, 1], [1729990800, 0, 0], [1743296400, 3600, 1], [1761440400, 0, 0], [1774746000, 3600, 1], [1792890000, 0, 0], [1806195600, 3600, 1], [1824944400, 0, 0], [1837645200, 3600, 1], [1856394000, 0, 0], [1869094800, 3600, 1], [1887843600, 0, 0], [1901149200, 3600, 1]]}, "online": {"status": 1}}, "digest": {"hub": {"hubId": 3910387657, "mode": 0, "subdevice": [{"id": "0100B63B", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886063, "mts100v3": {"mode": 3}}, {"id": "0100B5EC", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885986, "mts100v3": {"mode": 3}}, {"id": "01005F1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886028, "mts100v3": {"mode": 3}}, {"id": "010069D7", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886003, "mts100v3": {"mode": 3}}, {"id": "01008826", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885965, "mts100v3": {"mode": 3}}, {"id": "01006E56", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885953, "mts100v3": {"mode": 3}}, {"id": "01006E1E", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886088, "mts100v3": {"mode": 3}}, {"id": "01005E26", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886097, 
"mts100v3": {"mode": 3}}, {"id": "01000B36", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886046, "mts100v3": {"mode": 3}}, {"id": "010003EF", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885998, "mts100v3": {"mode": 3}}, {"id": "010059EA", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886093, "mts100v3": {"mode": 4}}, {"id": "01006D6A", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673886014, "mts100v3": {"mode": 3}}, {"id": "01001B86", "status": 1, "scheduleBMode": 6, "onoff": 1, "lastActiveTime": 1673885967, "mts100v3": {"mode": 3}}]}}}}
2023/01/16 - 16:22:04 auto LOG DEBUG MerossDevice(xxx) back online!
2023/01/16 - 16:22:04 TX http GET Appliance.Hub.Mts100.All {"all": []}
Now if this were just me experiencing the problem, I could relate it to my adding two extra radiator valves over the weekend, but someone else reported this too, which seemed spooky! Thanks a lot for looking into this.
I could remove the two new devices and see if the error goes away...
I think I got it... the device is likely 'overflowing' when queried for the status of the valves, since meross_lan asks for the full state in 'Appliance.Hub.Mts100.All'. The returned payload can be huge: for every valve it reports the full state information, so it's a lot of data... likely more than the device buffers can handle, so the reply from the device is not correctly terminated and 'bum!', the error. Aside from removing your latest valve additions, I'll try to think of a mitigation for this... let me think about it :)
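This overflow theory matches the log text exactly: when a JSON reply is cut off mid-string, Python's `json` decoder raises precisely the "Unterminated string starting at" error seen above. A small reproduction sketch (the payload shape is a simplified stand-in for the real hub digest):

```python
import json

# Simulate a hub reply that gets cut off because the device's send buffer
# overflowed: a valid JSON document truncated partway through.
full = json.dumps(
    {"all": [{"id": "0100B63B", "mts100v3": {"mode": 3}} for _ in range(13)]}
)
truncated = full[:-10]  # chop the tail, landing inside the final key string

try:
    json.loads(truncated)
    error = None
except json.JSONDecodeError as exc:
    error = exc.msg  # the decoder reports where the broken string began

print(error)
```

So the column number 4045 in the log is not about meross_lan's source files at all; it is the byte offset where the device's reply stopped.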
Thanks a lot! Happy to remove them and confirm if you wish?
Update:
Deleting one - doesn't fix it
Deleting two - fixed!
So you are right - the payload is too large. I have 13 valves - reducing that down to 11 and all is good. Apologies for attaching this to the wrong issue above - the timing was too coincidental for me!
No worries... I'll try anyway to prevent this kind of issue and investigate it further. The 'bug' could also lie in the way the Python async HTTP module and the JSON decoder work (very unlikely, but...), so I'll try to simulate a big payload (with 13 valves) and see if it works when 'pumped' into the HTTP client. I'll try to reproduce the issue and maybe mitigate this scenario, since that could be useful for other users who have a lot of valves on the same hub.
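One plausible mitigation, sketched below, is to stop asking the hub for everything at once and instead query the subdevices in small batches. This assumes the hub namespaces accept an id-filtered payload like `{"all": [{"id": ...}]}` — the batch size of 4 and the generated ids are purely illustrative, not meross_lan's actual implementation:

```python
# Hypothetical sketch: split one huge Appliance.Hub.Mts100.All query into
# several smaller id-filtered requests, so no single reply overflows the
# device's send buffer.
def batched(ids, size):
    """Yield successive chunks of at most `size` ids."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

subdevice_ids = [f"0100{i:04X}" for i in range(13)]  # 13 valves, fake ids

requests = [
    {"all": [{"id": _id} for _id in chunk]}
    for chunk in batched(subdevice_ids, 4)
]
# 13 valves in batches of 4 -> 4 requests instead of one oversized reply
```

Each request then produces a reply small enough to fit in the device's buffer, at the cost of a few extra HTTP round-trips per poll.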
Thank you! If you want me to test it, let me know. BTW: I don't live in a mansion, but I have a couple of rooms, each with two radiators, and I finally finished my Meross rollout, hence the large number of valves.
Update: It seems the upgrade was not the issue. I recently had to start all over again with my router, and the IP address of the switch was incorrect. I corrected the IP address and the MSG100 is back online.
@donparlor - If you need it in the future, you should find this under integrations: Settings -> Devices and Services -> find the msh300 and click Configure. In there you will see 'Key Selection Mode' and Cloud Retrieve - enter your username and password and it should populate the key. Make sure your Meross hub's IP address is right, just above that.
...of the device are set correctly
Hi, no news. After uninstalling meross_lan, I did the 'cloud retrieve' procedure again and recovered the key, then I configured it in 'hack mode'. This worked for two days. This morning the thermovalves were no longer available (I have 6 thermovalves). When I try to connect in hack mode (HTTP) I get the following error:
2023-01-18 08:59:10.631 DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP POST method:(GET) namespace:(Appliance.System.All)
2023-01-18 08:59:14.009 DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP Exception (Cannot connect to host 192.168.1.228:80 ssl:default [Connect call failed ('192.168.1.228', 80)])
Hello @lucafwlavoro, this type of error looks like it's related to the device being completely 'missing' at its configured IP. If you are able to use it from the app but not in meross_lan/HA, then its address could have changed from 192.168.1.228 and meross_lan/HA was not able to 'detect' the change (only meross_lan pre-releases 2.6.3 and later can actually detect an IP change, should that happen).
Hello @whiteduck22, the new 'beta' should be able to talk to your 'crowded' hub even with 13 valves. Give it a try if and when you can ;)
Hello @krahabb - OK - will do that tomorrow - thank you so much!
Hi @krahabb - v3.0.0-beta installed and seems to be working right now... I'll add the other two devices to my Home Assistant dashboard and see if they are working correctly. Let me know if you need any logs but so far so good thank you!
All 13 devices appear to be happy! Thanks a lot! I will let you know if I see any issues over the weekend.
There are 14 devices including the hub of course ;-)
Here is why the payload was so high - each device has a 7 day schedule:
Message: Target temperature = 22.5
All day: false
Start time: 27 January 2023 at 17:00:00
End time: 28 January 2023 at 00:00:00
ScheduleUnitTime: 15
Schedule: {'id': '01006E1E', 'mon': [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]], 'tue': [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]], 'wed': [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]], 'thu': [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]], 'fri': [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]], 'sat': [[390, 150], [90, 150], [300, 215], [300, 215], [285, 215], [75, 215]], 'sun': [[390, 150], [90, 150], [300, 215], [300, 215], [285, 215], [75, 215]]}
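The schedule format above appears to be, per day, a list of `[duration_minutes, temperature_x10]` pairs: the durations in each day sum to 1440 (a full 24 hours), and 225 matches the 22.5 °C target temperature shown. A small decoding sketch under that assumption (the helper name is ours, not part of the payload):

```python
def decode_day(entries):
    """Turn a day's [duration_minutes, temperature_x10] pairs into
    (start, end, temperature_celsius) segments, assuming the durations
    tile the day starting at midnight."""
    segments, start = [], 0
    for minutes, temp in entries:
        end = start + minutes
        segments.append((
            f"{start // 60:02d}:{start % 60:02d}",
            f"{end // 60:02d}:{end % 60:02d}",
            temp / 10,
        ))
        start = end
    return segments

mon = [[390, 225], [165, 225], [105, 180], [360, 180], [345, 225], [75, 225]]
for seg in decode_day(mon):
    print(seg)
# first segment decodes to ('00:00', '06:30', 22.5)
```

With 6 segments per day, 7 days per valve, and 13 valves, it is easy to see how the full-state reply grew past the hub's buffer.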
@krahabb - if you have a buymeacoffee link let me know!
Thank you for the 'buymeacoffee'; I'm so happy coding that seeing things working is more than a reward! I really love this meross_lan community, made up of people like you, who are really (pro)positive and proactive in discussing issues and features, so I'm really happy with that.
Let's keep up the good work!
@krahabb - sorry to bother you, but I just found something a bit odd - I think I caused a problem with the Meross hub by adjusting the temperature of too many of the radiators too quickly in the iOS app... In HA it became Unavailable, but it is working OK again now.
Strangely, with the beta version the key-retrieval option in the configuration has vanished - is this because it is the beta I am running?
These are the config options I see:
Type: Smart Hub (msh300)
UUID: xxxxxxxx
Host: 192.168.x.x
Device host address
192.168.x.x
Device key
xxxxxxxx
Connection protocol
auto
Thanks Ian
The new version has removed the 'key mode choice' and will just prompt you to recover the cloud key if the actual key is not working
Ah that makes sense - sorry! Thank you!
Hello!! I'm here again to figure out how to fix my problem. I have updated HA core to version 2023. I removed and later reinstalled meross_lan 2.6.2. When HA is restarted, the devices are recognized, but during the meross_lan configuration phase (user set, hack mode, cloud retrieve) none of these choices works, except for 'cloud retrieve' to request the API key.
Debug log:
DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP POST method:(GET) namespace:(Appliance.System.All)
2023-01-12 11:39:43.833 DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP Response ({"header":{"messageId":"b10a6e245b534aae92ff74053df5d844","namespace":"Appliance.System.All","method":"ERROR","payloadVersion":1,"from":"/appliance/1811069262031729086034298f17cc6a/publish","timestamp":1673519982,"timestampMs":354,"sign":"843fb585f652fcb561671ad34abbb0b0"},"payload":{"error":{"code":5001,"detail":"sign error"}}})
2023-01-12 11:39:43.833 DEBUG (MainThread) [custom_components.meross_lan] Key error on 192.168.1.228 (GET:Appliance.System.All) -> retrying with key-reply hack
2023-01-12 11:39:43.860 DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP Exception (Server disconnected)
DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP POST method:(GET) namespace:(Appliance.System.All)
2023-01-12 11:36:20.602 DEBUG (MainThread) [custom_components.meross_lan] MerossHttpClient(192.168.1.228): HTTP Exception (Server disconnected)
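The `{"error":{"code":5001,"detail":"sign error"}}` reply in that log is the device rejecting the message signature, which points at a key mismatch rather than a network problem. Assuming the usual Meross scheme where sign = MD5(messageId + key + timestamp), a mismatch can be detected locally by recomputing the signature; `key_matches` and the sample header below are illustrative:

```python
import hashlib

def key_matches(header: dict, key: str) -> bool:
    """Recompute a message signature with our configured key and compare it
    to the header's sign field. Assumes sign = MD5(messageId + key + timestamp)."""
    expected = hashlib.md5(
        f"{header['messageId']}{key}{header['timestamp']}".encode()
    ).hexdigest()
    return expected == header["sign"]

# A header signed with key "right" fails the check if we only hold "wrong".
header = {
    "messageId": "b10a6e245b534aae92ff74053df5d844",
    "timestamp": 1673519982,
}
header["sign"] = hashlib.md5(
    f"{header['messageId']}right{header['timestamp']}".encode()
).hexdigest()
```

This is why 'cloud retrieve' (or re-pairing) is the fix here: only the key stored in the Meross account will make the signatures agree again.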