MrIngenieur opened 7 months ago
Any information in the log? Can you execute the command `top` in the shell of the host machine?
Not really sure how to get the log….
In the ioBroker admin, under Log (Protokolle). Is the CPU load high in `top` the whole time?
No, it pretty much fluctuates
Lost websocket connection several times
Not sure if this was with adapter active or off:
2024-04-07 19:46:19.376 - warn: mercedesme.0 (4714) get state error: Connection is closed.
2024-04-07 19:46:19.376 - warn: mercedesme.0 (4714) Could not perform strict object check of state mercedesme.0.W1NGM2BB3PA028210.history.tankLevelStatus: DB closed
2024-04-07 19:46:19.376 - warn: mercedesme.0 (4714) get state error: Connection is closed.
2024-04-07 19:46:19.376 - warn: mercedesme.0 (4714) Could not perform strict object check of state mercedesme.0.W1NGM2BB3PA028210.history.tankLevelStatus: DB closed
2024-04-07 19:46:19.377 - warn: mercedesme.0 (4714) get state error: Connection is closed.
2024-04-07 19:46:19.377 - warn: mercedesme.0 (4714) Could not perform strict object check of state mercedesme.0.W1NGM2BB3PA028210.history.tankLevelStatus: DB closed
2024-04-07 19:46:19.377 - warn: mercedesme.0 (4714) get state error: Connection is closed.
2024-04-07 19:46:19.377 - warn: mercedesme.0 (4714) Could not perform strict object check of state mercedesme.0.W1NGM2BB3PA028210.history.tankLevelStatus: DB closed
2024-04-07 19:46:19.377 - warn: mercedesme.0 (4714) get state error: Connection is closed.
2024-04-07 19:46:19.382 - error: mercedesme.0 (4714) Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch().
2024-04-07 19:46:19.382 - error: mercedesme.0 (4714) unhandled promise rejection: DB closed
2024-04-07 19:46:19.384 - error: mercedesme.0 (4714) Error: DB closed
at Redis.sendCommand (/opt/iobroker/node_modules/ioredis/built/redis/index.js:636:24)
at Redis.get (/opt/iobroker/node_modules/ioredis/built/commander.js:122:25)
at StateRedisClient.setState (/opt/iobroker/node_modules/@iobroker/db-states-redis/build/lib/states/statesInRedisClient.js:521:40)
at Mercedesme._setState (/opt/iobroker/node_modules/@iobroker/js-controller-adapter/build/lib/adapter/adapter.js:5537:76)
at runNextTicks (node:internal/process/task_queues:60:5)
at process.processImmediate (node:internal/timers:447:9)
2024-04-07 19:46:19.384 - error: mercedesme.0 (4714) DB closed
2024-04-07 19:46:19.385 - error: mercedesme.0 (4714) Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch().
2024-04-07 19:46:19.385 - error: mercedesme.0 (4714) unhandled promise rejection: DB closed
2024-04-07 19:46:19.386 - error: mercedesme.0 (4714) Error: DB closed
at close (/opt/iobroker/node_modules/ioredis/built/redis/event_handler.js:184:25)
at Socket.<anonymous> (/opt/iobroker/node_modules/ioredis/built/redis/event_handler.js:151:20)
at Object.onceWrapper (node:events:632:26)
at Socket.emit (node:events:517:28)
at TCP.<anonymous> (node:net:350:12)
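The repeated "unhandled promise rejection: DB closed" entries indicate that state writes were still in flight while the Redis connection was already torn down. A minimal sketch of guarding such writes so the rejection is caught instead of escalating — `safeSetState` and the `unloaded` flag are illustrative names, not the adapter's actual code:

```javascript
// Hypothetical guard around ioBroker state writes: catch the rejection
// from a closed DB connection instead of letting it become an
// "unhandled promise rejection". Names are illustrative.
async function safeSetState(adapter, id, value) {
  if (adapter.unloaded) {
    return; // skip writes once the adapter is shutting down
  }
  try {
    await adapter.setStateAsync(id, { val: value, ack: true });
  } catch (err) {
    // "DB closed" ends up here as a debug line instead of an error cascade
    adapter.log.debug(`setState ${id} failed: ${err.message}`);
  }
}
```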
Maybe this behavior hits me harder than normal users, as I have 2 instances running for 2 cars. If both have high CPU load, the sum takes the system down.
2024-04-07 21:57:18.303 - debug: mercedesme.1 update for xxxxxxxx: 0
2024-04-07 21:57:18.216 - debug: mercedesme.1 update for xxxxxx: 0
2024-04-07 21:57:15.263 - debug: mercedesme.1 {"message":"app twin actor has stopped"}
2024-04-07 21:57:15.263 - debug: mercedesme.1 {"message":"app twin actor is stopping"}
2024-04-07 21:57:15.262 - debug: mercedesme.1 {"message":"app twin actor ttl was reached"}
`Error: DB closed` is a sign of disk failure; you should check your SD card.
My system has the same problem, possibly caused by the mercedesme adapter.
I was not able to get a logfile yet. But I saw in "htop" that mercedesme had high CPU usage, and directly after that iobroker.js-controller went well over 100% CPU usage.
When I stop the mercedesme adapter, everything works fine.
Maybe you can help me get a logfile. And if possible in German 🙂
> `Error: DB closed` is a sign of disk failure; you should check your SD card.
I don’t have an SD card. The system is on an NVMe SSD and runs totally stable as long as the Mercedes Me adapter is switched off.
> My system has the same problem, possibly caused by the mercedesme adapter.
> I was not able to get a logfile yet. But I saw in "htop" that mercedesme had high CPU usage, and directly after that iobroker.js-controller went well over 100% CPU usage.
> When I stop the mercedesme adapter, everything works fine.
> Maybe you can help me get a logfile. And if possible in German 🙂
That describes my observations 100%
I have had the same problem since last night: mercedesme causes such high CPU usage that javascript stopped working correctly. I had to stop mercedesme.
Any suspicious logs?
Maybe there’s also a problem on the Mercedes backend… I also use EVCC, and its Mercedes integration has also been broken since today. But in my case ioBroker managed to run some hours longer than the EVCC integration.
Seems that I also have problems with the adapter. Getting the message "Websocket closed" every 5 min. CPU load also got higher.
2024-04-07 21:54:27.857 - debug: mercedesme.0 (3962631) Received State Updated
2024-04-07 21:54:27.862 - debug: mercedesme.0 (3962631) Received State Updated
2024-04-07 21:54:27.863 - debug: mercedesme.0 (3962631) {"vinsList":["W1VVLAEZ5P4203066"]}
2024-04-07 21:54:27.873 - debug: mercedesme.0 (3962631) update for W1VVLAEZ5P4203066: 0
2024-04-07 21:54:27.880 - debug: mercedesme.0 (3962631) update for W1VVLAEZ5P4203066: 1
2024-04-07 21:54:27.920 - debug: mercedesme.0 (3962631) Received State Updated
2024-04-07 21:54:27.931 - debug: mercedesme.0 (3962631) {"vinsList":["W1VVLAEZ5P4203066"]}
2024-04-07 21:54:28.609 - debug: mercedesme.0 (3962631) update for W1VVLAEZ5P4203066: 1
2024-04-07 21:54:31.888 - debug: mercedesme.0 (3962631) 1006
2024-04-07 21:54:31.897 - debug: mercedesme.0 (3962631) Websocket closed
2024-04-07 21:55:27.931 - info: mercedesme.0 (3962631) Lost WebSocket connection. Reconnect WebSocket
2024-04-07 21:55:29.934 - debug: mercedesme.0 (3962631) Connect to WebSocket
2024-04-07 21:55:30.296 - debug: mercedesme.0 (3962631) WebSocket connected
I tried to restart the mercedesme adapter. For the first 30 seconds everything was fine, but then the CPU usage went up.
Nothing to see in the logs. Only this information in htop.
When you put the instance in log level debug, are there a lot of updates happening?
I can only imagine this: the WebSocket is closed, then there is a reconnect, and there are many updates in a row for the same VIN before the WebSocket connection is closed again.
But it is all with no errors at debug level.
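If the backend really does push many updates in a row for the same VIN, rate-limiting the state writes would keep the controller load down. A sketch under that assumption — `makeThrottledHandler` and `writeStates` are invented names, not the adapter's API:

```javascript
// Hypothetical per-VIN throttle: during a burst of messages for the
// same VIN, only the first one within `intervalMs` leads to a write.
function makeThrottledHandler(writeStates, intervalMs = 5000) {
  const lastWrite = {};
  return function handleUpdate(vin, data) {
    const now = Date.now();
    if (lastWrite[vin] && now - lastWrite[vin] < intervalMs) {
      return; // drop the rest of the burst for this VIN
    }
    lastWrite[vin] = now;
    writeStates(vin, data);
  };
}
```

A trailing-edge variant (write the last dropped update after the interval) would avoid losing the newest values, at the cost of a timer per VIN.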
2024-04-07 22:33:00.096 - debug: mercedesme.0 (32355) Websocket closed
2024-04-07 22:33:56.089 - info: mercedesme.0 (32355) Lost WebSocket connection. Reconnect WebSocket
2024-04-07 22:33:58.090 - debug: mercedesme.0 (32355) Connect to WebSocket
2024-04-07 22:33:58.262 - debug: mercedesme.0 (32355) WebSocket connected
2024-04-07 22:34:02.265 - debug: mercedesme.0 (32355) {"message":"Registering User with ciamID: xxxxxxx and App-UUID:xxxxxxd65"}
2024-04-07 22:34:18.268 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:18.372 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:18.384 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:34:22.273 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:22.316 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:22.319 - debug: mercedesme.0 (32355) {"message":"app twin actor was initialized"}
2024-04-07 22:34:22.321 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:22.373 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:26.279 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:26.326 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:28.288 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:34:28.292 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:34:28.295 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:34:42.288 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:42.342 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:42.346 - debug: mercedesme.0 (32355) {"message":"app twin actor was initialized"}
2024-04-07 22:34:57.674 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:34:57.694 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:34:57.702 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:35:01.307 - debug: mercedesme.0 (32355) {"vinsList":["W1N4M4GB8PW315195"]}
2024-04-07 22:35:01.329 - debug: mercedesme.0 (32355) Received State Updated
2024-04-07 22:35:07.548 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:35:07.873 - debug: mercedesme.0 (32355) update for W1N4M4GB8PW315195: 0
2024-04-07 22:35:09.316 - debug: mercedesme.0 (32355) 1006
2024-04-07 22:35:09.316 - debug: mercedesme.0 (32355) Websocket closed
2024-04-07 22:35:44.769 - info: host.raspberrypi "system.adapter.mercedesme.0" disabled
2024-04-07 22:35:44.770 - info: host.raspberrypi stopInstance system.adapter.mercedesme.0 (force=false, process=true)
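The log above shows a fixed reconnect roughly every minute after each close code 1006. One common way to soften such a loop is exponential backoff; a minimal sketch (the function name and the 2 s / 5 min bounds are assumptions, not the adapter's actual values):

```javascript
// Hypothetical backoff schedule for WebSocket reconnects:
// 2 s, 4 s, 8 s, ... capped at 5 minutes.
function reconnectDelayMs(attempt, baseMs = 2000, maxMs = 5 * 60 * 1000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

Each failed connect would increment `attempt`; a successful connect resets it to 0.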
The GitHub version tries to minimize the reconnects; please test whether it has an influence on CPU usage.
Is the GitHub version newer? I am already using 0.1.8 from GitHub.
The version is the same; commit hash b78f0ad844ca0c0a1cc7526c0f0501443df9610c.
I can confirm the problem.
Could this be related?
> The version is the same; commit hash b78f0ad844ca0c0a1cc7526c0f0501443df9610c.
Unfortunately I cannot test any more today. The cat ("Katze") button no longer works for me on the phone. I will only be able to try it on the PC tomorrow.
Unfortunately no improvement for me:
Installing via GitHub no longer works with admin 6.17.x, only with 6.15.x. Is there a socket client on your system?
cd /opt/iobroker
npm ls @iobroker/socket-client
> Installing via GitHub no longer works with admin 6.17.x, only with 6.15.x.
> Is there a socket client on your system?
> cd /opt/iobroker
> npm ls @iobroker/socket-client
Ah, then it will not work on the PC either.
pi@raspberrypi:~ $ cd /opt/iobroker
pi@raspberrypi:/opt/iobroker $ npm ls @iobroker/socket-client
iobroker.inst@2.0.3 /opt/iobroker
└── (empty)
pi@raspberrypi:/opt/iobroker $
npm ls @iobroker/socket-client
iobroker.inst@3.0.0 /opt/iobroker
`-- (empty)
How do I find out which commit hash I am running? The Git installation already completed cleanly for me.
I have Admin 6.13.16.
I have adjusted something in the GitHub version again; please install via GitHub once more.
> I have adjusted something in the GitHub version again; please install via GitHub once more.
The newest version shown to me is 0.1.8.
$ iobroker url TA2k/ioBroker.mercedesme --host iobroker --debug
install TA2k/ioBroker.mercedesme
Installing TA2k/ioBroker.mercedesme... (System call)
upload [0] mercedesme.admin /opt/iobroker/node_modules/iobroker.mercedesme/admin/words.js words.js application/javascript
Process exited with code 0
Still the same afterwards…
I have the same problem. The system is a Pi3B with Raspbian (Bullseye). The 0.1.8 version from GitHub ran for months without problems, but yesterday I did an update via APT at the OS level and the following was upgraded:
Upgrade:
libblkid-dev:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
libsmartcols1:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
libmount-dev:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
libmount1:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
util-linux:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
fdisk:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
libfdisk1:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
nodejs:armhf (18.20.0-1 nodesource1, 18.20.1-1nodesource1),
libuuid1:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
uuid-dev:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
rfkill:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
mount:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
libblkid1:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2),
bsdutils:armhf (1:2.36.1-8+deb11u1, 1:2.36.1-8+deb11u2),
bsdextrautils:armhf (2.36.1-8+deb11u1, 2.36.1-8+deb11u2)
End-Date: 2024-04-06 22:36:24
My guess is the nodejs update, because 3 hours later the Pi crashed, and it keeps doing so as soon as I activate mercedes.me.
I have already installed the latest version from GitHub (0.1.8, 07.04.2024, 23:30), but the problem persists.
The problem persists.
Unfortunately the same here.
OK, after the installation the instance was restarted. For now I have no more ideas what the cause could be.
Is it all Pi3Bs with Raspbian (Bullseye) and node 18.20?
I have disabled writing the ioBroker states in the GitHub version; simply install it again.
Looks quite good so far, better than the versions from before midnight. However, in my case the problem reliably appeared only after several hours. I can tell you tomorrow morning whether the box has burned down or not! ;-)
> Is it all Pi3Bs with Raspbian (Bullseye) and node 18.20?
For me, a Pi5 with the newest PiOS and node 18.20.
I have made another adjustment; it seems to be caused by Mercedes delivering updates much more often.
I just reinstalled from GitHub, it is now version 0.1.9. The connection to Mercedes is running, the CPU load stays low, and the log looks completely different. I would say PERFECT.
> I have made another adjustment; it seems to be caused by Mercedes delivering updates much more often.
I also have 0.1.9, and at first glance it looks more stable. One of the two instances went yellow once; after a restart it seems to work again, so I will keep watching. Many thanks already for the nightly effort @TA2k
> I can confirm the problem.
> Could this be related?
No, those are only libraries for the client. The server has nothing to do with them.
Unfortunately I celebrated too early... This morning the adapter was still running perfectly; now the connection to Mercedes has been cut. The log keeps showing: 1006, Websocket closed, Connect to Websocket, WS error: Error: Unexpected server response: 428
Maybe a problem or change on the Mercedes side?
Please test 0.2.0.
> Please test 0.2.0.
For me 0.1.9 also went yellow after a while. I have installed 0.2.0; the processor load has been in the normal range so far (about 5 minutes uptime) and both instances are still green. -> keep watching
With version 0.2.0 the CPU load is low, but there is no communication with Mercedes.
No more "PONG" arrives either.
Here is the corresponding log:
2024-04-08 12:44:17.023 - debug: mercedesme.0 (10483) Login
2024-04-08 12:44:17.025 - debug: mercedesme.0 (10483) refreshToken
2024-04-08 12:44:17.386 - debug: mercedesme.0 (10483) {"access_token":"XXXXX","refresh_token":"XXXXX","token_type":"Bearer","expires_in":7199}
2024-04-08 12:44:17.389 - debug: mercedesme.0 (10483) setRefrehToken: XXXXX
2024-04-08 12:44:17.390 - debug: mercedesme.0 (10483) Login successful
2024-04-08 12:44:17.391 - debug: mercedesme.0 (10483) start refresh interval
2024-04-08 12:44:17.732 - debug: mercedesme.0 (10483) {"assignedVehicles":[{"authorizationType":"USER","canReceiveVAC":true,"carline":"118","dealers":{"XXX"}]},"fin":"W1K1186XXX","isOwner":true,"isTemporarilyAccessible":false,"licensePlate":"XXX","mopf":false,"normalizedProfileControlSupport":"UNSUPPORTED","profileSyncSupport":"SUPPORTED","salesRelatedInformation":{"baumuster":{"baumuster":"1186861","baumusterDescription":"CLA 250 e Shooting Brake"},"line":{"code":"P59","description":""},"paint":{"code":"787","description":"mountaingrau metallic"},"upholstery":{"code":"101","description":"Ledernachbildung ARTICO schwarz"}},"tirePressureMonitoringType":"TirePressureMonitoring","trustLevel":3,"vehicleConnectivity":"BUILTIN","vehicleSegment":"DEFAULT","windowsLiftCount":"FourLift"}],"fleets":[]}
2024-04-08 12:44:17.733 - info: mercedesme.0 (10483) Found 1 vehicles
2024-04-08 12:44:17.734 - info: mercedesme.0 (10483) Creating vehicle XXX
2024-04-08 12:44:17.764 - debug: mercedesme.0 (10483) https://bff.emea-prod.mobilesdk.mercedes-benz.com//v1/vehicle/XXX/capabilities/commands
2024-04-08 12:44:17.789 - info: mercedesme.0 (10483) Start Websocket Connection
2024-04-08 12:44:17.791 - debug: mercedesme.0 (10483) Connect to WebSocket
2024-04-08 12:44:17.972 - error: mercedesme.0 (10483) WS error:Error: Unexpected server response: 428
2024-04-08 12:44:17.973 - debug: mercedesme.0 (10483) 1006
2024-04-08 12:44:17.973 - debug: mercedesme.0 (10483) Websocket closed
2024-04-08 12:44:18.023 - debug: mercedesme.0 (10483) {"commands":[{"additionalInformation":null,"XXX "SIGPOS_TYPE"}]}]}
2024-04-08 12:44:18.060 - debug: mercedesme.0 (10483) [{"activeTimes":{"begin":0,"days":[1,2,3,4,5,6,7],"end":1439},"id":2849361,"isActive":false,"name":"Heim","shape":{"circle":{"center":{"latitude":XXX,"longitude":XXX},"radius":18}},"violationType":"LEAVE_AND_ENTER"}]
2024-04-08 12:44:18.065 - debug: mercedesme.0 (10483) {"updatedAt"XXXX"2023-06-29T12:29:00.560Z"}
2024-04-08 12:45:02.792 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:45:47.793 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:46:32.794 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:47:17.794 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:48:02.794 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:48:47.795 - debug: mercedesme.0 (10483) Ping
2024-04-08 12:49:17.792 - info: mercedesme.0 (10483) Try to reconnect
2024-04-08 12:49:17.793 - debug: mercedesme.0 (10483) Connect to WebSocket
2024-04-08 12:49:17.942 - error: mercedesme.0 (10483) WS error:Error: Unexpected server response: 428
2024-04-08 12:49:17.943 - debug: mercedesme.0 (10483) 1006
2024-04-08 12:49:17.945 - debug: mercedesme.0 (10483) Websocket closed
2024-04-08 12:50:02.794 - debug: mercedesme.0 (10483) Ping
It is down to my IP address... It seems the Mercedes server has blocked me. If I establish the connection via a different access, the adapter runs again. I noticed it because the app also stopped updating; only push notifications still arrived.
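If the 428 response really indicates rate limiting or an IP block, it would make sense to back off far longer on that status than on an ordinary disconnect, so the server is not hammered further. A sketch under that assumption (the function name and durations are invented, not the adapter's actual behavior):

```javascript
// Hypothetical cooldown choice: wait much longer after a 428/429
// (rate limit / block) than after an ordinary connection error.
function cooldownAfterError(statusCode) {
  if (statusCode === 428 || statusCode === 429) {
    return 60 * 60 * 1000; // one hour before the next attempt
  }
  return 60 * 1000; // normal retry after a minute
}
```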
Thanks for the hint; I have added a note about it to 0.2.1.
My current suspicion is that Mercedes has changed something again, since 0.1.8 would otherwise also run correctly.
I also had problems with high load over the last two days. Strangely, in top it was always the Javascript adapter that was the driver. However, I also always had the Mercedes adapter's disconnect message. Let's see whether it is better with the new version. As a result, I set up a new VM today and went through all my scripts. 🙈
> My current suspicion is that Mercedes has changed something again, since 0.1.8 would otherwise also run correctly.
With EVCC the Mercedes integration is still not working at all. ioBroker with 0.2.2 is currently running stably for me.
> I also had problems with high load over the last two days. Strangely, in top it was always the Javascript adapter that was the driver. However, I also always had the Mercedes adapter's disconnect message. Let's see whether it is better with the new version. As a result, I set up a new VM today and went through all my scripts. 🙈
I had the same effect. At times javascript.0 had more than 150,000 input events, which led to javascript.0 and the js-controller running at 100% CPU load. I only noticed the Mercedes Me adapter as the cause later.
> My current suspicion is that Mercedes has changed something again, since 0.1.8 would otherwise also run correctly.
> With EVCC the Mercedes integration is still not working at all. ioBroker with 0.2.2 is currently running stably for me.
Unfortunately, today at 11 o'clock there was another outage where the CPU load went through the roof again and the values were no longer updated. Since the restart, everything has been normal again for now.
Describe the bug
This morning the adapter went offline, so I needed to get new tokens. After the token update it worked for a while, approximately until 7:15 pm, but now the CPU load goes to 100% when the MercedesMe adapter is running.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No unusual CPU Loads from the adapter
Screenshots & Logfiles
If applicable, add screenshots and logfiles to help explain your problem.
Versions:
Additional context
As long as the adapter is active, the CPU load is very high. When the adapter is switched off, everything goes back to normal.