Closed — nicedevil007 closed this issue 4 years ago
Try "Settings" -> "Persistent Data" -> "Reset map".
Which exact actions did you take that led to this issue?
Unfortunately I did this 2-3 weeks ago; I don't know exactly when I tried it out. Since my Node-RED dashboard always showed me the latest correct map, I didn't notice that it wasn't working properly.
But the things I can remember are: I turned the highly experimental option on and then switched back to the Home menu, where I first stored the map in the first slot. That was all.
Will try to get a new map with the Reset map option.
> Turned the highly experimental option on and then switched back to the Home Menu where I first stored the map on the first slot. That was all.
Then these map issues can't be related to the map saving feature. Storing a map is completely safe and can't damage anything - only restoring a map possibly could. So you're probably facing some bug in the roborock firmware itself.
I just installed RE6 and I keep seeing the NO_MAP_DATA drawing (very pretty) after a full cleanup. I'll try the "reset map data" option you mentioned.
Are you sure you actually have /etc/hosts and /etc/rc.local files with required lines from deployment section right now? You'll never have a map if those are missing.
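A quick way to sanity-check such a file is to grep for the expected entries. A minimal sketch - the `ot.io.mi.com` hostname is an assumption based on common valetudo deployments, so compare it against the actual deployment section you followed; a throwaway hosts-style file is used here so the commands can be tried anywhere:

```shell
# Build a sample hosts-style file (on the robot you'd check /etc/hosts itself).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n203.0.113.1 ot.io.mi.com\n' > "$hosts"

# Look for the cloud-redirect entry; the hostname is an assumption.
grep -n 'ot.io.mi.com' "$hosts" && echo "redirect entry present"
```

The same `grep` against `/etc/rc.local` works for the lines required there.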
I think I have them right. The imagebuilder.sh script I used checks for them: it first found them missing and warned me, and after I ran it again with this repo checked out in the right place it didn't complain about them. Perhaps more files are needed than the imagebuilder script knows about?
I checked. The files have some extra info, but the content from this repo is in place. The imagebuilder.sh script (from the dustcloud repo) apparently appends them.
If it helps, on the maps page, it also shows disconnected and 0% battery. But the homepage works fine.
Which firmware version do you have installed? Have you changed anything in config.json at /mnt/data/valetudo? Try stopping valetudo, renaming config.json to something else, and starting valetudo again to check whether a broken config could be the cause. Also try rebooting the device, just in case, if you haven't yet.
A disconnected state means that the robot hasn't connected to valetudo yet since it was (re)started. It usually takes anywhere from a few seconds to 5-10 minutes to connect, and you can't directly speed up the process. Check /var/log/upstart/valetudo.log for lines looking like "Robot connected". If there are no such lines, it never connected for some reason, so you get no device status and no map (though that really should never happen if the files are in the right places).
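The check above boils down to a single grep. A minimal sketch - on the device you would point it at /var/log/upstart/valetudo.log; a sample log is written to a temp file here so the commands can be tried anywhere:

```shell
# Sample log standing in for /var/log/upstart/valetudo.log on the robot.
log=$(mktemp)
printf 'Webserver running on port 80\nRobot connected\n' > "$log"

# Look for evidence that the robot ever reached the dummycloud.
if grep -q 'Robot connected' "$log"; then
    echo "robot reached the dummycloud"
else
    echo "no connection - expect no map and no status"
fi
```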
Wow many questions.
(1) Firmware version: 3.5.4 build 001910.
(2) I didn't change anything manually in the files.
This is the config.json there (again, I didn't touch it):
```json
{
  "spots": [],
  "areas": [],
  "ztimers": [],
  "mqtt": {
    "enabled": false,
    "identifier": "rockrobo",
    "topicPrefix": "valetudo",
    "autoconfPrefix": "homeassistant",
    "broker_url": "mqtt://user:pass@foobar.example",
    "provideMapData": true,
    "caPath": "",
    "qos": 0
  },
  "dummycloud": {
    "spoofedIP": "203.0.113.1",
    "bindIP": "127.0.0.1"
  },
  "httpAuth": {
    "enabled": false,
    "username": "valetudo",
    "password": "valetudo"
  },
  "allowSSHKeyUpload": true,
  "map_upload_host": "http://127.0.0.1"
}
```
(3) I think I already rebooted more than once, but I'll do it again... ok, done; it didn't work.
(4) Let me check that log file to see if I can find something. Contents:
```
Waiting for 30 sec after boot... done.
Loading configuration file: /mnt/data/valetudo/config.json
Dummycloud is spoofing 203.0.113.1:8053 on 127.0.0.1:8053
Webserver running on port 80
Got token from handshake: 5a354b524a424871456d386c41485535
Probed last id = 1001 using get_timezone (3 retries)
Got an expired token. Changing to new
Waiting for 30 sec after boot... done.
Loading configuration file: /mnt/data/valetudo/config.json
Dummycloud is spoofing 203.0.113.1:8053 on 127.0.0.1:8053
Webserver running on port 80
Probed last id = 1001 using get_timezone (3 retries)
Waiting for 30 sec after boot... done.
Loading configuration file: /mnt/data/valetudo/config.json
Dummycloud is spoofing 203.0.113.1:8053 on 127.0.0.1:8053
Webserver running on port 80
Probed last id = 1001 using get_timezone (3 retries)
Waiting for 30 sec after boot... done.
Loading configuration file: /mnt/data/valetudo/config.json
Dummycloud is spoofing 203.0.113.1:8053 on 127.0.0.1:8053
Webserver running on port 80
Probed last id = 1001 using get_status (3 retries)
```
It doesn't seem to have your line... But I can make it vacuum, and all the other pages work except the map tab.
@stevenroose, as I expected, it just doesn't want to connect to valetudo's dummycloud for some reason. Could you check the /mnt/data/rockrobo/rrlog/miio.log file for some reasonable content? It might provide some clues. If possible, please upload it to some file hosting.
@rand256 the problem is fixed for me, just wanted to let you know :) You can close this issue if @stevenroose is happy as well :)
@nicedevil007 Do you mean it was fixed when you updated? Or did it just go away?..
miio.log:
```
[20191121 19:34:31] [I] Set data dir to: /mnt/data/miio/
{"method":"_internal.helper_ready"}
Got _internal.request_dinfo
Got _internal.request_dtoken
Got _internal.req_wifi_conf_status
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
generate_token QvhULYedWfG4XD7G
[20191121 20:04:42] [E] set_server_conn_status failed, sock: 19
[20191121 20:31:47] [E] report_keepalive,1157: dev is offline, time: 1574368307
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574368308,"offline_reason":30,"offline_ip":2174639224,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191121 20:32:53] [E] report_keepalive,1157: dev is offline, time: 1574368373
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574368374,"offline_reason":30,"offline_ip":4081147000,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191121 21:02:55] [E] set_server_conn_status failed, sock: 28
[20191121 21:32:55] [E] set_server_conn_status failed, sock: 28
[20191121 22:02:55] [E] set_server_conn_status failed, sock: 28
[20191121 22:32:55] [E] set_server_conn_status failed, sock: 28
[20191121 23:02:55] [E] set_server_conn_status failed, sock: 28
[20191121 23:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 00:02:55] [E] set_server_conn_status failed, sock: 28
[20191122 00:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 01:02:55] [E] set_server_conn_status failed, sock: 28
[20191122 01:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 02:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 03:02:55] [E] set_server_conn_status failed, sock: 28
[20191122 03:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 04:02:56] [E] set_server_conn_status failed, sock: 28
[20191122 04:32:55] [E] set_server_conn_status failed, sock: 28
[20191122 05:02:55] [E] set_server_conn_status failed, sock: 28
[20191122 05:26:10] [E] report_keepalive,1157: dev is offline, time: 1574400370
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574400371,"offline_reason":30,"offline_ip":4010169527,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 05:27:36] [E] report_keepalive,1157: dev is offline, time: 1574400456
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574400458,"offline_reason":30,"offline_ip":4021375802,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 05:28:43] [E] report_keepalive,1157: dev is offline, time: 1574400523
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574400523,"offline_reason":30,"offline_ip":3976615095,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 05:29:49] [E] report_keepalive,1157: dev is offline, time: 1574400589
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574400590,"offline_reason":30,"offline_ip":4077278391,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 05:30:56] [E] report_keepalive,1157: dev is offline, time: 1574400656
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574400656,"offline_reason":30,"offline_ip":3976615095,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 06:00:58] [E] set_server_conn_status failed, sock: 32
[20191122 06:30:58] [E] set_server_conn_status failed, sock: 32
[20191122 07:00:58] [E] set_server_conn_status failed, sock: 32
[20191122 07:30:58] [E] set_server_conn_status failed, sock: 32
[20191122 08:00:58] [E] set_server_conn_status failed, sock: 32
[20191122 08:30:58] [E] set_server_conn_status failed, sock: 32
[20191122 08:55:57] [E] report_keepalive,1157: dev is offline, time: 1574412957
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574412958,"offline_reason":30,"offline_ip":4064369784,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 08:57:04] [E] report_keepalive,1157: dev is offline, time: 1574413024
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574413024,"offline_reason":30,"offline_ip":4043723959,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 09:27:06] [E] set_server_conn_status failed, sock: 32
[20191122 09:57:06] [E] set_server_conn_status failed, sock: 32
[20191122 10:27:06] [E] set_server_conn_status failed, sock: 32
[20191122 10:57:06] [E] set_server_conn_status failed, sock: 32
[20191122 11:27:06] [E] set_server_conn_status failed, sock: 32
[20191122 11:57:06] [E] set_server_conn_status failed, sock: 32
[20191122 12:27:06] [E] set_server_conn_status failed, sock: 32
[20191122 12:57:06] [E] set_server_conn_status failed, sock: 32
[20191122 13:16:05] [E] report_keepalive,1157: dev is offline, time: 1574428565
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574428567,"offline_reason":30,"offline_ip":4097924216,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 13:17:12] [E] report_keepalive,1157: dev is offline, time: 1574428632
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574428632,"offline_reason":30,"offline_ip":4043723959,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
[20191122 13:18:18] [E] report_keepalive,1157: dev is offline, time: 1574428698
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":1574428699,"offline_reason":30,"offline_ip":4043723959,"offline_port":8053}}
Unknown cmd: {"method":"_internal.record_offline","params":{"offline_time":0,"offline_reason":0,"offline_ip":0,"offline_port":0}}
```
Unfortunately it looks like the device doesn't log where exactly it tried to connect and failed. Could you also tell me which region is specified in the file /mnt/data/miio/device.country?
For now I can suggest trying the next two commands over ssh as the root user on the device:

```
iptables -t nat -A OUTPUT -p udp --dport 8053 -j DNAT --to-destination 127.0.0.1:8053
restart rrwatchdoge
```

Then wait for a minute and check whether a map appears in valetudo. If it does, you'll need to insert the `iptables ...` line into /etc/rc.local before `exit 0`, so that it is run at every device reboot.
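Making that edit to rc.local can be scripted with sed. A minimal sketch - it operates on a throwaway copy so nothing real is touched; on the device you would point it at /etc/rc.local instead (after backing it up):

```shell
# Build a sample rc.local-style file to dry-run the edit on.
tmp=$(mktemp)
printf '#!/bin/sh\n# existing startup commands here\nexit 0\n' > "$tmp"

# Insert the redirect rule just before the final "exit 0" (GNU sed syntax).
sed -i '/^exit 0/i iptables -t nat -A OUTPUT -p udp --dport 8053 -j DNAT --to-destination 127.0.0.1:8053' "$tmp"

# Show the result: the iptables line should now precede "exit 0".
cat "$tmp"
```

The `i` (insert-before) address form used here is a GNU sed convenience; a plain text editor over ssh works just as well.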
If everything fails, then the issue is deeper and you'll probably need to roll back to some pre-19xx firmware. As you may have noticed, the 1910 firmware isn't "officially" supported anyway. That rollback is a bit tricky, since in post-1900 versions local OTA updates were disabled by xiaomi, so you can't locally install firmware the way you usually did before. You then have two options: a) simply factory reset the device, so it loads an older firmware from its low-level recovery; b) use valetudo's experimental firmware update interface in Settings -> Info -> Request firmware update, which appeared in RE5.
For the latter you'll need the firmware image hosted on some reachable site; you'll probably want a simple http server running on your own PC, or just use a direct link to an image hosted at http://vacuumz.info or the like. There is a catch though: for this function to work, the vacuum needs to be connected to its own internal "cloud", which your device probably can't accomplish (most likely because of the 1910 firmware).
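If you go the self-hosting route, a throwaway HTTP server on your PC is enough. A minimal sketch, assuming python3 and curl are available on the PC; the firmware filename and port are placeholders, not real values:

```shell
# Directory holding the (placeholder) firmware image to serve.
dir=$(mktemp -d)
echo firmware-bytes > "$dir/firmware.pkg"   # stand-in for the real image

# Serve the directory over HTTP in the background.
( cd "$dir" && exec python3 -m http.server 8099 ) &
srv=$!
sleep 1

# Fetch the image back, as the robot would via the update interface.
body=$(curl -s http://127.0.0.1:8099/firmware.pkg)
echo "$body"     # should print the placeholder file contents

kill "$srv" 2>/dev/null
```

In real use you'd point valetudo's "Request firmware update" dialog at `http://<your-pc-ip>:8099/firmware.pkg`.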
Hey guys,
I tried the experimental map-saving feature, and now I don't have any map anymore.
Is there a way to completely start from scratch and have the robot draw a new map, instead of resetting and flashing/rooting again?