rand256 / valetudo

Valetudo RE - experimental vacuum software, cloud free
Apache License 2.0

Support for Gen3 Models #333

Open simplesurfer opened 3 years ago

simplesurfer commented 3 years ago

Since Gen2 models are no longer available, Gen3 support in Valetudo RE is needed.

rand256 commented 3 years ago

I don't own any Gen3 device and have no reason nor money to buy one. But if you happen to have a rooted Gen3 and wish to have RE on it, drop me a mail at rand256 at yandex.com.

rand256 commented 3 years ago

Thanks to @simplesurfer now there's a test RE build that supports T6/S6 and most likely some other Gen3 devices.

Currently it supports all Gen2 features. I've also added some code for no-mop-zone support: such zones can be set up along with common forbidden zones and virtual walls in the Forbidden Markers section on the Zones page, but that still needs testing since I don't have a compatible device to do it myself.

How to install this version (a shell sketch follows the list):

  1. Use the dustbuilder to create an image with preinstalled vanilla valetudo and flash it into your device according to their instructions;
  2. SSH into the device and stop vanilla valetudo using /etc/init/S11valetudo stop;
  3. Download the test RE build into the device and unpack it;
  4. Overwrite the vanilla valetudo at /usr/local/bin/valetudo with unpacked RE binary (make a copy if you want to have a backup);
  5. Restart the service using /etc/init/S11valetudo start.
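
For reference, steps 2-5 correspond roughly to this shell session (a sketch only: the download URL and archive name are placeholders, not the real locations):

# 2. stop the vanilla valetudo service
/etc/init/S11valetudo stop

# 3. download the test RE build and unpack it (URL/filename are placeholders)
cd /tmp
wget http://example.com/valetudo-re-test.tar.gz
tar xzf valetudo-re-test.tar.gz

# 4. back up the vanilla binary, then overwrite it with the RE one
cp /usr/local/bin/valetudo /usr/local/bin/valetudo.vanilla
cp /tmp/valetudo /usr/local/bin/valetudo
chmod +x /usr/local/bin/valetudo

# 5. restart the service
/etc/init/S11valetudo start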

Please leave feedback regarding how this worked for you.

daduke commented 3 years ago

thanks for the build, @simplesurfer! I installed it on my freshly rooted S6 and so far it works great. While the 'original' valetudo doesn't show any maps, this build does.

rand256 commented 3 years ago

@daduke

I installed it on my freshly rooted S6 and so far it works great.

Thanks for testing. Could you also check the no-mop-zones functionality please? Create such a zone on the Forbidden Markers configuration page (using the broom-in-a-box icon), switch power to mop mode (attaching the water box is not required) and run a zoned cleaning around the no-mop-zone. The device should avoid entering the no-mop-zone in this case. But if you switch power to any normal mode and run zoned cleaning again, it should enter the no-mop-zone as if it weren't there at all.

daduke commented 3 years ago

will do, but first I'll have to get my crib mapped, which is a pain (old building with high thresholds the S6 can't mount..). Also, the S6 rebooted around 04:00 last night and since then, the map just shows "Status: Connecting.." again - any hint? thanks!

daduke commented 3 years ago

the only way I could get it back was to rejoin wifi - not sure I'd like to do that every day :yawning_face:

rand256 commented 3 years ago

Also, the S6 rebooted around 04:00 last night and since then, the map just shows "Status: Connecting.." again - any hint?

Nightly reboots are handled by the firmware blackbox alone; we can't do much about them.

But "Status: Connecting.." is interesting for sure, as it means that the firmware didn't connect to valetudo (and yes, it initiates this connection itself). Since it's been much time after the reboot, we can only assume that it managed to otherwise connect to xiaomi cloud (or got its IP and keeps trying). I've downloaded pre-built image from dustcloud and didn't see any traces of special dnsmasq instance we used on latest S5 firmware to route all cloud-related dns requests to our local IP. Dunno how it's supposed to work in dustbuilder.

You may want to see how this is solved in @zvldz's gen1/gen2 image builder: download and unpack this archive to the device, and add the following two iptables rules from here to /etc/rc.local before the 'exit 0' line:

iptables -t nat -A OUTPUT -p udp -m owner ! --uid-owner nobody --dport 53 -j DNAT --to 127.0.0.1:55553
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner nobody --dport 53 -j DNAT --to 127.0.0.1:55553

This will make sure that all xiaomi DNS requests lead to valetudo's cloud emulation, so the device won't get stuck in the 'Connecting...' state for too long after reboots.
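
Pieced together, the scheme looks roughly like this (a sketch: the DNAT rules and port 55553 come from this thread, while the dnsmasq config path, the io.mi.com pattern and the answer IP are illustrative assumptions):

# in /etc/rc.local, before the 'exit 0' line: redirect all DNS lookups not
# made by the 'nobody' user to the local dnsmasq instance on port 55553
# (presumably dnsmasq itself runs as 'nobody', so its upstream queries escape)
iptables -t nat -A OUTPUT -p udp -m owner ! --uid-owner nobody --dport 53 -j DNAT --to 127.0.0.1:55553
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner nobody --dport 53 -j DNAT --to 127.0.0.1:55553

# that dnsmasq instance answers xiaomi cloud hostnames with a local IP and
# forwards everything else upstream, conceptually:
cat > /etc/dnsmasq-cloud.conf <<'EOF'
port=55553
no-resolv
address=/io.mi.com/203.0.113.1
server=8.8.8.8
EOF
dnsmasq --conf-file=/etc/dnsmasq-cloud.conf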

daduke commented 3 years ago

thanks! I installed dnsmasq + iptables rules, but that didn't help I'm afraid. Does anyone know why rejoining wifi fixes it? maybe we could just do that in rc.local?

pidator commented 3 years ago

There was a hint by @dgiese in the dustcloud telegram channel, have you done this on your robot/are these values set?

#!/bin/bash

echo ssid=\"MySSID\" > /mnt/data/miio/wifi.conf
echo psk=\"MyPassword\" >> /mnt/data/miio/wifi.conf
echo key_mgmt=\"WPA\" >> /mnt/data/miio/wifi.conf
echo uid=0 >> /mnt/data/miio/wifi.conf
echo region=us >> /mnt/data/miio/wifi.conf
echo cfg_by=miot >> /mnt/data/miio/wifi.conf
echo 0 > /mnt/data/miio/device.uid
echo "us" > /mnt/data/miio/device.country
daduke commented 3 years ago

I saw that and did it 2 days ago, but somehow it got deleted again... It works! Thanks a lot! Maps survive a reboot now. Awesome.
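
Since the file apparently got wiped once already, one defensive option (my assumption, not something suggested in the thread) is to recreate it from /etc/rc.local at every boot:

# re-create the miio wifi config at each boot, in case the firmware rewrites it
{
  echo 'ssid="MySSID"'
  echo 'psk="MyPassword"'
  echo 'key_mgmt="WPA"'
  echo 'uid=0'
  echo 'region=us'
  echo 'cfg_by=miot'
} > /mnt/data/miio/wifi.conf
echo 0 > /mnt/data/miio/device.uid
echo "us" > /mnt/data/miio/device.country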

daduke commented 3 years ago

things are looking good so far. I'll be AFK for 10 days but will report back afterwards.

daduke commented 3 years ago

Could you also check the no-mop-zones functionality please?

well.. I did as you suggested, but I'm afraid the S6 doesn't care much about my no-mop-zone - it drove right through. Power was in mop mode (but no water attached). I for one don't need this functionality, but it might require some more TLC...

thanks, -d

rand256 commented 3 years ago

but I'm afraid the S6 doesn't care much about my no-mop-zone - it drove right through

Interesting. But was the no-mop-zone at least correctly displayed on the map tab as a light blue polygon?

daduke commented 3 years ago

but I'm afraid the S6 doesn't care much about my no-mop-zone - it drove right through

Interesting. But was the no-mop-zone at least correctly displayed on the map tab as a light blue polygon?

yes it was. It also wasn't completely straightforward to delete. I had to jump between tabs several times until it was gone.

rand256 commented 3 years ago

Weird. Well, from the technical side it is processed in RE almost like common forbidden zones, just with a different identifier bit set. And if you saw the no-mop-zone on the map, it should mean we've set it correctly - by comparison, the S5 doesn't accept such zones at all. So it's up to the firmware what to do with it next.

daduke commented 3 years ago

I'll try again for reals with water tank on

herfson commented 3 years ago

Hi, thanks for your efforts.

Some weeks ago I managed to get RE running following these instructions, but strangely, after a reboot the original valetudo was running again.

I've tried following your instructions, but stopping valetudo did not work as expected:

[root@rockrobo init]# /etc/init/S11valetudo stop
killall: valetudo: no process killed

Any hints? Additionally, the image seems to be no longer there, I get a 404.

rand256 commented 3 years ago

stopping valetudo did not work as expected:
[root@rockrobo init]# /etc/init/S11valetudo stop
killall: valetudo: no process killed

This means no valetudo process is currently running - nothing to stop.

Additionally, the image seems to be no longer there, I get a 404.

Older test builds are removed when no longer needed. Everything they introduced is now included in the current release build.

herfson commented 3 years ago

stopping valetudo did not work as expected:
[root@rockrobo init]# /etc/init/S11valetudo stop
killall: valetudo: no process killed

This means no valetudo process is currently running - nothing to stop.

But it is. I have valetudo open in my browser and can control the robot. Just now I checked the same command again, with the same result, whilst controlling the robot using the valetudo web interface. Update: Despite the "no process killed" message, valetudo has been stopped. Odd.

Additionally, the image seems to be no longer there, I get a 404.

Older test builds are removed when no longer needed. Everything they introduced is now included in the current release build.

Understood. Will try your instructions from above with the current release, then. Thanks.

herfson commented 3 years ago

Installation worked fine. I had high hopes this would finally resolve my persistent No Map Data issue. No such luck, unfortunately. I have edited rc.local and the hosts file as outlined. Trying to re-run rc.local after editing yields:

[root@rockrobo valetudo]# /etc/rc.local
ip6tables v1.4.21: Couldn't load target `REJECT': No such file or directory

Any hint? Please also indicate if I'm spamming the wrong spot here. Don't mean to annoy anyone.

mgre3 commented 3 years ago

@herfson here are the steps I took to get a fully functioning valetudo RE on my S6. Hopefully I did it correctly...

I did disable ipv6 on the robot:

Open the /etc/sysctl.conf file:

vi /etc/sysctl.conf

Add the following lines at the end of the sysctl.conf file:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Run:

sysctl -p

Then comment out the lines in rc.local about ip6tables. After that, the problem you described is gone, but the map is not visible and there is an error in the valetudo upstart log about a connection reject after the first timesync packet is received.
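
One way to do that rc.local edit in place (a sketch; it assumes the offending lines start with 'ip6tables'):

# comment out every ip6tables line in rc.local
sed -i 's/^ip6tables/#ip6tables/' /etc/rc.local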

Digging further, I ran into this line:

iptables -A OUTPUT -d 203.0.113.1/32 -j REJECT

With this line enabled, the robot could not set up a connection to localhost, because the startup script /etc/init/rrwatchdoge.conf contains the line

ip addr add 203.0.113.1 dev lo

When I disabled the REJECT line in rc.local and applied the other suggestions from this issue, the S6 robot functions well (with no-mop zones!) without connecting to the outside world - I checked this on the router. But it is kind of scary to remove a rule whose purpose is to prevent the robot from connecting to the cloud; I don't know whether this is the right solution.
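
For reference, the workaround described above boils down to something like this (a sketch; the sed pattern assumes the rule appears verbatim in rc.local):

# confirm the spoofed cloud IP is indeed bound to loopback by rrwatchdoge
ip addr show dev lo | grep 203.0.113.1

# comment out the REJECT rule in rc.local
sed -i 's|^iptables -A OUTPUT -d 203.0.113.1/32 -j REJECT|#&|' /etc/rc.local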

rand256 commented 3 years ago

@mgre3, the iptables -A OUTPUT -d 203.0.113.1/32 -j REJECT rule existed in rc.local to prevent possible connections to the outside network except for ports 80 and 8053, which were DNATed to localhost, and that worked fine with the vacuumz image builder on gen1-gen2 devices. If the dustbuilder for gen3 devices now produces images that for some reason map that IP locally, then, well, it should mean you no longer need to REJECT connections to it.

Meanwhile, would you mind checking this test build? As I've heard, by default the local time on the S6 is 1970, it isn't synced with anything, and the default scheduled cleaning isn't working there at all. This build adds some workarounds to the scheduled cleaning, but you'll need to allow connections to pool.ntp.org or specify some working NTP server address in valetudo's configuration file (for the ntp section to appear in config.json, just run the updated build once).
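
In practice that could look like this (a sketch under assumptions: the config living at /mnt/data/valetudo/config.json is my guess based on RE's data directory, and the keys inside the ntp section are whatever the updated build writes on first run):

# run the updated build once so it writes the ntp section into config.json
/etc/init/S11valetudo stop
/usr/local/bin/valetudo & sleep 10; killall valetudo

# inspect the generated section, put a reachable NTP server address there, restart
grep -A 5 '"ntp"' /mnt/data/valetudo/config.json
vi /mnt/data/valetudo/config.json
/etc/init/S11valetudo start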

pidator commented 3 years ago

If the dustbuilder for gen3 devices now produces images that for some reason map that IP locally

Afaik, to bypass the HTTP DNS requests of newer miio client versions (hardcoded IPs - the DNS catcher doesn't work for these), vanilla adds this lo approach and sets up the spoofed IP on the lo interface; see e.g. this commit.

mgre3 commented 3 years ago

@rand256 Forgot to mention: I am using a dustbuilder image (ver 2516, 09/2020, stripped-Ubuntu) as the basis.

My datetime in ssh is already okay, so I tried to run a scheduled zone cleaning, and it ran right at the scheduled time. Do you want me to test your new test build anyway?

rand256 commented 3 years ago

@pidator, thanks, I'll take a look.

@mgre3, the zoned schedule depends only on having a correct system datetime (and it's a valetudo-only feature), but I was told that full/room schedule support is completely missing from the base firmware on gen3. Could you please check that on a release build? Regarding datetime, have you specifically enabled NTP system-wide, or was it already enabled by default there?

mgre3 commented 3 years ago

@rand256 Indeed, today the zone schedule was gone (after a reboot), and I can't plan a scheduled full/room run.

My datetime is working correctly: even when I set another date, it is correct again after a reboot. Will look into that as well. Some people are complaining about NTP not working, and I've seen that error too, but right now it is working.

Today I tested your test build. I could set up a scheduled run. It starts but directly stops again with no rooms set. Then I tried setting a room, and it did clean the room at the scheduled time.

2021-02-05T18:01:06.457Z ntpd called: ntpd: setting time to 2021-02-05 18:01:06.425964 (offset -28.509695s)

--

To respond to the previous messages about the IP 203.0.113.1/32 being set on localhost:

[screenshot not preserved in this export]

rand256 commented 3 years ago

but directly stops again with no rooms set

I'm sorry, that was an oversight. In this test build it should properly start a whole-house cleaning when there's a schedule with no rooms specified. Thanks for testing!

mgre3 commented 3 years ago

@rand256 Whole house cleaning works now.

I had another problem with the no-mop zones: they were simply forgotten, I don't know why. I could not reproduce it yet.

And at boot my datetime is first set to Jan 5 and then to the correct datetime, but there is no logging about why; in the boot log I can see the datetime changing, but no reason is given.

haivala commented 3 years ago

Does this support S6 Max V?

pidator commented 3 years ago

Does this support S6 Max V?

https://github.com/rand256/valetudo/blob/846d4aa0cb4509e5a8fa0cc6c366fce1b52fcad7/lib/miio/Vacuum.js#L1064

Give it a try... can't say if everything will be working. But the code for initialization is there.

combrs commented 3 years ago

If this issue is about Gen3 support, can you edit the "installation" readme? It starts with the words "The only supported by Valetudo RE devices are Xiaomi Mi Robot Vacuum v1 (Gen.1) and Roborock S5 series (S50,S51,S55 - Gen.2). All newer devices like S5-Max, S6, 1S etc are not supported." Is that still true?

thomas725 commented 3 years ago

I just stumbled upon Valetudo RE; I'd been running the original valetudo on my Xiaomi S5 Max for the last half year or so. I tried switching to Valetudo RE by replacing the valetudo binary at /mnt/data/valetudo, but it won't start, giving this error:

# ./valetudo
pkg/prelude/bootstrap.js:1430
      throw error;
      ^

Error: ENOTDIR: not a directory, mkdir '/mnt/data/valetudo/uploads'
    at Object.mkdirSync (fs.js:921:3)
    at Object.mkdirSync (pkg/prelude/bootstrap.js:1271:33)
    at Function.sync (/snapshot/valetudo/node_modules/mkdirp/index.js:72:13)
    at new DiskStorage (/snapshot/valetudo/node_modules/multer/storage/disk.js:21:12)
    at module.exports (/snapshot/valetudo/node_modules/multer/storage/disk.js:65:10)
    at new Multer (/snapshot/valetudo/node_modules/multer/index.js:15:20)
    at multer (/snapshot/valetudo/node_modules/multer/index.js:95:12)
    at Object.<anonymous> (/snapshot/valetudo/lib/webserver/WebServer.js:27:16)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Module._compile (pkg/prelude/bootstrap.js:1510:32) {
  errno: -20,
  syscall: 'mkdir',
  code: 'ENOTDIR',
  path: '/mnt/data/valetudo/uploads'
}
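
For what it's worth, ENOTDIR means a component of the path exists but is not a directory: the mkdir of /mnt/data/valetudo/uploads fails because /mnt/data/valetudo is apparently the (old) binary itself, a regular file. A sketch of how one might check and work around that (the .bin name is just an example):

# if this shows a regular file, the binary is occupying the directory path
ls -l /mnt/data/valetudo

# move the binary aside so RE can create its uploads directory under that path
mv /mnt/data/valetudo /mnt/data/valetudo.bin
mkdir -p /mnt/data/valetudo
/mnt/data/valetudo.bin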