rand256 / valetudo

Valetudo RE - experimental vacuum software, cloud free
Apache License 2.0
664 stars · 73 forks

Map calibration for valetudo-mapper spot/zone selection #481

Closed maximweb closed 1 year ago

maximweb commented 2 years ago

I have recently flashed my vacuum, installed the valetudo-mapper and now try to integrate it into Home Assistant.

For this I found the xiaomi-vacuum-map-card to be a very appealing frontend. This plugin allows for manual spot and zone selection, sending them to the vacuum. Hence my other issue regarding goto/zone_clean based on coordinates rather than ID (#480).

**Challenge** In order to get the coordinate conversion between the displayed map and actual robot positions, the plugin requires something like:

calibration_source:
  calibration_points:
    - vacuum:
        x: 23800
        'y': 22600
      map:
        x: 21
        'y': 21
    - vacuum:
        x: 30250
        'y': 29850
      map:
        x: 537
        'y': 601
    - vacuum:
        x: 36700
        'y': 22600
      map:
        x: 1053
        'y': 21

It appears others have struggled with this as well (#210, #223). So far I have not found a neat solution.

**Idea** It would be awesome if the vacuum could auto-generate such a mapping and serve it via MQTT.

**My approach** In order to understand what's going on, I reverse-engineered valetudo-mapper's map generation in Python (see my gist). It includes manual and auto cropping and display of forbidden zones. I basically fetched map_data_parsed and accounted for scale, manual cropping, and auto cropping to figure out what valetudo-mapper is doing.

Based on only these data I was able to auto-generate calibration points and coordinate conversion.
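To illustrate the idea, here is a minimal sketch (not the actual implementation) of how three such vacuum/map point pairs fully determine an affine transform that a frontend like the card can apply; the sample coordinates are the ones from the config above:

```python
import numpy as np

# Three calibration pairs: vacuum coordinates in mm, map coordinates in pixels.
vacuum_pts = np.array([[23800, 22600], [30250, 29850], [36700, 22600]], dtype=float)
map_pts = np.array([[21, 21], [537, 601], [1053, 21]], dtype=float)

# Solve for a 3x2 matrix A such that [x, y, 1] @ A = [px, py].
ones = np.ones((3, 1))
A, *_ = np.linalg.lstsq(np.hstack([vacuum_pts, ones]), map_pts, rcond=None)

def vacuum_to_map(x, y):
    """Map a vacuum coordinate (mm) to a map pixel via the fitted affine transform."""
    px, py = np.array([x, y, 1.0]) @ A
    return px, py
```

With three non-collinear calibration points the system is exactly determined, so the least-squares solve reproduces each calibration pair exactly.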

Here are some snippets:

    charger_position_x = int(json_map_data['charger'][0])  # charger x in vacuum coords (mm)
    charger_position_y = int(json_map_data['charger'][1])  # charger y in vacuum coords (mm)

    # top-left corner of the map image, in (unscaled) pixels
    img_pixel_x = int(json_map_data['image']['position']['left'])
    img_pixel_y = int(json_map_data['image']['position']['top'])

    dimension_mm = 50  # one map pixel corresponds to 50 mm
    offset_x = charger_position_x / dimension_mm - img_pixel_x
    offset_y = charger_position_y / dimension_mm - img_pixel_y

and

    def convert_position_to_pixel_unscaled(np_points_position):
        # vacuum coordinates (mm) -> unscaled pixel coordinates
        np_pixel = np_points_position.copy()
        np_pixel[:, 0] = (np_points_position[:, 0] - charger_position_x) / dimension_mm + offset_x
        np_pixel[:, 1] = (np_points_position[:, 1] - charger_position_y) / dimension_mm + offset_y
        return np_pixel

    def convert_pixel_unscaled_to_position(np_points_pixel_unscaled):
        # unscaled pixel coordinates -> vacuum coordinates (mm)
        np_positions = np_points_pixel_unscaled.copy()
        np_positions[:, 0] = (np_positions[:, 0] - offset_x) * dimension_mm + charger_position_x
        np_positions[:, 1] = (np_positions[:, 1] - offset_y) * dimension_mm + charger_position_y
        return np_positions
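The two conversions are exact inverses of each other, which can be verified with a round-trip check. The block below restates them in self-contained, shortened form; the charger position and offsets are hypothetical sample values, not read from a real map:

```python
import numpy as np

# Hypothetical values standing in for the fields read from map_data_parsed.
charger_position_x, charger_position_y = 25600, 25600
dimension_mm = 50                  # one unscaled map pixel = 50 mm
offset_x, offset_y = 512, 512      # charger position in unscaled pixels

def position_to_pixel(points_mm):
    # vacuum mm -> unscaled pixel, relative to the charger
    return (points_mm - [charger_position_x, charger_position_y]) / dimension_mm + [offset_x, offset_y]

def pixel_to_position(points_px):
    # unscaled pixel -> vacuum mm; exact inverse of position_to_pixel
    return (points_px - [offset_x, offset_y]) * dimension_mm + [charger_position_x, charger_position_y]

points_mm = np.array([[23800.0, 22600.0], [36700.0, 29850.0]])
assert np.allclose(pixel_to_position(position_to_pixel(points_mm)), points_mm)
```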

And then I had to consider the scale and offset due to cropping as well.
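That crop-and-scale step can be sketched like this; the crop origin and scale factor here are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical crop/scale parameters, in the spirit of valetudo-mapper's output.
crop_left, crop_top = 180, 140   # autocrop origin, in unscaled pixels
scale = 4                        # the rendered image is scaled up by this factor

def pixel_unscaled_to_map(px, py):
    # Shift by the crop origin, then scale to the rendered image.
    return (px - crop_left) * scale, (py - crop_top) * scale
```

For example, with these parameters an unscaled pixel at (185, 145) lands at (20, 20) in the rendered map image.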

**Automation** In my Python script I automatically select three points (orange circles) forming a large triangle within the cropped image, and was able to auto-generate the desired calibration points. The outline of the image (axes) corresponds to the dimensions served by map_data_parsed/image/pixels, while the dashed area is the autocrop region and is identical to what I get displayed by valetudo-mapper.

[screenshot of the auto-generated calibration points omitted]

I did not consider rotation, or a different autocrop when cleaning zones are at the edge of the image. The concept, however, appears to work.

**Feature idea** It would be awesome if something similar could be integrated into the robot, so that a set of calibration points is auto-generated and served via MQTT. I am happy to assist, but my knowledge of JS and of the build/deployment process for testing is very limited.

Do you think this is a feature worth integrating?

Max

PiotrMachowski commented 2 years ago

Just to clarify: calibration points can be outside the map, so handling autocrop is probably unnecessary.

colin715 commented 2 years ago

Any chance this will be merged? I'm struggling to get the calibration points for the vacuum card.

PiotrMachowski commented 2 years ago

It seems to be merged. Can somebody describe how it works so that I can add it to the card's documentation?

maximweb commented 2 years ago

> It seems to be merged. Can somebody describe how it works so that I can add it to the card's documentation?

Unfortunately, I've been very busy lately. Since I started all this, I still feel obliged to test it as soon as I find the time.

I have not tested or even looked at the exact implementation the author used (8db759b).

My version did the following: it broadcasts a set of three points in vacuum and map coordinates via MQTT on the topic valetudo/rockrobo/map_calibration_points. (The only difference I can find at first glance: @rand256 uses map_calibration as the MQTT topic name, omitting the _points.)

[
  {
    "vacuum": {
      "x": 23550,
      "y": 22450
    },
    "map": {
      "x": 20,
      "y": 20
    }
  },
  {
    "vacuum": {
      "x": 30125,
      "y": 29850
    },
    "map": {
      "x": 546,
      "y": 612
    }
  },
  {
    "vacuum": {
      "x": 36700,
      "y": 22450
    },
    "map": {
      "x": 1072,
      "y": 20
    }
  }
]

In order to integrate it into the card, I added a sensor to Home Assistant's configuration.yaml:

sensor:
    - platform: mqtt
      state_topic: valetudo/rockrobo/map_calibration_points
      name: rockrobo_calibration
      scan_interval: 1

And finally in the card:

calibration_source:
  entity: sensor.rockrobo_calibration

I hope this helps. As soon as I find the time to update to the latest version, I'll let you know.

PiotrMachowski commented 2 years ago

@maximweb thank you!

maximweb commented 1 year ago

I finally found the time to pull the latest docker image of valetudo-mapper.

The auto calibration works!