Lash-L opened this issue 11 months ago
> Before I went any further I wanted to check with you that images are supported by the card.
It should work, but you have to set it up using the `camera` key:

```yaml
map_source:
  camera: image.s7_roborock_downstairs
```
I suppose calibration points won't be added to the core, right?
Ah that did it - Guess I should have read the docs better.
I do plan to try to get calibration points added to core as an extra state attribute on the image entity; I think I'll be able to get that approved
> I think I'll be able to get that approved
In case of failure I think it won't be hard to inject appropriate code using a custom integration. Please include me in a potential PR regarding this feature
> Please include me in a potential PR regarding this feature
Can do. Anything else that would be helpful for me to expose?
I have just started diving into your code, but one thing that is important with Roborock vacuums is that commands are map-specific and room ids are not. So if I want to clean room id 12 on the downstairs map and I click on that room to clean it, but I have the upstairs map selected as my current map, it will attempt to clean room id 12 upstairs.
Is there a means on the card to set the map you are interacting with as the current map?
> Anything else that would be helpful for me to expose?
`rooms` section would also be useful - it is used to automatically generate rooms config.
> Is there a means on the card to set the map you are interacting with as the current map?
It is possible to add configs for multiple maps using `additional_presets` and make the card automatically choose the appropriate one using a matching condition.
More info e.g. here: https://github.com/PiotrMachowski/lovelace-xiaomi-vacuum-map-card/issues/248
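For anyone looking for a concrete starting point, here is a rough sketch of what a multi-map setup could look like. The entity names and the current-map selector are hypothetical, and the exact keys should be checked against the card's documentation and the linked issue:

```yaml
type: custom:xiaomi-vacuum-map-card
entity: vacuum.s7_roborock
map_source:
  camera: image.s7_roborock_downstairs
calibration_source:
  calibration_points: []  # downstairs calibration points go here
conditions:
  # hypothetical helper entity tracking the currently selected map
  - entity: select.s7_roborock_selected_map
    value: Downstairs
additional_presets:
  - preset_name: Upstairs
    map_source:
      camera: image.s7_roborock_upstairs
    calibration_source:
      calibration_points: []  # upstairs calibration points go here
    conditions:
      - entity: select.s7_roborock_selected_map
        value: Upstairs
```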
> Roborock core holds the map in an image entity instead of a camera entity, so I got the calibration points from the map parser `.calibration()` and set up the following:
Could you perhaps give more detail on how you are getting the map coordinates? I am using the new native Roborock integration. I have the image showing up but the calibration is off because I am not sure how to map the dock location to the image map.
@dkirby-ms he modified the code of the Roborock integration
Is there any ongoing development on this issue? I would appreciate it!
> Is there any ongoing development on this issue? I would appreciate it!
I am unfortunately a bit blocked. Not much I can do at the moment and core devs have to make decisions.
Is there a recommended workaround for generating calibration points at the moment? Docs point me towards https://github.com/PiotrMachowski/Home-Assistant-custom-components-Xiaomi-Cloud-Map-Extractor but I don't have a Xiaomi account as I've been using the Roborock app.
You can use this integration instead: https://github.com/humbertogontijo/homeassistant-roborock
I did just switch to the official integration because I was having issues with that one and was hoping this would be more stable. Is the humbertogontijo version preferred for the vacuum map card?
@jason-curtis at this moment the official integration doesn't provide data that is necessary to use the map functionality in this card
I as well very much look forward to this card supporting the official Roborock integration
@PiotrMachowski since our original plan failed, would a service call work?
I.e. you could call the vacuum's send command with something like `get_map_card_info` and get all of the initial info you need, like calibration points, room dimensions, etc.? Then we could still use the image entity to update what the map looks like?
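For illustration, on the Home Assistant side such a call could look roughly like this (a hedged sketch: `get_map_card_info` is only the command name proposed above, it does not exist in the integration):

```yaml
# Hypothetical - get_map_card_info is a proposal, not an existing command
service: vacuum.send_command
target:
  entity_id: vacuum.s7_roborock
data:
  command: get_map_card_info
```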
@Lash-L I think a service call should be ok, but I also thought about implementing a dedicated WS API method. The downside of this approach is that it probably won't be possible for users to use it manually.
It would also be nice to make it possible for the card to be notified that something has changed in the calibration (I think this happens quite often during map building). Can it be solved by generating an event when it happens?
> I got the calibration points from the map parser `.calibration()` and set up the following:

```yaml
type: custom:xiaomi-vacuum-map-card
entity: vacuum.s7_roborock
map_source:
  image: image.s7_roborock_downstairs
calibration_source:
  calibration_points:
    - vacuum:
        x: 25500
        'y': 25500
      map:
        x: 240
        'y': 184
    - vacuum:
        x: 35500
        'y': 25500
      map:
        x: 440
        'y': 184
    - vacuum:
        x: 25500
        'y': 35500
      map:
        x: 240
        'y': -16
vacuum_platform: humbertogontijo/homeassistant-roborock
```
Hi! First of all, thanks for the integrations!
@Lash-L Could you better explain how to get the calibration points so users can do it in the meantime? I just moved from the HACS integration to the Core integration, and I can't find any documentation or tutorial about getting the calibration points from the Core integration.
> @Lash-L I think a service call should be ok, but I also thought about implementing a dedicated WS API method. The downside of this approach is that it probably won't be possible for users to use it manually.
> It would also be nice to make it possible for the card to be notified that something has changed in the calibration (I think this happens quite often during map building). Can it be solved by generating an event when it happens?
I think conceptually that's okay - I don't know if I really have time for it right now, whereas a service might be easier.
I could also cache the latest map data and have the service return it, if you think that would be better?
> @Lash-L Could you better explain how to get the calibration points so users can do it in the meantime? I just moved from the HACS integration to the Core integration, and I can't find any documentation or tutorial about getting the calibration points from the Core integration.
@carlos-48 You can't. I modified things inside the actual code base in my development environment
> I don't know if I really have time for it right now, whereas a service might be easier.
Don't worry about it; at this moment I'm rewriting the Map Extractor, and then I'll have to adjust the card, so you have plenty of time.
I think a service call should be enough for my purposes.
> Is there any ongoing development on this issue? I would appreciate it!
> I am unfortunately a bit blocked. Not much I can do at the moment and core devs have to make decisions.
Hey there, I have a suggestion regarding this. When you were developing the custom integration you had more freedom to do whatever you pleased, on your own terms, right? Why not bring the custom integration up to date, making it on par with the core integration? That way you can implement all the features at any time without having to go through the core devs themselves. This way the custom integration is always the "same" as the core, but at the same time you are able to implement whatever you want at whatever time. This is a newbie talking btw, I am just thinking out loud!
Because so far I've had 3 options, in my case with a Roborock:
I wanted to use this card so badly that I was willing to forsake the Roborock app and use Xiaomi Home instead (and therefore lose the camera view and pictures of detected objects). However, the newest Roborock S8 MaxV, which has already been out for quite a while now, is not supported in the Xiaomi Home app, so I cannot use it there. Now there doesn't seem to be any option available for me to get a vacuum card working... I already tried the Roborock HACS integration instead of the built-in one for Home Assistant, but it constantly loses connection. Sad to see developers removed the camera from the official integration.
I am left without any working option now, it seems...
> Sad to see developers removed the camera from the official integration.

The official integration uses an `image` entity instead of `camera`.
Is there any progress here?
> - Use the custom integration (works great with all the features needed but since it is not updated tends to crash and/or generate a ton of errors)
Thanks @pedro639, this was the solution for me. I have been so upset about this change in the core integration, and for me it's broken. Now, after switching back to the custom integration, I'm able to use the vacuum cleaner with this vacuum map card again.
The HACS custom integration is not working: I command the robot to do something and it goes out and straight back to the dock. The official integration is unfortunately, as of now, kind of useless without a good card...
Agree with @borgqueenx, the custom Roborock integrations are crashing all the time, and then there is no good way to use this fantastic card.
Hi all, just to get things right:
- due to the "image" vs "video" thingy, we do not have any calibration points
- the calibration points can be retrieved with a roborock command
- I can add the calibration points manually in a config file

Is this all correct? So would it be possible to e.g. create some script that updates this in the config automatically for me and use this map then?
Thanks a lot, looks like an amazing extension :+1: Willing to help to get this running with roborock core.
@simllll
> - due to the "image" vs "video" thingy, we do not have any calibration points

Not really, they have not been added to the HA core because of technical reasons. More info here.

> - the calibration points can be retrieved with a roborock command

Calibration points are calculated based on map dimensions - they are not returned by any Roborock command.

> - I can add the calibration points manually in a config file

You can add them manually to the card's configuration. It is also possible to use any entity as a source of calibration (see the sketch at the end of this comment).

> So would it be possible to e.g. create some script that updates this in the config automatically for me and use this map then?
I don't think it is possible at this moment
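For reference, the two ways of supplying calibration mentioned above could look roughly like this. The inline points are the ones from the config quoted earlier in this thread; the sensor entity and its attribute are hypothetical and would have to be created separately:

```yaml
# Option 1: hard-code the calibration points in the card config
calibration_source:
  calibration_points:
    - vacuum: {x: 25500, 'y': 25500}
      map: {x: 240, 'y': 184}
    - vacuum: {x: 35500, 'y': 25500}
      map: {x: 440, 'y': 184}
    - vacuum: {x: 25500, 'y': 35500}
      map: {x: 240, 'y': -16}
```

```yaml
# Option 2: read calibration from an attribute of any entity (hypothetical sensor)
calibration_source:
  entity: sensor.s7_roborock_calibration
  attribute: calibration_points
```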
Thanks for the clarification @PiotrMachowski, I tried to catch up on the existing issues. I see that a general solution would be the best way to go, but... it will also take more time; it's not even a complete proposal yet, so we are far away from an implementation.
Referring to @Lash-L's comment (https://github.com/home-assistant/core/pull/105424#issuecomment-2111090482):

> another way needs to be found

What about doing it "old-fashioned" and just writing the additional properties to a YAML file and using this as "input" for the map?
Any kind of IO is probably a no-go for me, imo.
I have had so little time to actually work on any of my hobby projects - but I would be happy to accept a workaround fix.
What I had in mind is a new command, something like `GET_CALIBRATION`.
Ideally, this would be cached and loaded, but since this shouldn't need to be done frequently, it can just do it all at once there.
Although it might make more sense just to add a function like `update_map_data` on the API object; then in core, when the map data is parsed, it calls `.update_map_data(map_data)` and stores it there. Then when the `GET_CALIBRATION` command is sent, it gets the calibration data that is stored.
And as a clarification: `GET_CALIBRATION` cannot be a new service, but rather a new command inside the roborock Python package.
A new command should be ok for me as well - it can be cached e.g. in a trigger-based template sensor, and this new entity can be used by the card as a source of calibration.
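As a loose sketch of that idea, assuming (as suggested earlier in the thread) that the integration fires an event when the calibration changes, a trigger-based template sensor could cache the data. The event name, its payload, and the sensor are all made up, since none of this exists yet:

```yaml
# configuration.yaml - hypothetical event fired by the integration when calibration changes
template:
  - trigger:
      - platform: event
        event_type: roborock_calibration_updated
    sensor:
      - name: "S7 Roborock calibration"
        state: "{{ trigger.event.data.calibration_points | count }}"
        attributes:
          calibration_points: "{{ trigger.event.data.calibration_points }}"
```

The card could then point `calibration_source` at this sensor's `calibration_points` attribute, as in the earlier sketch.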
Anything I can do to make some progress on this?
Integration repository
https://www.home-assistant.io/integrations/roborock/
Supported features
Checklist
piotr.machowski.dev [at] gmail.com
(Retrieving entities info; please provide your GitHub username in the email)

Vacuum entity/entities
Service calls
Other info
Creating this issue here to keep track of my work and ask questions. I plan to do the incorporation myself so I have left service calls and entities blank.
Roborock core holds the map in an image entity instead of a camera entity, so I got the calibration points from the map parser `.calibration()` and set up the following:
However, the image does not show up.
Before I went any further I wanted to check with you that images are supported by the card.