sdwilsh / hass-truenas

TrueNAS integration for Home Assistant
MIT License
38 stars 11 forks

Pools as entities #19

Open agospher4eg opened 3 years ago

agospher4eg commented 3 years ago

Hi! You made a great integration! I read aiotruenas-client and saw that you can list pools. Can you add pools to the integration?

sdwilsh commented 3 years ago

I don't have a lot of time available for this right now, but I would happily review a PR that added it :D

sdwilsh commented 3 years ago

@agospher4eg, what were you looking to have exposed for Pools in the integration?

agospher4eg commented 3 years ago

I want pools with total space and free space

sdwilsh commented 3 years ago

Interestingly enough, while the TrueNAS UI shows this information for pools, it's actually datasets that have this information in ZFS (or at least, the API that TrueNAS has), so it's a bit more involved as the upstream library doesn't yet have support for datasets. Now, I also own that library, so it's not a process problem to get that support, but more of a time problem right now.

sdwilsh commented 3 years ago

And for my, or someone else's, reference in the future, pool.dataset.query gives me all the information I need to accomplish this, and we'd want to query at least the used and available properties.

Weirdly, it doesn't look like the API provides total space, but someone can compute it with used and available numbers (or we can in the integration too).
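The derivation above can be sketched in a few lines of Python (the function name and the example numbers are illustrative, not from the API):

```python
def total_bytes(used_bytes: int, available_bytes: int) -> int:
    """Derive a total, since pool.dataset.query only reports used and available."""
    return used_bytes + available_bytes


# Hypothetical numbers from a dataset query: 250 GB used, 750 GB free.
print(total_bytes(250_000_000_000, 750_000_000_000))  # 1000000000000
```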

sdwilsh commented 3 years ago

A PR that needs some work is up in https://github.com/sdwilsh/aiotruenas-client/pull/92 to add this to the library. I need to rethink some stuff there, but it's a solid starting point.

sdwilsh commented 3 years ago

v0.8.0 of the client library is out that includes Dataset support to provide the values you wanted, @agospher4eg. I'm not sure when I'll have time next to actually add support to the integration, but the hardest part of getting in the library is now done.

sdwilsh commented 3 years ago

I just finished merging in v0.8.0 of the library, so we're pretty close to having this. I'm planning to hold off on releasing the next version until we get this. I probably won't have time to do this for a few weeks still, but @colemamd might get to it before then!

agospher4eg commented 3 years ago

Great news! I'm waiting. I can help with testing =)

colemamd commented 3 years ago

I've been looking at this over the last couple of days, and I now have datasets working with id, type, pool_name, compress_ratio, available_bytes, used_bytes, & total_bytes. I just have a couple questions. The bytes attributes are shown in bytes, vice MB, GB, TB, etc. I think the easiest way to handle that is in aiotruenas_client by pulling value vice rawvalue. But then we'd have to deal with total_bytes. There is also no good value for unique_id that I can find. It still works, just can't use the entity registry. Thoughts?

sdwilsh commented 3 years ago

The bytes attributes are shown in bytes, vice MB, GB, TB, etc. I think the easiest way to handle that is in aiotruenas_client by pulling value vice rawvalue. But then we'd have to deal with total_bytes.

Based on the tests at least, it looks like the rawvalue was typically a string, whereas the parsedvalue was an actual number, which is why I went with that instead for the library.

I'm not sure what you mean about having to deal with total_bytes.

It's worth noting that using bytes will result in a big, not-human-friendly number, but it's not hard to template that to something human readable if someone wanted to use it on a dashboard.
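As an illustration of that templating step (this helper is a sketch, not part of the integration or of Home Assistant), converting a raw byte count into something dashboard-friendly is a few lines:

```python
def human_readable(num_bytes: int) -> str:
    """Format a raw byte count as a short, human-friendly string."""
    value = float(num_bytes)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if value < 1024 or unit == "PiB":
            return f"{value:.1f} {unit}"
        value /= 1024


print(human_readable(1_000_000_000_000))  # 931.3 GiB
```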

There is also no good value for unique_id that I can find. It still works, just can't use the entity registry. Thoughts?

There's a native property on the datasets called guid that never changes. I wonder if we can access that somehow via the API. On the flip side, I don't think one can rename a dataset from the UX, so technically id is unique. Someone could rename from the command line, which is where getting the guid would be nice.
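The fallback idea (use the dataset id, since it cannot be renamed from the UI) can be sketched like this; the class and prefix are hypothetical, and only the `unique_id` property name mirrors Home Assistant's entity model:

```python
class DatasetSensor:
    """Hypothetical sensor wrapper around one dataset from pool.dataset.query."""

    def __init__(self, dataset_id: str) -> None:
        # `dataset_id` is the dataset's full path, e.g. "tank/media".
        self._dataset_id = dataset_id

    @property
    def unique_id(self) -> str:
        # The id cannot be changed from the UI, so it is stable enough to use,
        # at the cost of breaking if someone renames from the command line.
        return f"dataset-{self._dataset_id}"
```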

colemamd commented 3 years ago

I'm not sure what you mean about having to deal with total_bytes.

It's worth noting that using bytes will result in a big, not-human-friendly number, but it's not hard to template that to something human readable if someone wanted to use it on a dashboard.

This is what I meant. Was trying to get a more user-friendly total_bytes, but it can easily be templated if need be.

There's a native property on the datasets called guid that never changes. I wonder if we can access that somehow via the API. On the flip side, I don't think one can rename a dataset from the UX, so technically id is unique. Someone could rename from the command line, which is where getting the guid would be nice.

I was referencing a dump of pool.dataset.query and didn't see a guid at the dataset level, only at the disk level, but you're right, id cannot be altered from the gui, so it is unique. I'll look more into getting guid but I'm not sure that's necessary.

sdwilsh commented 3 years ago

Someone could use the raw zfs commands to rename a dataset, but meh. There is pool.dataset.userprop.query, but I suspect that does not include native properties. We could also file a ticket upstream with iX to get it added to the websocket API for future use.

sdwilsh commented 3 years ago

pool.dataset.userprop.query does not show native properties, alas.

sdwilsh commented 3 years ago

There's an undocumented shell API at websocket/shell we could try to reverse engineer, but I don't think it's worth the investment. It is likely fine to just assume nobody is going to rename it, and if they do, they can deal with the fact that they'll have to delete and resetup the integration.

sdwilsh commented 2 years ago

@colemamd, if you don't have the time to finish this off, I have some time available that I could finish off what you started if you wanted to upload it to your fork.

colemamd commented 2 years ago

I ran out of time a few months ago, started a new job, sorry about that. I have time now, if you give me a few days I can clean up what I had been working on and upload it. I do remember the one thing I was working on is that when you query the datasets, ALL datasets (i.e. child datasets) get pulled. When I was testing on my prod box it was pulling dozens of datasets so I was trying to figure out how to minimize how many get pulled into HA. We could just have all child datasets disabled by default in HA, then the user can go in and enable any that they do want. I should be able to have something uploaded by the end of this week, but I'll keep you updated.
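The disabled-by-default idea could look roughly like this; the property name mirrors Home Assistant's `entity_registry_enabled_default`, and treating a "/" in the dataset id as the mark of a child dataset is an assumption about the id format:

```python
def is_child_dataset(dataset_id: str) -> bool:
    """Assume ids like "tank/media" (containing "/") are children of a pool root."""
    return "/" in dataset_id


class DatasetSensor:
    """Hypothetical sensor wrapper around one dataset."""

    def __init__(self, dataset_id: str) -> None:
        self._dataset_id = dataset_id

    @property
    def entity_registry_enabled_default(self) -> bool:
        # Child datasets start disabled; users enable the ones they care about.
        return not is_child_dataset(self._dataset_id)
```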

agospher4eg commented 2 years ago

It would be great! Thank you!

sdwilsh commented 2 years ago

I have plenty of projects to work on, so happy to let you finish it up. No worries on the delay, and I hope the new job is working out for you!

colemamd commented 2 years ago

I just uploaded a new branch to my fork, https://github.com/colemamd/hass-truenas/tree/datasets. I'm still trying to figure out how to change the depth of children datasets that are pulled, but the api docs are a bit lacking.

sdwilsh commented 2 years ago

Yeah, what I tend to do is grab an API call, and then take that data (with scripts/invoke_method.py) and then play around with tests to get it into a usable state that I'm happy with. I wish they had better docs.

colemamd commented 2 years ago

That's what I've been doing as well, but I'm having problems finding the right arguments to send with scripts/invoke_method.py for pool.dataset.query. I've been able to do it directly via ws, but get an error when using the script.

sdwilsh commented 2 years ago

Passing arguments can be tricky, and I should probably improve the help/documentation for that script to make it clearer (especially because I think I've forgotten myself). If you have some python code that works, I can figure out what that command line should look like later tonight for you (and add the docs).

colemamd commented 2 years ago

Actually I've got a websocket browser extension I've been using to pass json. {"id":"78430116-ef33-47c9-9715-bbe472f5fad0","msg":"method","method":"pool.dataset.query","query-filters":["query-options.extra.flat",false]} is what I can pass directly to the server and get the appropriate response, but scripts/invoke_method.py gives me a KeyError: 'result' when I pass python3 scripts/invoke_method.py pool.dataset.query --arguments '{"query-filters":["query-options.extra.flat",false]}'.

Edit: Just want to add, I believe it's because scripts/invoke_method.py assumes the params key is used, but what I'm trying to pass is the query-options key

Edit2: Using python3 scripts/invoke_method.py pool.dataset.query --arguments '[[["type", "=", "FILESYSTEM"]]]' will return all datasets of type FILESYSTEM, so I'm making progress.
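For reference, the shape of the working message is easy to build by hand; the field names come straight from the JSON above (with the filters under params, per the edit), and the helper itself is just a sketch, not part of the client:

```python
import json
import uuid


def build_method_call(method: str, params: list) -> str:
    """Build a TrueNAS websocket method call as a JSON string.

    `params` carries the positional arguments, e.g. the query-filters list
    for a *.query method.
    """
    return json.dumps({
        "id": str(uuid.uuid4()),
        "msg": "method",
        "method": method,
        "params": params,
    })


# The filter form that worked with invoke_method.py: datasets of type FILESYSTEM.
msg = build_method_call("pool.dataset.query", [[["type", "=", "FILESYSTEM"]]])
```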

agospher4eg commented 2 years ago

Great work! Thank you!

sdwilsh commented 2 years ago

I think @colemamd is planning to add a bit more support for pools here :)

colemamd commented 2 years ago

I've got encryption status working now. What else are you looking for @agospher4eg? As previously mentioned, the API only provides available space and used space from the datasets, not pools.

agospher4eg commented 2 years ago

It was exactly what I wished for. [Screenshot from 2022-01-12 09-56-00]

sdwilsh commented 2 years ago

Alright, @colemamd, we can get what you have merged in, and deal with other additions later then :D