canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

Support for filtering REST collections #3364

Closed — geodb27 closed this issue 4 years ago

geodb27 commented 7 years ago

As of now, when issuing a GET on https://lxd-server:8443/1.0/containers, one gets a dict listing all containers hosted on the lxd-server. Would it be possible to add the ability to filter the results, with something like this for example: https://lxd-server:8443/1.0/containers?limits.memory=2GB&limits.processes=200, to get a dict filtered by the given parameters? I've of course tried this, but it doesn't seem to work so far. The idea is to be able to filter the results before getting an answer, so that the process would cost less when many containers are hosted.

Hoping that this feature could be helpful to many.

stgraber commented 7 years ago

So it's certainly something we can look into at some point. Implementing the filtering shouldn't be very difficult; the tricky part is coming up with the right language for those filters. I expect we'd want support for more complex logic in there, and a way to differentiate filters on direct object properties (name, architecture, ...) from filters on nested object properties (config and devices).

So it's going to take quite a bit of thinking to come up with something that's reasonably generic and future-proof.

If your main issue right now is that you're issuing one GET per container listed, to then go and check whether you care about the container or not, you could at least optimize things a bit with:

GET /1.0/containers?recursion=1

That will get you everything, including all the properties you care about, in a single request, which makes client-side filtering quite a bit easier.
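
As a rough sketch (not something LXD ships for this, just an illustration using the Go client package over the local unix socket), fetching everything in one request and filtering client-side could look like:

    package main

    import (
        "fmt"

        lxd "github.com/lxc/lxd/client"
    )

    func main() {
        // Connect over the local unix socket (adjust for HTTPS with
        // certificates as needed).
        c, err := lxd.ConnectLXDUnix("", nil)
        if err != nil {
            panic(err)
        }

        // One request returns every container along with its full config,
        // equivalent to GET /1.0/containers?recursion=1.
        containers, err := c.GetContainers()
        if err != nil {
            panic(err)
        }

        // Filter client-side, e.g. on the limits.memory config key.
        for _, ct := range containers {
            if ct.Config["limits.memory"] == "2GB" {
                fmt.Println(ct.Name)
            }
        }
    }

The same approach works with a plain HTTPS GET on /1.0/containers?recursion=1 if you're not using the Go client.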

geodb27 commented 7 years ago

Thanks a lot for your quick answer stgraber :-) As a matter of fact, and to give you some more context, I have hosts that run many containers. My work consists of initializing and setting up these containers; they are then handed over to the co-workers who administer them. That's why I've written my own GUI for this. But I've run into some trouble while writing it: there is no easy way to filter the container list, and for now I have no way to show each team only the containers that belong to it. However, I've seen that there are user.something config keys that one can use to attach custom properties to each container, which is what led me to this idea. In the meantime, I'll see what I can get out of the ?recursion=1 you pointed me to, since I suspect it will also help with another part of my GUI.

42phoenix42 commented 6 years ago

@stgraber hi, I think the most needed filtering at the moment is by name, for example /1.0/containers?name=test1,test2,test3, to reduce the response time and payload when you have many containers.

jdeans289 commented 4 years ago

Hi, I'm a student taking a virtualization class at UT Austin, currently collaborating with one other student. @stgraber Could we get more information about this issue if it's still available?

stgraber commented 4 years ago

Hi @jdeans289,

I can't say that I've been thinking that much about this issue since it was first reported 2.5 years ago :)

My guess is that we would want a way to pass filter= as part of the REST query, with some language used to filter based on the fields in the associated structs.

As a first step, I suspect we'd want this added to both /1.0/instances and /1.0/images. filter= should let you filter on any properties of the associated structs and let you combine those with basic logic (AND/OR/NOT).

So I should be able to hit /1.0/instances and get all containers which have security.privileged set to true, a status of Running, and aren't on cluster03 (the location field). So effectively something like: config.security.privileged=true && status=Running && location!=cluster03

I don't know if there's any standard way to encapsulate that in a REST URL though, so that would need a bit of googling to see what's common practice. On the LXD side, the filter should be extracted when processing those two endpoints, then fed to a generic function which takes both the filter and the structs that are about to be returned and then proceeds to filter the result.
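
One common convention is to percent-encode the whole expression into a single filter= query parameter. A minimal Go sketch of that encoding (the filter syntax itself being the hypothetical one above):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The hypothetical filter expression from the example above.
        filter := "config.security.privileged=true && status=Running && location!=cluster03"

        // url.Values handles the percent-encoding of =, && and spaces.
        v := url.Values{}
        v.Set("filter", filter)

        fmt.Println("/1.0/instances?" + v.Encode())
        // /1.0/instances?filter=config.security.privileged%3Dtrue+%26%26+status%3DRunning+%26%26+location%21%3Dcluster03
    }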

jdeans289 commented 4 years ago

Hi @stgraber,

Apologies for not updating more frequently. My partner and I have been working on this issue and have added filtering to the /1.0/instances endpoint so far. However, we are struggling to make our functions general enough to handle both instances and images, due to differences in the structure of the objects. Is there some generic interface we are not finding, or would it be acceptable to handle these filters separately? For instances, we are using an instance.Instance interface to get the information needed for filtering.

stgraber commented 4 years ago

If it's easier, I guess you can write separate filtering code for []api.Instance (and api.InstanceFull) and []api.Image. A generic way would be to effectively have a function taking []interface{} and the filter provided by the user and then make use of reflect to match the filter against the data.

Reflect should let you take a filter like UpdateSource.Protocol=simplestreams and apply it to a []api.Image by looking at the fields available on the api.Image struct and matching them against what's provided by the user. It's somewhat tricky because reflect requires runtime type checking; if you do it wrong, Go will panic. So you'll need to decide what types you're willing to support and properly ignore fields of a different type.
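
Something along these lines, as a minimal sketch with stand-in structs rather than the real api.Image (only string leaf fields are handled; everything else is treated as a non-match to avoid panics):

    package main

    import (
        "fmt"
        "reflect"
        "strings"
    )

    // matchField walks a dotted path like "UpdateSource.Protocol" through a
    // struct via reflection and compares the leaf field against want.
    func matchField(obj interface{}, path string, want string) bool {
        v := reflect.ValueOf(obj)
        for _, name := range strings.Split(path, ".") {
            if v.Kind() == reflect.Ptr {
                v = v.Elem()
            }
            if v.Kind() != reflect.Struct {
                return false
            }
            v = v.FieldByName(name)
            if !v.IsValid() {
                return false
            }
        }
        // Only support string fields; ignore anything else rather than panic.
        return v.Kind() == reflect.String && v.String() == want
    }

    // Stand-ins for api.Image and its UpdateSource field.
    type updateSource struct{ Protocol string }
    type image struct {
        Fingerprint  string
        UpdateSource updateSource
    }

    func main() {
        images := []image{
            {Fingerprint: "abc123", UpdateSource: updateSource{Protocol: "simplestreams"}},
            {Fingerprint: "def456", UpdateSource: updateSource{Protocol: "lxd"}},
        }

        // Keep only images matching UpdateSource.Protocol=simplestreams.
        for _, img := range images {
            if matchField(img, "UpdateSource.Protocol", "simplestreams") {
                fmt.Println(img.Fingerprint) // prints abc123
            }
        }
    }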

https://github.com/lxc/distrobuilder has something similar, allowing the user to override arbitrary struct fields through the CLI (-o path.key=value). This is handled by SetValue in shared/definition.go and uses some limited reflection to track down the field requested by the user and, in that case, set it rather than read it for comparison.

jdeans289 commented 4 years ago

Thank you, this was very helpful! We are finishing up our implementation and will be opening a pull request soon.

stgraber commented 4 years ago

This has now been implemented.