ibm-quantum / Hub-Automation

Apache License 2.0

Enabling the retrieval of backend system names #2

Closed pshires closed 3 years ago

pshires commented 3 years ago

Hello,

The system-usage endpoint requires a backend name to be passed via the options/query parameters. Is it possible to retrieve a list of all backends via an API request so that we could automate the gathering of system-usage data? An endpoint alone would be sufficient; I would be happy to then open a PR adding either documentation of the endpoint to the README or an additional script that could be integrated into this client.

Matt-Stypulkoski commented 3 years ago

I believe the backends used in the get_analytics_for_users.py script are pulled from a separate CSV file named analytics_data.csv. If by automating it you mean pulling all backends and then inputting each backend for each given user, that would technically be possible, but it would require reworking the script a bit. The endpoint to retrieve all available backends would also depend on the level in the Hub/Group/Project hierarchy that you want to retrieve the backends for.

I guess my main question is, are you trying to automatically retrieve all backends for a specific level in that hierarchy to then retrieve analytics for each of those backends? Or is it something else?

pshires commented 3 years ago

To respond to your main question: I am trying to do exactly that. Automatically retrieve all backends for a specific level in the hierarchy, then retrieve analytics for each of those backends. Ideally we could make a GET request to an endpoint and receive a JSON response with a list of backends; that would let us pass each backend name to the system-usage endpoint directly (via correctly defined options, line 216), with no need for a CSV file at all.
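For illustration, the flow described above could be sketched roughly like this. Both callables here (`list_backends` and `get_system_usage`) are hypothetical stand-ins for whatever client functions end up wrapping the actual endpoints; nothing in this sketch reflects a confirmed API.

```python
def gather_usage(list_backends, get_system_usage):
    """Hypothetical automation loop: fetch all backend names once via an
    API call, then query the system-usage endpoint for each one, with no
    CSV file involved. Both arguments are placeholder callables."""
    return {name: get_system_usage(name) for name in list_backends()}
```

A long-running service would then just call something like this once daily and write the resulting mapping to a database.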

It might be easier if I just describe exactly what our goal is initially. We are looking to retrieve all of the analytics data available to us via get requests (probably once daily), store that data in a database, and then display that data to users in a meaningful and easily accessible way. This would be a long-running service that requires no manual intervention at all.

I understand that some users of this client may need the CSV-file workflow, but for our needs we only need the endpoints; we handle the responses ourselves. The usage of the endpoints is straightforward and documented via the source code itself. That being said, if you did decide to provide us with this endpoint and would like it included in the client, either as a standalone new script or integrated into get_analytics_for_users.py, I would be happy to do that work myself and open a PR (and make any changes desired, etc.).

I realize my response is rather long-winded, but I hope that this provides more clarification as far as what I am actually trying to do.

Matt-Stypulkoski commented 3 years ago

There are endpoints from which you can retrieve a list of backends.

Hub level (must be Hub admin): `GET Network/{hubName}/devices`

Group level (there is no direct devices endpoint at this level, so you must pull the full Hub data; must be Hub admin): `GET Network/{hubName}`, then parse down to the groups list, pick your specific group, and parse down to the device list in that group.

Project level (must be at least Group admin): `GET Network/{hubName}/Groups/{groupName}/Projects/{projectName}/devices`
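The group-level case is the only one that needs client-side parsing. A minimal sketch of that step is below; note that the payload shape assumed here (a `"groups"` mapping whose entries contain a `"devices"` mapping) is inferred from the hierarchy described above, not a documented response schema.

```python
def group_backends(hub_payload, group_name):
    """Extract device names for one group from a GET Network/{hubName}
    response. The payload shape is an assumption for illustration:
    {"groups": {<groupName>: {"devices": {<deviceName>: {...}, ...}}}}.
    Returns a sorted list of device names, or [] if the group is absent."""
    group = hub_payload.get("groups", {}).get(group_name, {})
    return sorted(group.get("devices", {}))
```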

kwehden commented 3 years ago

@pshires, is this something you feel comfortable working with? If so, we can help by reviewing PRs or proposed implementation. Please let us know.

pshires commented 3 years ago

@kwehden, this is exactly what I was hoping to get. I've tested out the endpoints and seen the responses; the info I need is there. Thank you @Matt-Stypulkoski

Would you all like me to open a PR that adds a new script using these endpoints? I'd be happy to do that and make desired changes, etc.

kwehden commented 3 years ago

Absolutely, please do! We'd be happy to review and merge. Please tag me and @Matt-Stypulkoski for review.