Ping @dmetzger57 , @abellotti , @agrare .
Guys, this is an API feature request that goes beyond what the UI team members have done on the API so far. We need someone from the API team to implement this, or at least provide some instructions on how this should be done in the API so that it will get merged.
Can we get such help, please?
As this is an API request, I look to @gtanzillo and @abellotti to assess when (I didn't say "if" 😄 ) we can help
I kinda started working on the API support but ran into the obstacles I've described above. I would be able to implement this thing in the API with some copy-paste engineering from the UI, but I don't think anyone would merge such :spaghetti:
My idea was to utilize the `AuthenticationMixin.validate_credentials_task` method in an `/api/providers/validate` endpoint that would return a task ID. We already have a mechanism to poll a task's status via the API; however, the generation of the input arguments for the previously mentioned method is a mess...
There's a method included in each provider to create a `MiqTask` that checks the credentials; however, the input arguments for this task are different for each provider.
Yeah, I never liked that this used `raw_connect` because the parameters are different for every provider. I would rather we design a common `ems.connect()` interface that can be used generically.
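Something like this, roughly (a minimal sketch; `connect_options` below is a hypothetical per-provider helper, not an existing ManageIQ method):

```ruby
# Rough sketch of a common connect interface. connect_options is an assumed
# helper that each subclass implements to build its own raw_connect arguments.
class ExtManagementSystem < ApplicationRecord
  def connect(options = {})
    # Each provider subclass assembles its provider-specific raw_connect
    # arguments from its stored endpoints/authentications, so callers
    # never need provider-specific knowledge.
    self.class.raw_connect(*connect_options(options))
  end
end
```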
Related to https://github.com/ManageIQ/manageiq-api/issues/585 ?
I personally think this is extremely important for a solid API experience and greater pluggability on the UI side.
`raw_connect` was supposed to be the common interface, but then as new providers came on board, the pattern wasn't followed.
:+1: Yeah agreed
Oh really? Usually all the `raw_` methods are the provider-specific version of the common interface.

I don't think any of the `raw_connect` methods have the same parameters but I'll have to check
Amazon introduced the "service" name, and at the time, if I recall, we added an `options` parameter to deal with provider specifics...then it got refactored...and then I have no idea what happened 😆
Haha okay, well here is what we've got to work with:
```
./manageiq/app/models/manageiq/providers/embedded_ansible/provider.rb: def self.raw_connect(base_url, username, password, verify_ssl)
./manageiq-providers-amazon/app/models/manageiq/providers/amazon/manager_mixin.rb: def raw_connect(access_key_id, secret_access_key, service, region,
./manageiq-providers-redfish/app/models/manageiq/providers/redfish/manager_mixin.rb: def raw_connect(username, password, host, port, protocol)
./manageiq-providers-kubevirt/app/models/manageiq/providers/kubevirt/infra_manager.rb: def self.raw_connect(opts)
./manageiq-providers-google/app/models/manageiq/providers/google/manager_mixin.rb: def raw_connect(google_project, google_json_key, options, proxy_uri = nil, validate = false)
./manageiq-providers-ovirt/app/models/manageiq/providers/redhat/infra_manager/api_integration.rb: def raw_connect(opts = {})
./manageiq-providers-openshift/app/models/manageiq/providers/openshift/container_manager_mixin.rb: def raw_connect(hostname, port, options)
./manageiq-providers-ansible_tower/app/models/manageiq/providers/ansible_tower/shared/provider.rb: def raw_connect(base_url, username, password, verify_ssl)
./manageiq-providers-vmware/app/models/manageiq/providers/vmware/manager_auth_mixin.rb: def raw_connect(server, port, username, password, api_version = '5.5', validate = false)
./manageiq-providers-vmware/app/models/manageiq/providers/vmware/infra_manager/vim_connect_mixin.rb: def raw_connect(options)
./manageiq-providers-nuage/app/models/manageiq/providers/nuage/manager_mixin.rb: def raw_connect(username, password, endpoint_opts)
./manageiq-providers-openstack/lib/manageiq/providers/openstack/legacy/openstack_handle/handle.rb: def self.raw_connect_try_ssl(username, password, address, port, service = "Compute", options = nil,
./manageiq-providers-openstack/lib/manageiq/providers/openstack/legacy/openstack_handle/handle.rb: def self.raw_connect(username, password, auth_url, service = "Compute", extra_opts = nil)
./manageiq-providers-openstack/app/models/manageiq/providers/openstack/manager_mixin.rb: def raw_connect(password, params, service = "Compute")
./manageiq-providers-lenovo/app/models/manageiq/providers/lenovo/manager_mixin.rb: def raw_connect(username, password, host, port, auth_type, verify_ssl, user_agent_label, validate = false, timeout: nil)
./manageiq-providers-kubernetes/app/models/manageiq/providers/kubernetes/virtualization_manager_mixin.rb: def raw_connect(options)
./manageiq-providers-kubernetes/app/models/manageiq/providers/kubernetes/container_manager.rb: def self.raw_connect(hostname, port, options)
./manageiq-providers-kubernetes/app/models/manageiq/providers/kubernetes/monitoring_manager_mixin.rb: def raw_connect(options)
./manageiq-providers-scvmm/app/models/manageiq/providers/microsoft/infra_manager.rb: def self.raw_connect(connect_params, validate = false)
./manageiq-providers-azure/app/models/manageiq/providers/azure/manager_mixin.rb: self.class.raw_connect(client_id, client_key, azure_tenant_id, subscription, options[:proxy_uri] || http_proxy_uri, provider_region, default_endpoint)
./manageiq-providers-azure/app/models/manageiq/providers/azure/manager_mixin.rb: def raw_connect(client_id, client_key, azure_tenant_id, subscription, proxy_uri = nil, provider_region = nil, endpoint = nil)
./manageiq-providers-foreman/app/models/manageiq/providers/foreman/provider.rb: def self.raw_connect(base_url, username, password, verify_ssl)
```
So some of them take just an options hash but the vast majority take whatever they need for that provider
@agrare Ok, let's design a consistent interface at the model level somehow and then we can expose that via the `/api/providers/validate` that @skateman is proposing. We can take inspiration on how to design that from the if-else constructs at the UI level (since that's basically everything we'll need to do anyway).
bump
@agrare Assigning to you just for tracking...we still need to design this.
@Fryguy and I discussed this and we are thinking something like this:
At the provider class level each provider would define their "parameters_for_create" in a JSON-schema format, so taking Amazon as an example: `ManageIQ::Providers::Amazon::CloudManager.params_for_create`
Then `OPTIONS` on `/api/providers` would return the params for the different provider types.
We would add a (names just an example, I'm terrible at naming) `/api/providers#verify_credentials` action which would take the filled-in parameters and pass them to a new `ExtManagementSystem.verify_credentials` method common to all providers. This will return a task which can be tracked for success/failure.
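Roughly, the common entry point could look something like this (a sketch only; the argument names are assumptions):

```ruby
# Sketch of the proposed common entry point. MiqTask.generic_action_with_callback
# is the existing task-queueing helper; the argument names are assumptions.
class ExtManagementSystem < ApplicationRecord
  def self.verify_credentials_task(userid, zone, params)
    task_opts = {
      :action => "Verify EMS Credentials",
      :userid => userid
    }
    queue_opts = {
      :class_name  => name,
      :method_name => "verify_credentials",
      :args        => [params],
      :zone        => zone
    }
    # Returns a task id the API can hand back for polling via /api/tasks/:id
    MiqTask.generic_action_with_callback(task_opts, queue_opts)
  end
end
```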
I'm thinking that the format for `params_for_create` will be either DDF like we have in sources-api for the different source_types (https://github.com/ManageIQ/sources-api/blob/master/db/seeds.rb#L23-L31) or something like an OpenAPI spec.
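To make that concrete, a hand-written guess at what `params_for_create` could return in DDF format, loosely modeled on the sources-api seed data linked above (component names and field keys are illustrative, not final):

```ruby
# Illustrative only: a DDF-style schema for the Amazon provider; the exact
# components and field names are assumptions based on the sources-api seeds.
module ManageIQ::Providers::Amazon
  class CloudManager
    def self.params_for_create
      {
        :fields => [
          {:component => "text-field",     :name => "provider_region",            :label => "Region"},
          {:component => "text-field",     :name => "endpoints.default.userid",   :label => "Access Key ID"},
          {:component => "password-field", :name => "endpoints.default.password", :label => "Secret Access Key", :type => "password"}
        ]
      }
    end
  end
end
```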
@agrare this is perfect, cc @Hyperkid123
> I'm thinking that the format for `params_for_create` will be either DDF like
Since we plan to use data-driven forms for the component, it would be great to use the DDF format for the authentication partials.
@Hyperkid123 sounds good to me! I was planning on using the parameters from sources-api for amazon as a starting point (ref).
Updated the description with a list of TODO items to complete this.
I'll start working on the `verify_credentials` API endpoint.
@skateman we created a method on `ExtManagementSystem` that you can call from the API (https://github.com/ManageIQ/manageiq/pull/19346). It will return a `task_id` which can be given to the UI to wait for it to finish. It will update the task message with the error if validation failed.
@Fryguy @agrare we would also need a method for adding a new provider that accepts the same arguments as the validation. Then I'll expose it as an API endpoint, assuming we don't want to break compatibility with the current `POST /api/providers`.
@skateman :+1: can you add a TODO item in the checklist above?
Sure @agrare, both there.
Apologies if this messes up your search strings, but I've changed the title of this issue to better reflect the direction we've gone. This issue has now become an overarching epic to have "API driven, pluggable, provider creation and validation", so that's what I changed it to.
This is a nice-to-have for Jansa. Seems like we are pretty close to the finish line.
@chessbyte we are currently dealing with an issue with prefilling dynamically loaded schemas with values in the UI, but it seems that yesterday we found a solution. I think that was the last major thing that was missing. @skateman?
Yeah, @Hyperkid123 saved the (yester)day.
Added a bullet to the OP about:

> Add a `service` column to the endpoints table and fix its usage in the providers where it is using role instead.

This is based on the issue @skateman found where certain providers used the `role` column incorrectly, which, in turn, makes it impossible to use DDF. In those providers, instead of `role` being something like "metrics", it has multiple values like "prometheus" and "hawkular", and then there is UI logic to put those both under a "Metrics" tab with a radio button to choose between them. Instead, we will add a new column to endpoints named `service`, which will hold that multi-value, and then expose that properly over DDF.
Discussed the structure of the DDF payload (specifically in https://github.com/ManageIQ/manageiq-providers-nuage/pull/199, but thinking generically) and I think we need 2 things.

The structure of the DDF needs to be the API and not tied to the database structure, which can change. In the above PR, initially authentications and endpoints were separate parts of the payload, but I think it should be a single thing. So, I think it should have an `"endpoints"` key (or some other term like `"connections"`, the name doesn't matter), which is a hash of endpoints whose keys are the "role" and whose values are the provider-specific values. If there are multiple sub-choices, then "type" is the expected field. Example for nuage:
"endpoints" => {
"default" => {
"username" => String,
"default_userid" => String,
"default_hostname" => String,
"default_api_port" => Integer,
"default_security_protocol" => String,
"password" => String,
},
"events" => {
"type" => String,
"hostname" => String,
"amqp_userid" => String,
"amqp_api_port" => String,
"password" => String,
"server" => String,
"port" => Integer,
"protocol" => String
}
}
Only thing I'm not sure about is the "amqp_" prefix on some of the keys. Ideally it would just be `user_id` and `api_port`, but I'm not sure DDF can support that when there are multiple.
Additionally, keeping this outer structure consistent may also allow us to make methods like `create_from_ddf` a bit more generic.
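For example, with a stable `"endpoints"` key the generic version could be roughly this (a sketch; the attribute names sliced out below are examples, not a real mapping):

```ruby
# Rough sketch of a generic create_from_ddf over the payload shape above.
# The sliced attribute names are illustrative, not the final mapping.
def self.create_from_ddf(params)
  ems = new(params.except("endpoints"))
  (params["endpoints"] || {}).each do |role, values|
    # The hash key is the endpoint role; the values are provider specific.
    ems.endpoints << Endpoint.new(
      values.slice("hostname", "port", "security_protocol").merge("role" => role)
    )
    ems.authentications << Authentication.new(
      values.slice("userid", "password").merge("authtype" => role)
    )
  end
  ems.save!
  ems
end
```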
One problem with this structure is that it doesn't map to the regular API in a direct way. The API right now supports `create` and `edit` in DDF format, but not `get` [1], where the regular API is used by the UI code, so without the direct mapping the `get` code is complicated or non-pluggable. There are 2 approaches to fix this:

1. Fix the underlying database usage, updating the `authtype` and `role` values accordingly. However, the existing UI forms and API would have to change to accommodate this while we are developing out this DDF feature. We are also not sure if anything else using these fields in the API would be affected.
2. Have the pluggable model return the data in a DDF `get` fashion, mirroring what is expected for `create` and `edit`. This would fall in line with the DDF structure not following the database (which the API currently does). Additionally, it's strange (at least to me) that we have `create` and `update` but not `get` in DDF format, so this would introduce some consistency there. On the UI code side, this becomes a very simple get, removing the logic when fetching the data. This particular change would require no database changes, as the pluggable model would be responsible for returning the correct data, and thus the old forms should still work. Once the old forms are gone and we can change the backend usage in the database, then updating these model methods would be relatively straightforward.

@skateman Thoughts?
Your endpoints/authentications schema isn't fully accurate, none of the fields have a `default_` or `amqp_` prefix. I don't know where it is coming from; check for example the azure or amazon providers. And yes, DDF can support that when there are multiple.
The mapping is indirect, but generic, i.e. it's the same for every authentication/endpoint. It's more of an HTML form attributes limitation. The actual submitting of the form maps 1:1 to the API/database structure; we're not actually submitting in the DDF format, we're submitting in the same format as the DB structure expects it. I started making an add/edit method for each provider separately but as far as I see, eventually I can merge them into a single common pair of add/edit. The only mapping I'm doing is the reverse for the endpoints/authentications ... again, it's an HTML problem, not a DDF one.
I'm definitely for the first solution, the change in the existing UI forms should be minimal as we'd only consume the same information, just from a different field. If that's not acceptable, we could add an additional field that would say `events` or whatever (even `default` duplicated) to keep backwards compatibility with the API/old UI.

I am strongly against the second solution, because it goes against the separation of concerns that we're trying to achieve across the UI. Also, it doesn't protect us from trivial errors like passing down the password field to the client without any embedded restrictions (which the API provides right now). We discussed this with @Hyperkid123 multiple times; he might be able to tell us more.
> Your endpoints/authentications schema isn't fully accurate, none of the fields have a default or amqp prefix.
I threw it together by hand, so yes, there are details like that...I'll update.

EDIT: default and amqp are in there, so where?

EDIT 2: Also, this is not the schema, this is what a resultant payload looks like that we'd see in `verify_credentials` or `create`. I thought it was simpler to present it that way.
> we're not actually submitting in the DDF format, we're submitting in the same format as the DB structure expects it. ... goes against the separation of concerns that we're trying to do across the UI
I thought the posted payload was in DDF format and sent directly with a key `ddf: true` indicating that it's DDF format. I'm not sure I understand the separation of concerns in this particular case, because we are already posting DDF, and we've already chosen that as our API for schema/create/update, just not `get`.
> the change in the existing UI forms should be minimal
I was under the impression this would be complicated. If this is easy, then yeah, let's go all the way with solution one.
> I thought the posted payload was in DDF format and sent directly with a key ddf: true indicating that it's DDF format. I'm not sure I understand the separation of concerns in this particular case, because we are already posting DDF, and we've already chosen that as our API for schema/create/update, just not get.
No, `ddf: true` means that we're not using the default API handler, which so far doesn't know how to handle endpoints and authentications.

Separation of concerns: form schema + data, we would not want to mix them if possible.
I was expecting that the current REST API is not sufficient for handling the data posted from a DDF form, so I started implementing the `create_from_params` and `edit_with_params` for each provider separately. There's a `ddf: true` param set on the form that helps me distinguish on the API level if we're sending a regular API request or data from a DDF form.

As I am working my way through the various provider forms, I realized that the `create_from_params` and `edit_with_params` methods are 99.9% the same in each case and can be totally eliminated, i.e. we could just use the already existing API endpoints, without any `ddf: true` magic.
The only DDF-specific (and HTTP form) problem is that the nested endpoints/authentications cannot be consumed as an array of objects, so I am converting them to a hash of objects on the frontend. Then I am submitting it in the same way, which is a mistake; I can simply convert them back on the frontend before submitting.
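The conversion itself is trivial either way; in Ruby terms it's just this (the frontend does the equivalent in JavaScript, and the keys/values below are examples):

```ruby
# The array-of-objects <-> hash-of-objects conversion described above,
# shown with plain Ruby; the sample keys and values are illustrative.
endpoints_array = [
  {"role" => "default", "hostname" => "vcenter.example.com"},
  {"role" => "events",  "hostname" => "amqp.example.com"}
]

# Array -> hash keyed by role, so the form can address each field statically:
endpoints_hash = endpoints_array.to_h { |e| [e["role"], e.except("role")] }

# Hash -> array, restoring the shape the API/database expects on submit:
restored = endpoints_hash.map { |role, e| e.merge("role" => role) }
```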
On top of this issue/task, I found another problem when playing around with the Foreman Configuration Manager. Apparently, on top of the Configuration Manager (EMS) there's a parent `Provider` record that holds the `endpoints` and `authentications`. I bridged this problem by delegating these two on the plugin level. However, there's an additional issue with the `name` field in the context of `EMS` vs `Provider`.
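The bridging mentioned above is essentially a delegation like this (a sketch of the idea, not the exact plugin code):

```ruby
# Sketch: the Foreman ConfigurationManager delegating to its parent Provider
# record, which actually owns the endpoints and authentications.
class ManageIQ::Providers::Foreman::ConfigurationManager < ManageIQ::Providers::ConfigurationManager
  delegate :endpoints, :authentications, :to => :provider, :allow_nil => true
end
```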
The `ems.name` is never set directly (except :confused: through the REST API); it's set as `#{provider.name} Configuration Manager`. So whenever I edit the EMS through the current UI, I actually load up the `provider.name` and set it when posting the form, and it automatically propagates the suffixed name to the `EMS`. Thanks to the delegation mentioned above, there's no problem in creating/editing the EMS/Provider. However, we load the new DDF form's values through the API and the `Provider` isn't really exposed there. So whenever I ask for the `name`, it will return the suffixed one. When I submit that form, which object's `name` should be updated?
Currently the API doesn't allow editing of a `Provider`, so I could just edit the name of the `EMS` and the `Provider` would keep its old name. Is this okay?
Oh, I'm also not sure if it's possible to create a Foreman provider through the API in the regular way ... my guess is not :disappointed: because we need to create the parent `Provider` and the API doesn't really do that AFAIK.
So after some experiments, I'm back with some updates:
Multi-protocol endpoints are fixed now by using a special component that sorts out the protocol selection without actually requiring anything on the backend. The component is called `protocol-selector` and it renders as a `select-field`. For each defined option you can specify a `pivot` option that selects the given option when the form initializes if the `pivot` field is set. Based on the value of this new component, it's possible to set conditions and show/hide specific fields. There's an example implemented for OpenStack here.
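Based on that description, a `protocol-selector` field in the schema might look roughly like this (option values borrowed from the OpenStack events case; the exact keys are illustrative assumptions):

```ruby
# Illustrative protocol-selector field: pivot names a field whose presence
# decides which option gets preselected when the form initializes.
{
  :component => "protocol-selector",
  :name      => "endpoints.events.type",
  :label     => "Type",
  :options   => [
    {:label => "AMQP",       :value => "amqp", :pivot => "endpoints.events.amqp_hostname"},
    {:label => "Ceilometer", :value => "ceilometer"}
  ]
}
```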
Converting of endpoints/authentications is still necessary; my only hope was to use a `FieldArray` component. However, it doesn't really support non-homogenous schemas, meaning that each endpoint/authentication would have to have the same fields. So I am thinking about a temporary conversion inside the component that's not propagated to the API: we convert from an array of objects to a single object, and before submitting we convert it back. Basically I'd like to move the conversion from the providers to the frontend. There are two tradeoffs though:
> what if we create the foreman::configuration_manager instead of the foreman::provider
However, the cloud provider forms don't depend on Foreman, so I guess I can have them working by the end of this week.
@agrare @Fryguy there's an issue with network manager providers as they're implicitly tied to a `parent_manager`, so `ManageIQ::Providers::Nuage::NetworkManager.supported_for_create?` will return `false`. I know that all the other network managers are tied to a parent manager, but Nuage and also NSX-T aren't.
Kudos and big thank you to @skateman @himdel @agrare @martinpovolny @Hyperkid123 @Fryguy and anybody else involved in getting this epic across the finish line!
The API doesn't provide any kind of credential validation when adding a provider, while the UI implements this feature. Therefore, it is possible to add a provider with invalid credentials through the API, and we don't even have an endpoint to check whether the credentials are correct.
There's a method included in each provider to create a `MiqTask` that checks the credentials; however, the input arguments for this task are different for each provider. On the UI side this difference is compensated for via :fire_engine: :spaghetti: :fire: if-else structures testing which provider is currently used. This goes totally in the opposite direction of our philosophy of having pluggable providers. Ideally we don't want anything in the UI that sends data to the backend based on the provider's type.
We're working with @Hyperkid123 on the API-driven provider forms and this is kinda blocking us.
ping @martinpovolny @Ladas @agrare
TODO:

- Add a common credential verification method to the providers (backed by `.raw_connect`) (@agrare https://github.com/ManageIQ/manageiq/pull/19346)
- ~~Change `verify_credentials_task` to `.raw_connect_task` so that we can add the new `validate_credentials_task` before the UI is updated without breaking anything~~ We decided to name the new method `verify_credentials_task` and deprecate `validate_credentials_task`, given the methods on the provider are called `verify_credentials`.