Open ruflin opened 3 years ago

Kibana can be run in different environments. Two of them are Elastic Cloud (ESS) and Elastic Cloud Enterprise (ECE). For Fleet it is relevant to know in which environment it is running, to expose the correct configuration options.

In some of our code, we seem to already make the difference: https://github.com/elastic/kibana/blob/master/src/plugins/telemetry/server/collectors/usage/telemetry_usage_collector.ts#L86
> In some of our code, we seem to already make the difference: https://github.com/elastic/kibana/blob/master/src/plugins/telemetry/server/collectors/usage/telemetry_usage_collector.ts#L86
The telemetry referenced here isn't actually determining the environment from within Kibana, but rather reading from a config/telemetry.yml file which ECE, ESS, and ECK populate with their own static data.
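To illustrate the static-file approach described here, a minimal sketch of reading such a file follows; the `deployment_type` key is hypothetical, and this is not the actual Kibana collector code:

```typescript
import { readFileSync } from 'fs';
import { load } from 'js-yaml';

// The orchestrator (ECE/ESS/ECK) drops a static file such as:
//
//   # config/telemetry.yml
//   usage:
//     deployment_type: "ess"   # hypothetical key, for illustration only
//
// Kibana then simply parses whatever it finds there -- no detection involved.
export function readStaticTelemetry(path: string): Record<string, unknown> | undefined {
  try {
    return load(readFileSync(path, 'utf8')) as Record<string, unknown>;
  } catch {
    // No file (or unreadable file) likely means a self-managed deployment.
    return undefined;
  }
}
```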
The monitoring plugin also has an old cloud "detector" which attempts to determine known cloud services by hitting their APIs and seeing if it gets a response; however, this appears to be unused (I'm planning to look into it to see if it still actually works).
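For context on how that detection style generally works: cloud providers expose well-known instance-metadata endpoints, and whichever one answers identifies the host. A rough sketch, illustrative rather than the monitoring plugin's actual code, assuming Node 18+ for the global `fetch`:

```typescript
// Probe the well-known instance-metadata endpoints; whichever responds
// tells us which provider we're running on.
type Provider = 'aws' | 'gcp' | 'azure' | 'unknown';

async function probe(url: string, headers: Record<string, string> = {}): Promise<boolean> {
  try {
    const res = await fetch(url, { headers, signal: AbortSignal.timeout(1000) });
    return res.ok;
  } catch {
    return false; // unreachable endpoint => not this provider
  }
}

export async function detectCloudProvider(): Promise<Provider> {
  if (await probe('http://169.254.169.254/latest/meta-data/')) return 'aws';
  if (await probe('http://metadata.google.internal/computeMetadata/v1/', { 'Metadata-Flavor': 'Google' })) return 'gcp';
  if (await probe('http://169.254.169.254/metadata/instance?api-version=2021-02-01', { Metadata: 'true' })) return 'azure';
  return 'unknown';
}
```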
That said, IMHO neither of these two items represents an official/stable way to learn about the environment.
@elastic/kibana-core I think this warrants further discussion around:

- What an official API for this would look like (e.g. provided via `PluginInitializerContext` with the other environment info, or exposed from the `cloud` plugin?), and
- How we'd go about collecting this information (as part of the "getting started" initiative there were plans to try to collect this info in the cloud UI when a deployment is spinning up, and then agree on a way to hand that data off to Kibana, so getting more clarity on those plans is likely the first step).

Thank you @lukeelmers for such a good summary of the existing methods! I'd vote for building an API inside the `cloud` plugin that would allow us to provide such info.
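To make that vote concrete, such a contract could look roughly like the sketch below; the `CloudSetup` shape and `deploymentType` field are hypothetical, not the actual `cloud` plugin API:

```typescript
// Hypothetical contract exposed by the `cloud` plugin -- a sketch, not Kibana's API.
export type ElasticDeploymentType = 'ess' | 'ece' | 'eck' | 'self-managed';

export interface CloudSetup {
  /** True when Kibana runs on any Elastic-managed offering. */
  isElasticManaged: boolean;
  /** Which Elastic offering, if any, hosts this deployment. */
  deploymentType: ElasticDeploymentType;
}

// A consumer such as Fleet would declare `cloud` as an (optional) plugin
// dependency and branch on the contract during setup:
export class FleetPlugin {
  public setup(_core: unknown, plugins: { cloud?: CloudSetup }) {
    if (plugins.cloud?.deploymentType === 'ess') {
      // apply the ESS-specific defaults (e.g. the apm-server allowlist)
    }
  }
}
```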
> The monitoring plugin also has an old cloud "detector" which attempts to determine known cloud services by hitting their APIs and seeing if they get a response, ...

Please be aware that this cloud detector is not limited to Elastic Cloud options. It will detect if an on-premise instance is running on any cloud provider. I don't know if this info is required by Fleet in this case, @ruflin?
We don't need information about any Cloud providers other than ECE and ESS. One thing that could become of interest in the future is knowing if we run on ECK.
@afharo there are some discussions around broadening the capabilities or ownership of the `cloud` plugin to include some additional navigation changes for Kibana in Cloud, and likely RBAC associated with those (cc: @ryankeairns @kobelb). I'm wondering if we should decouple this implementation from the `cloud` plugin if its intended use is to understand the general deployment environment, whether it's self-managed, ECE, ECK, or Cloud. I would imagine that, beyond the Cloud provider the cluster/deployment is hosted on, there are other primitives (using @thesmallestduck's terminology 😉) we want to understand better than we do today. Maybe we create a new `cluster_profile` plugin that we can start building off of?
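Purely as a strawman, a `cluster_profile` contract might aggregate those primitives like this; the field names are assumptions, not an agreed design:

```typescript
// Hypothetical `cluster_profile` plugin contract aggregating the
// "primitives" mentioned above. Field names are illustrative only.
export interface ClusterProfile {
  /** Underlying infrastructure, detected or reported: 'aws' | 'gcp' | 'azure' | ... */
  cloudProvider?: string;
  /** Elastic-managed offering, if any. */
  elasticOffering?: 'ess' | 'ece' | 'eck';
  /** Whether the deployment is self-managed rather than orchestrated. */
  selfManaged: boolean;
}
```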
> The monitoring plugin also has an old cloud "detector" which attempts to determine known cloud services by hitting their APIs and seeing if they get a response, however this appears to be unused (I'm planning to look into it to see if it still actually works).
@lukeelmers this was the original intent of opening this issue: getting this information back so we can have a better understanding of our overall footprint on different Cloud providers, not just deployments using the stack directly through our Cloud service or the marketplace, but also self-managed instances.
> How we'd go about collecting this information (as part of the "getting started" initiative there were plans to try to collect this info in the cloud UI when a deployment is spinning up, and then agree on a way to hand that data off to Kibana, so getting more clarity on those plans is likely the first step)
I think we'll have an easier time understanding this information if it's coming from our own Cloud environment; I'm not sure how it would work for deployments that are spun up in other Cloud providers' marketplaces. @linyaru and @osmanis might be able to shed some light here. Either way, a consistent mechanism for reporting back the provider (if any), regardless of deployment mechanism, will help answer questions from a variety of teams.
Based on all the comments, for the sake of separation of concerns, I think it makes sense to have a few sets of APIs:

- One to detect the underlying cloud provider: `aws`, `gcp`, `azure`, `alibaba`, etc., no matter if it's running on ESS/ECK/ECE or on-premises. It can reuse the existing old cloud "detector" logic.
- Another to report the Elastic-managed offering: `ece`, `ess`, `eck`.

What do you think?
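Sketching that separation as two independent entry points (the names and signatures are hypothetical):

```typescript
// Two deliberately separate APIs, per the separation-of-concerns argument.

// 1) Infrastructure detection: async, because it probes metadata endpoints.
export async function getCloudProvider(): Promise<'aws' | 'gcp' | 'azure' | 'alibaba' | 'unknown'> {
  // ...reuse the old monitoring "detector" probing logic here...
  return 'unknown';
}

// 2) Elastic offering: sync, because the orchestrator states it via config
//    rather than Kibana having to discover it.
export function getElasticOffering(config: { deploymentType?: string }): 'ece' | 'ess' | 'eck' | 'self-managed' {
  const t = config.deploymentType;
  return t === 'ece' || t === 'ess' || t === 'eck' ? t : 'self-managed';
}
```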
> this was the original intent of opening this issue
A bit confused - the original intent of this issue? There's a separate issue in the telemetry repo to re-enable telemetry for cloud providers using that logic, which is the one I was referring to.
> I think we'll have an easier time understanding this information if it's coming from our own Cloud environment; I'm not sure how it would work for deployments that are spun up in other Cloud providers' marketplaces.
If the end goal is to make this work for marketplace deployments as well, then the "technology picker" which has been discussed isn't really going to help us if it is only implemented in Cloud... in that case we'd need it Kibana-side as well, or we'd need to pursue an approach similar to the old monitoring implementation where we attempt to discover that info ourselves.
@afharo I agree with your overall assessment that it might make sense to separate out where we are collecting this data, even if we end up exposing a generalized API later that gathers a "cluster profile" together in one place.
The next steps I see here are:

- Getting more clarity on the Cloud-side plans for collecting this info (including `ECK`, which was mentioned)
- Figuring out how to detect `ESS` and `ECE` specifically to unblock Fleet.

@ruflin When do you expect this to become a blocker for you?

[Edit, pinged the wrong Nicolas, sorry!]
> A bit confused - the original intent of this issue? There's a separate issue in the telemetry repo to re-enable telemetry for cloud providers using that logic, which is the one I was referring to.

That's what I get for answering an issue before my morning coffee ☕. Apologies for the confusion, everyone. Happy to help with a concrete list of items outside of Fleet's needs.
@ruflin do you mind elaborating on the differences in behavior that we expect from Fleet when it's running in ECE/ESS? I think we're jumping the gun here and immediately talking about implementing something without the proper context.
The initial reason this issue was opened is that, for example, apm-server on ECE and ESS has different defaults (allowlist/blocklist) for its config options. As these config options now move to Fleet, I was thinking we would need to differentiate in Fleet. Another discussion was around default ports, which might be different.

But a separate discussion has evolved internally over the last few days where we "challenge" whether the behavior should actually be different at all, and, if there is a difference, whether it should be part of the Kibana config or loaded through the Fleet API. This would remove the need for Kibana to know the environment and would mean it is up to the "environment" to put in the correct configuration or make the API calls.
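A minimal sketch of that config-driven alternative, assuming a hypothetical Fleet setting and a simplified stand-in for Kibana's plugin-config accessor (the key name is illustrative, not a real Fleet setting):

```typescript
import { firstValueFrom, Observable } from 'rxjs';

// Hypothetical setting the orchestrator (ECE/ESS) would write into kibana.yml:
//
//   xpack.fleet.apmServerAllowedConfigOptions: ["..."]   # illustrative key
//
interface FleetEnvConfig {
  apmServerAllowedConfigOptions: string[];
}

export class FleetPlugin {
  constructor(
    // Simplified stand-in for Kibana's PluginInitializerContext config accessor.
    private readonly initializerContext: { config: { create<T>(): Observable<T> } }
  ) {}

  public async setup(): Promise<string[]> {
    const config = await firstValueFrom(this.initializerContext.config.create<FleetEnvConfig>());
    // The environment supplied the right defaults; no ESS/ECE detection needed.
    return config.apmServerAllowedConfigOptions;
  }
}
```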
Let's put this issue on hold, as maybe we have found another path here. Sorry for filing this "too early".
In the Remote Clusters app, we would also like to differentiate between ESS and ECE, since users need a different documentation link depending on their environment:

- ESS: https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs.html
- ECE: https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs.html
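For example, once some deployment-type flag exists (the `deploymentType` parameter below is assumed, not an existing API), the link choice is trivial:

```typescript
// Pick the cross-cluster-search docs link per environment; `deploymentType`
// is assumed to come from an environment API like the ones discussed above.
function getCcsDocsLink(deploymentType: 'ess' | 'ece'): string {
  return deploymentType === 'ece'
    ? 'https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs.html'
    : 'https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs.html';
}
```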