elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

Expose serverless Elasticsearch client from core #169674

Open lukeelmers opened 8 months ago

lukeelmers commented 8 months ago

Moving forward, there will be two flavors of the Elasticsearch JS client: the stateful client that exists today which aims to be compatible with APIs that are shared between stateful and serverless, and a stateless client which will be able to talk to ES APIs that are only available in serverless Elasticsearch.

Currently Kibana isn't using any Serverless-only ES APIs that would require the stateless client, but it is expected that we will have a need for this over time.

We need to define and implement a strategy for handling these two different clients from within core.

Broadly there are at least two directions we could take here:

  1. Expose two different clients from the Elasticsearch service. This means plugins would need to know when to select one client vs the other. Since (I think) the plan is for the serverless client to be a superset of the stateful client, there could be some risk of duplication here where we have the same APIs exposed by both clients... this is something we'd need to confirm with the clients team.
  2. Attempt to hide the fact that we are dealing with two different clients and continue exposing a single Elasticsearch service from core. In classic Kibana we'd use the current client, and in serverless we'd use the new stateless one which would provide access to additional APIs beyond what's available in the current client. Since this would be an implementation detail of core, plugins would not necessarily need to know the difference: Serverless-only plugins could assume serverless APIs are available, but wouldn't need to care about using a particular client.
elasticmachine commented 8 months ago

Pinging @elastic/kibana-core (Team:Core)

pgayvallet commented 8 months ago

Sharing my opinion here:

Expose two different clients from the Elasticsearch service. This means plugins would need to know when to select one client vs the other.

This is imho the correct design, but I think it's important to note that we would need a common interface for the subset (intersection) of the APIs that are exactly the same between the two environments. Otherwise it will become a nightmare for developers (which is why this is something that needs to be discussed/confirmed with the client team as soon as we can, imho).

So if we assume we have 3 interfaces total (don't mind the names):
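A possible sketch of that trio (only the `CommonClient`, `TraditionalClient`, and `ServerlessClient` names come from this thread; the method names are placeholders, not real Elasticsearch APIs):

```typescript
// Hypothetical sketch of the three interfaces; method names are placeholders.
interface CommonClient {
  someCommonApi(): Promise<string>;
}

// Stateful/traditional client: everything common, plus traditional-only APIs.
interface TraditionalClient extends CommonClient {
  someTraditionalOnlyApi(): Promise<string>;
}

// Serverless client: everything common, plus serverless-only APIs.
interface ServerlessClient extends CommonClient {
  someServerlessApi(): Promise<string>;
}

// Either specialized client is assignable to CommonClient:
const serverless: ServerlessClient = {
  someCommonApi: async () => 'common',
  someServerlessApi: async () => 'serverless-only',
};
const asCommon: CommonClient = serverless;
```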

That way, most usages of the client could be done without needing to know the current environment. Developers can just use the CommonClient interface in their method signatures, e.g.

const client: CommonClient = core.elasticsearch.getClient(); // returns a `TraditionalClient` - note that `getClient` would likely be deprecated in favor of `getTraditionalClient`

// do anything in any env given the interface only has APIs compatible with both envs.
client.someCommonApi()

And then use either of the specialized interface when they know they're facing a specific env:

const client = core.elasticsearch.getServerlessClient() // returns a `ServerlessClient`

// I did not force cast to CommonClient, so I can use common + serverless APIs
client.someServerlessApi()

Just for sugar, we could even introduce a third API, getCommonClient, which would return any of the implementations (it doesn't matter, given all would support the APIs from the common interface) as a CommonClient:

const client = core.elasticsearch.getCommonClient() // returns a `CommonClient`

// can only use common APIs (or at least, only common APIs are defined on the interface)
client.someCommonApi()
  2. Attempt to hide the fact that we are dealing with two different clients and continue exposing a single Elasticsearch service from core. In classic Kibana we'd use the current client, and in serverless we'd use the new stateless one which would provide access to additional APIs beyond what's available in the current client.

So, just to make sure I understand, it would mean having a single interface and a single implementation (right?), with all APIs, both serverless and traditional, and letting plugin owners have the responsibility of knowing when it's safe to call serverless-only and/or traditional-only APIs.

The "and a single implementation" part is important imho, as otherwise it seems very error-prone for developers (getting a null pointer exception when trying to call client.someApiNotAvailableInMyEnv() isn't the same as having the request sent and ES returning an error because it doesn't support it).

If we can have a single implementation for this, then even if it's less elegant, I agree that it could be an option. In the end, developers will have to know whether they are in a serverless or traditional environment anyway when manipulating env-specific APIs, so maybe the way is to just expose everything from a single interface (which is technically what we're doing today: we have a single client exposing some APIs that only work against traditional ES).

afharo commented 1 month ago

It looks like now we have a Serverless-specific ES client to use: https://github.com/elastic/elasticsearch-serverless-js

We might be able to take this task now.

pgayvallet commented 1 month ago

It looks like now we have a Serverless-specific ES client to use. We might be able to take this task now.

The initial idea was to be able to use a common interface and 2 specialized ones, as described in https://github.com/elastic/kibana/issues/169674#issuecomment-1778837009.

I'm not sure this is the approach that was taken in https://github.com/elastic/elasticsearch-serverless-js, as I don't see anything closely or remotely resembling a "common traditional and serverless client interface" in that package. Which would mean we are back to square one: finding a solution on our own to fit squares into triangles and circles?

afharo commented 1 month ago

Q: Why do we need 3 clients? Common + 2 specialized?

I naively think this is automatically done by TS as in:

type ESClient = ClassicESClient | ServerlessESClient;

async function myFunctionThatCallsES(esClient: ESClient) {
  await esClient.aCommonAPI(); // TS is OK with this

  await esClient.aServerlessSpecificAPI(); // TS complains because it might not exist if `ESClient` is `ClassicESClient`

  if (isServerlessClient(esClient)) {
    await esClient.aServerlessSpecificAPI(); // TS is now happy
  }
}
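The `isServerlessClient` guard in the snippet above isn't defined anywhere yet. Assuming both packages export a `Client` class (as they do today), it could be a one-line `instanceof` check. A sketch with stand-in classes, not the real package exports:

```typescript
// Stand-in classes; the real ones would be the Client classes exported by
// @elastic/elasticsearch and @elastic/elasticsearch-serverless.
class ClassicESClient {
  async aCommonAPI(): Promise<string> { return 'common'; }
}
class ServerlessESClient {
  async aCommonAPI(): Promise<string> { return 'common'; }
  async aServerlessSpecificAPI(): Promise<string> { return 'serverless'; }
}
type ESClient = ClassicESClient | ServerlessESClient;

// Narrows the union so serverless-only APIs type-check after the guard.
function isServerlessClient(client: ESClient): client is ServerlessESClient {
  return client instanceof ServerlessESClient;
}
```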

Am I missing any point?

afharo commented 1 week ago

I was taking a look at the serverless client, and I noticed one important gotcha:

  import { Client as TraditionalClient } from '@elastic/elasticsearch';
  import { Client as ServerlessClient } from '@elastic/elasticsearch-serverless';

  const ClientConstructor = flavor === 'serverless' ? ServerlessClient : TraditionalClient;
  const client = new ClientConstructor({ ...clientOptions });

  client.cluster.stats(); // TS claims that the API does not exist
  if (client instanceof TraditionalClient) {
    client.cluster.stats(); // TS is now happy
  }

This is exactly as we expected... However... the truth is that the API is supported by Serverless, but only for internal clients like Kibana. The Serverless client correctly hides internal-only APIs to avoid customers hitting a dead end when trying to use an API that isn't supported in that offering.

Another example is the request our ES service uses to validate the connectivity and compatibility of the ES nodes:

client.nodes.info({
  filter_path: ['nodes.*.version', 'nodes.*.http.publish_address', 'nodes.*.ip'],
})

We may need to instantiate the traditional client and expose it as a very-internal, "trust me, I know what I'm doing" client to cover these scenarios.

jloleysens commented 1 week ago

The more I read about the current serverless client the less convinced I become that we should be using it directly in Kibana. The tension I'm seeing is in the intentions. The specs that build the serverless clients:

The problem is the current serverless clients only reflect (and implement) the first point.

This means we need some special knowledge of which excluded functionalities are hidden/removed/changed on serverless in order to preserve them. I'm not sure giving access to the traditional client as a "super client" adequately covers this case. It does know about "internal" serverless endpoints (which is "good enough" right now), but it won't know about any changes to their req/res/options, or which are truly removed. [EDIT] So it would be the "trust me I know what I'm doing" client, but we may be fighting TS, and we can already do "super user" calls using the transport directly.

It might be simplest if we (not necessarily Kibana folks) leverage the knowledge in the elasticsearch spec repo to find all APIs that are visibility=private on serverless and extract a "Kibana serverless client". This way we preserve all information about the routes in traditional vs serverless for Kibana.

IMO exposing this as one Client interface might lead to more headaches than it solves. Maybe 1 base (intersection) client and 2 special clients is the best way? 🀷🏻 weakly held opinion.

afharo commented 1 week ago

++ I'd love it if the serverless client provided a way to use the internal APIs (maybe under client.internal.* to make it more obvious?). Maybe @JoshMock knows about any plans to include these in some way?

At the same time, I'm not sure having elasticsearch-serverless-js to expose these internal APIs is a good thing because it sort of invites folks to use them (there would be nothing stopping Kibana plugins or any other apps from calling elasticsearch.client.asCurrentUser.internal.hiddenAPI()). Ideally, these APIs would only be available in elasticsearch.client.asInternal because, AFAIK, those APIs are only available for the Kibana user. Maybe an explicit elasticsearch-serverless-internal-js is necessary here? Alternatively, we could expose asCurrentUser as Omit<ServerlessClient, 'internal'>, but that wouldn't solve it for customers using the library.
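The `Omit` idea at the type level could look like this (a sketch; the `internal` namespace shape and API names are hypothetical):

```typescript
// Hypothetical shape of a serverless client carrying internal-only APIs.
interface ServerlessClient {
  search(): Promise<string>;
  internal: {
    hiddenAPI(): Promise<string>;
  };
}

// What a plugin would see through asCurrentUser: no `internal` namespace.
type PublicServerlessClient = Omit<ServerlessClient, 'internal'>;

const fullClient: ServerlessClient = {
  search: async () => 'hits',
  internal: { hiddenAPI: async () => 'operator-only' },
};

// Note this is type-level hiding only: the property still exists at runtime,
// so it protects developers from accidents, not from determined callers.
const asCurrentUser: PublicServerlessClient = fullClient;
// asCurrentUser.internal.hiddenAPI(); // would be a compile-time error
```

Which matches the caveat above: `Omit` removes the property from the type, not from the object, so it wouldn't help customers using the library directly.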


While we come to an agreement on the best way to expose the internal APIs, I think leveraging the traditional client workaround for them is safer than what we have today: Today, plugins are unaware of the differences, and it's up to the developers to be educated (either by looking at the specs or by trial and error) on whether they can use an API or not. The Serverless client helps with that: failing type checks on unavailable APIs, and forcing developers to take a detour and consciously use the internal (Kibana-bounded-only) client. It's still unsafe on those specific calls, but we are safer on everything else.

We'll definitely need a follow-up on those internal usages to adapt them to an actual client that supports Serverless internal APIs. But I think it's best if we don't delay the introduction of the Serverless client for this reason.

WDYT?

jloleysens commented 1 week ago

It's still unsafe on those specific calls, but we are safer on everything else...WDYT?

Yeah, I see your point. I suppose with the type union we have the base intersection covered. For anything not compatible with the union, you'd have to do type narrowing before using the client. And exposing a bypass client in some way that is trackable does make sense.

Overall I'd still prefer avoiding this work of the "trust me I know what I'm doing client", but perhaps you're right about the immediate benefit being worth it.

afharo commented 1 week ago

Overall I'd still prefer avoiding this work, but perhaps you're right about the immediate benefit being worth it.

I'm working on a PoC to find out if there are other gotchas (for example, if the two clients are not kept in sync, you get a lot of issues in the common APIs). I'll see where it takes me.

JoshMock commented 1 week ago

I'd love it if the serverless client provided a way to use the internal APIs (maybe under client.internal.* to make it more obvious?). Maybe @JoshMock knows about any plans to include these in some way?

There are no plans to include anything internal in the serverless client, as it is intended primarily for public use.

I had a conversation going with @lukeelmers a while back about whether or not a third Kibana-specific version of the client would ever be necessary, especially once a separate serverless client was deemed necessary. We kept putting off the decision because it remained unclear whether the effort was worth it.

From my perspective, it would be ideal to not have to generate even more clients. Whenever I add a feature to the stack client, I have to figure out if/how to port it to serverless. Doing so usually takes an extra day of porting and testing each change. So a third client would increase that per-feature overhead even more.

There are many different ways we could solve this problem, and I'd love to get a better understanding of all your needs regarding internal APIs to see if there's a solution that doesn't add much extra overhead.

pgayvallet commented 1 week ago

I'd love to get a better understanding of all your needs regarding internal APIs to see if there's a solution that doesn't add much extra overhead

Well, simply put, Kibana is by right allowed to use those internal APIs (the kibana user has operator permissions), and is using them in practice (both in Core/Platform functionalities and from some plugins - management plugins coming to mind first as obvious rightful callers).

So the question is: how do we get an interface that truly reflects which Serverless APIs Kibana is allowed to call, with the right API signatures?

Because (and correct me if I'm wrong) atm the divergences between the traditional and serverless clients are just APIs being fully removed from the serverless one, right? So for now, the traditional client is a superset of the serverless one (still right?), meaning in theory we can simply keep using the traditional client's interface for serverless, for now.

However... We assume (and here too, please correct us if we're wrong - we really have no visibility on what you're all intending to do with that serverless client, this discussion being proof of that...) that at some point the divergences will become more significant (params/output property differences - such as the "version" field being removed from stats on serverless - signature mismatches, and so on). At that point, we would no longer be able to safely use the traditional client's interface when performing requests against serverless. But we would still need to be able to perform calls to internal APIs, so using the "public facade" version of the serverless client would still not be viable...

So yeah, I return the question: what should we all do πŸ˜„?

jloleysens commented 1 week ago

Thanks for your response @JoshMock !

There are many different ways we could solve this problem

Are you thinking we could minimise effort by providing the subset of internal serverless APIs Kibana uses? It's tricky to introspect which ES endpoints Kibana uses, not least because current/past usage is not a reliable indication of future usage needs.

Or are there approaches you had in mind?

better understanding of all your needs regarding internal APIs

Ideally we'd have a way for Kibana developers to get a reliable indication of available ES APIs (and their options, once/if they diverge). Leveraging TypeScript is for sure the best way to do this. I think if we can just get to a place where we have the full interface of available APIs for serverless we can build it such that if they ever diverge we can detect it. Perhaps taking some kind of manual action on Kibana's side (as we currently have a precedent for this anyway, but usually as a temporary measure until clients are released).

JoshMock commented 1 week ago

Because (and correct me if I'm wrong) atm the divergences between the traditional and serverless clients are just APIs being fully removed from the serverless one, right?

Also, some properties have been removed from serverless, e.g. force_synthetic_source here. So it's not just that some APIs will be missing; request or response types may also differ. So far it's only been removals, but new properties being added can't be ruled out.

So, it is already the case that you can't trust the stack client's types to accurately represent the same type of request on serverless.

That said, there is an expectation that the stack client should continue to work when pointed at serverless Elasticsearch. Missing APIs will return a 410 Gone, but any changes in request/response types would be your responsibility to account for at runtime, which I understand could get a little ugly.

My current idea is to provide a separate library for Kibana that does not include its own client implementation, but rather patches client.prototype with any missing APIs at runtime. Getting type definitions to work at dev/build time might take some effort to get working correctly, but that solution would avoid most of the overhead of maintaining a third client.
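Runtime prototype patching plus declaration merging could look roughly like this (a sketch with a stand-in class; a real package would patch the actual serverless `Client` and route the call through its transport layer):

```typescript
// Stand-in for the serverless Client class (the real one comes from
// @elastic/elasticsearch-serverless and would be patched the same way).
class Client {
  async search(): Promise<string> { return 'hits'; }
}

// Declaration merging: the patch package declares the added APIs so the
// IDE and compiler can see them on existing Client instances.
interface Client {
  clusterStats(): Promise<string>;
}

// Runtime patch: attach the missing API to the prototype when imported.
Client.prototype.clusterStats = async function (): Promise<string> {
  // A real implementation would go through the client's transport layer.
  return 'green';
};
```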

jloleysens commented 1 week ago

but rather patches client.prototype with any missing APIs at runtime. Getting type definitions to work at dev/build time might take some effort to get working correctly, but that solution would avoid most of the overhead of maintaining a third client

This would be awesome! Are you imagining for this patching logic to live in Kibana?

JoshMock commented 4 days ago

Are you imagining for this patching logic to live in Kibana?

It doesn't necessarily have to. A standalone package could in theory just add functions to Client.prototype when it's required. As long as the IDE can pick up those new APIs' type definitions that's all it would take.

jloleysens commented 3 days ago

It doesn't necessarily have to.

Got it. I was thinking of any reasonable measure that'll help remove the burden of maintaining the "Kibana client". How much effort would this be? Happy to meet and chat about possible approaches πŸ˜„

JoshMock commented 3 days ago

Going to meet with @jloleysens and @afharo next week about Kibana internal API support. :rocket: