Open mshustov opened 5 years ago
Pinging @elastic/kibana-platform
Pinging @elastic/kibana-security
What core services need to support this "scoping" mechanic? Obviously, it works for elasticsearch and other services that wrap it, like SavedObjects. Probably there is something else that I missed.
I can't think of anything that doesn't end up going through the elasticsearch service that needs this ability.
What should be considered as a valid input for asScoped()? Are those use cases for FakeRequest going to stay in the long term? Should we use the Request term at all? Probably we could rename it to something less confusing, for example OwnerCredentials, to emphasise that something with a scope-able interface is a valid input.
As far as I know, they exist because we don't have API Keys for background tasks and we're essentially emulating this behavior. Once we start using API keys, we'll need some mechanism of supplying this API key when using the Elasticsearch client, similar to the "fake request". Renaming it to something like OwnerCredentials makes sense to me.
Will the Alerting plugin use the same FakeRequest pattern to access data?
Once we get API keys, they will be doing so. We aren't able to use hacky solutions like what Reporting does now because of limitations with the token-based auth providers.
One other thing we might have to take into consideration is that we currently support an elasticsearch.requestHeadersWhitelist setting, which allows administrators to choose arbitrary headers which should be forwarded to Elasticsearch. The only usage of this that I'm aware of is when users hand-roll their own authentication using a proxy, similar to the approach described here: https://www.elastic.co/blog/user-impersonation-with-x-pack-integrating-third-party-auth-with-kibana
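For illustration, applying such a whitelist before forwarding headers to Elasticsearch boils down to something like the following sketch (the filterHeaders helper is made up for this example, not Kibana's actual implementation):

// Illustrative only: the helper name and shapes are assumptions, not Kibana's real code.
type Headers = Record<string, string | string[] | undefined>;

// Keep only the headers an administrator listed in elasticsearch.requestHeadersWhitelist
// (matching is case-insensitive, as HTTP header names are).
function filterHeaders(requestHeaders: Headers, whitelist: string[]): Headers {
  const allowed = new Set(whitelist.map((name) => name.toLowerCase()));
  return Object.fromEntries(
    Object.entries(requestHeaders).filter(([name]) => allowed.has(name.toLowerCase()))
  );
}

// e.g. filterHeaders(request.headers, ['authorization', 'x-forwarded-for'])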
Not much to add here from my side, just one note:
This check relies heavily on Security, which authenticates a user via the elasticsearch API and sets the authorization header accordingly.
The authorization header is just a specific case of a more general authHeaders header dictionary that we may need to pass to Elasticsearch with the request to authenticate the user. These are defined by the handler registered through registerAuth and represent a part of the "scope" of the "scoped" request to Elasticsearch (we call it callAsCurrentUser right now).
It so happens that the scope consists mainly of authentication information right now (plus anything the UA sent to Kibana that's allowed by elasticsearch.requestHeadersWhitelist), but I can't see any reason why we may not want to introduce other things that will contribute to the "scope" as well (e.g. something very random/nonsensical like a handler specified in registerTelemetry that adds telemetry headers to every request, just for the sake of example).
I mention this because I think that if we want to override just one header (or one specific part of the "scope") we can't just do something like this:
asScoped({headers: {authorization}}).callAsCurrentUser()
We'll need to preserve any other headers that could have been added to the "scope":
asScoped({headers: {...originalRequest.headers, authorization }}).callAsCurrentUser()
We're doing this in the "authc in NP" PR right now, and if we have just a few places where we redefine the scope that way, it's not a big deal. But if this boilerplate code ends up all over the place, I'd rather reconsider our decision to forbid scope-header overrides on a call basis, e.g. allow this call even if we override the authorization defined by one of the "scope"-forming interceptors:
scopedClientThatCameFromNPHandler.callAsCurrentUser(...., { headers: {authorization }})
But if this boilerplate code ends up all over the place, I'd rather reconsider our decision to forbid scope-header overrides
We didn't forbid them.
Before https://github.com/elastic/kibana/pull/34610: if the ES client was scoped to a request, other headers passed directly in callAsCurrentUser were ignored.
After https://github.com/elastic/kibana/pull/34610: headers passed in callAsCurrentUser aren't ignored anymore, but the scoped request headers take precedence over them.
elasticsearch-js has the opposite logic, but aligning our code with it means we introduce breaking changes.
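To make the precedence difference concrete, here is a minimal sketch (the merge function names are illustrative, not the actual ClusterClient internals): with the current behavior the headers derived from the scoped request win, whereas elasticsearch-js-style merging lets the per-call headers win.

// Illustrative sketch only.
type Headers = Record<string, string | string[] | undefined>;

// Current NP behavior after #34610: per-call headers are forwarded,
// but the scoped request headers take precedence on conflict.
const mergeCurrent = (scopedHeaders: Headers, callHeaders: Headers): Headers => ({
  ...callHeaders,
  ...scopedHeaders,
});

// elasticsearch-js-style: per-call options override the client defaults.
const mergeEsJsStyle = (scopedHeaders: Headers, callHeaders: Headers): Headers => ({
  ...scopedHeaders,
  ...callHeaders,
});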
If we are planning to switch to elasticsearch-js, we can start adopting ElasticsearchService already now, while still preserving the current behavior for the legacy platform via the existing adapter.
We didn't forbid them.
I meant we forbid overriding them (the default headers passed through asScoped) on a call basis.
elasticsearch-js has the opposite logic, but aligning our code with it means we introduce breaking changes.
I don't think it'd be a breaking change, it seems we never backported #34610 to 7.x.
I don't think it'd be a breaking change, it seems we never backported #34610 to 7.x.
Now in v7: if the ES client was scoped to a request, other headers passed directly in callAsCurrentUser were ignored.
If we adopt the elasticsearch-js logic and backport it to 7.x: if the ES client was scoped to a request, other headers passed directly in callAsCurrentUser override the scoped request headers.
In order to ease migration to the NP, I'd rather keep the current behavior of not allowing these headers to be overridden.
We can change this behavior when we introduce the new elasticsearch-js client alongside the legacy one in #35508. We plan to do this in 7.5. This breaks up the migration process for existing plugins so they can deal with this change (and any other breaking changes) all at once.
@Bamieh I am curious if #34610 was not backported to 7.x intentionally. If this was intentional, I am worried about the inconsistency between master and 7.x.
@joshdover it was not intentional. Backported here: https://github.com/elastic/kibana/pull/40429
When Security migrated to the NP, they stumbled upon a requirement that FakeRequest has to emulate the KibanaRequest interface: https://github.com/elastic/kibana/pull/46145/commits/e0c89810dd7da0131beaef99cc2788aaa98859d3#r339997510
It stems from SavedObjects allowing plugins to add a client wrapper factory.
And now it becomes a plugin responsibility to handle all possible scope-able interfaces: KibanaRequest, Legacy.Request, FakeRequest.
Also, SavedObject service doesn't specify possible types for a Request, so plugins don't even know what they can receive as an argument.
There are 2 possible solutions:
1. Introduce an explicit interface for scope-able requests; provide something similar to RequestFacade from the core. https://github.com/restrry/kibana/blob/b9a43a1788486c0d064cbe7aaff1fc7e987f7dfa/x-pack/plugins/spaces/server/spaces_service/spaces_service.ts#L20
2. Forbid FakeRequest usage and provide an API allowing plugins to create a KibanaRequest with configured headers.
I'm discovering this part of the code, so please tell me if I'm saying anything wrong.
And now it becomes a plugin responsibility to handle all possible scope-able interfaces: KibanaRequest, Legacy.Request, FakeRequest
I will not talk about the 'legacy' requests here, as they should become core internal implementation details post 8.0.
Also, SavedObject service doesn't specify possible types for a Request, so plugins don't even know what they can receive as an argument.
This is only true with the legacy SO service. The NP one expects a KibanaRequest (even if it effectively works with other types when force-casted).
forbid FakeRequest usage and provide API allowing plugins to create KibanaRequest with configured headers.
This one seems tricky with the current implementation, as we are keeping a reference to the original request inside KibanaRequest. We make the assumption that this is the actual original HAPI request, and rely on it in some places.
Also, the current implementation to retrieve the auth headers is based on the fact that the http server / auth hook keeps track of them (this.getAuthHeaders retrieves the headers from HttpServer.authRequestHeaders, populated by the auth hook).
With the current implementation, a 'cloned' KibanaRequest ({ ...request }) will not provide the same headers as its source when used to create a scopedClient, as the 'header registry' (authRequestHeaders) will not be aware of the clone's reference.
EDIT: this is false, as the reference to the hapi request will be preserved and is used as the registry key.
As the behavior between actual requests 'received' by the http server and any other kind of request differs (see the this.getAuthHeaders(request) part), I think I'm leaning toward @restrry's first proposal, and introducing an explicit interface for fake requests. Would love to avoid this name though.
Maybe
interface AuthorizationContext {
  headers: Headers;
}
and then adapt the ES/SO(UI?) APIs to accept KibanaRequest | AuthorizationContext
Another option would be to have the 'request' be the single source of truth regarding authz headers. This would mean getting rid of httpServer.authRequestHeaders and injecting the headers into the actual request instead, to be able to retrieve them from there.
I would use a slightly different interface in that case:
interface AuthorizationContext {
  getAuthorizationHeaders: () => Headers;
}
This way, we could have the ES/SO APIs only accept the new AuthorizationContext interface.
KibanaRequest would simply implement it with what is currently done in ClusterClient.getHeaders, and creating a 'fake' request implementing this interface would be explicit and easy. Also, the non-override issue is preserved, as the getAuthorizationHeaders implementation is internal to KibanaRequest.
We would have a fallback mechanism for legacy until 8.0, by checking the presence of getAuthorizationHeaders on the input parameter and falling back to the 'legacy' (current) implementation if it is not present.
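A sketch of how that could fit together (only the AuthorizationContext interface comes from this comment; the helper below and the fallback check are assumptions for illustration, not actual Kibana APIs):

// Sketch of the proposal above; not actual Kibana APIs.
type Headers = Record<string, string | string[] | undefined>;

interface AuthorizationContext {
  getAuthorizationHeaders: () => Headers;
}

// A 'fake' request becomes an explicit, minimal implementation of the interface,
// while KibanaRequest would implement it internally (as ClusterClient.getHeaders does today).
const fakeContext: AuthorizationContext = {
  getAuthorizationHeaders: () => ({ authorization: 'ApiKey <base64 of id:api_key>' }),
};

// ES/SO clients could accept the union and fall back to the legacy behavior
// when the new method is not present on the input parameter.
function resolveScopeHeaders(ctx: AuthorizationContext | { headers: Headers }): Headers {
  return 'getAuthorizationHeaders' in ctx ? ctx.getAuthorizationHeaders() : ctx.headers;
}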
As far as I know, they exist because we don't have API Keys for background tasks and we're essentially emulating this behavior. Once we start using API keys, we'll need some mechanism of supplying this API key when using the Elasticsearch client, similar to the "fake request". Renaming it to something like OwnerCredentials makes sense to me.
@kobelb It appears that Alerting is now using API keys. Is there any reason we can't or shouldn't change the Upgrade Assistant and Reporting code to use API keys as well?
Another option would be to have the 'request' be the single source of truth regarding authz headers. This would mean getting rid of httpServer.authRequestHeaders and injecting the headers into the actual request instead, to be able to retrieve them from there.
From what I understand the original reason for having the indirect way of getting auth headers was so that we could hide all of that information from plugin code. We have had at least one security vulnerability in the past where a plugin exposed auth credentials to other users. Making this impossible would be ideal.
If we can leverage API keys instead, I believe some of the FakeRequest concerns go away. All the plugins that need to use Elasticsearch out-of-band from an HTTP request can source API keys from the original requester. The scoped Elasticsearch client then doesn't need to worry about FakeRequest and can instead accept KibanaRequest | ApiKey (where ApiKey is a string) on the asScoped method.
class Plugin {
  private readonly apiKeys = new Map<string, CreateApiKeyResult>();

  setup(core, plugins) {
    core.http.createRouter().post(
      { path: '/start_job/' },
      async (context, req, res) => {
        const jobId = String(Date.now()); // demo-only job id
        // Note this could be stored in ES if using encrypted saved objects
        this.apiKeys.set(jobId, await plugins.security.createApiKey(req));
        /* ... */
      }
    );
  }

  start(core) {
    const esClient = core.elasticsearch.dataClient;
    // Very naive polling mechanism for demonstration purposes
    setInterval(async () => {
      const job = await getJob(esClient);
      const apiKeyResult = this.apiKeys.get(job.id);
      // TODO: check api key exists and is not expired
      // Use API key
      const response = await esClient
        .asScoped(apiKeyResult.api_key)
        .callAsCurrentUser(/* args as normal */);
      // TODO: update job
    }, 10000);
  }
}
@kobelb It appears that Alerting is now using API keys. Is there any reason we can't or shouldn't change the Upgrade Assistant and Reporting code to use API keys as well?
API Keys are only available when Kibana is communicating with Elasticsearch over TLS. This hasn't been a requirement to use Reporting, so we can't make this switch in a minor version.
From what I understand the original reason for having the indirect way of getting auth headers was so that we could hide all of that information from plugin code. We have had at least one security vulnerability in the past where a plugin exposed auth credentials to other users. Making this impossible would be ideal.
correct. we need to restore this logic https://github.com/elastic/kibana/issues/55670
Also current implementation to retrieve the auth headers is based on the fact that the http server / auth hook keep track of them
getHeaders falls back to the request headers if getAuthHeaders hasn't got any headers associated with an incoming request. It's required to cover the next case:
One other thing we might have to take into consideration is that we currently support an elasticsearch.requestHeadersWhitelist, which allows administrators to choose arbitrary headers which should be forwarded to Elasticsearch. The only usage of this that I'm aware of is when users hand-roll their own authentication using a proxy, similar to the approach described here: https://www.elastic.co/blog/user-impersonation-with-x-pack-integrating-third-party-auth-with-kibana
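Put differently, the fallback described here amounts to roughly this sketch (function and parameter names are illustrative, not the real internals): use the auth headers the auth hook recorded if there are any, otherwise pass through the (whitelisted) headers the client sent, which is what keeps the proxy-based impersonation setup working.

// Illustrative sketch of the fallback described above.
type Headers = Record<string, string | string[] | undefined>;

function getScopedHeaders(requestHeaders: Headers, authHeaders: Headers | undefined): Headers {
  // Prefer the headers recorded by the auth hook for this incoming request...
  if (authHeaders && Object.keys(authHeaders).length > 0) {
    return authHeaders;
  }
  // ...otherwise fall back to the whitelisted headers from the incoming request
  // (e.g. when authentication is handled by a proxy in front of Kibana).
  return requestHeaders;
}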
AuthorizationContext
Another option could be OwnerCredentials, or even just Credentials.
and then adapt the ES/SO(UI?) APIs to accept KibanaRequest | AuthorizationContext
Do we need to separate them because the Spaces & Security plugins require a KibanaRequest to work? AFAIK it's required for
https://github.com/elastic/kibana/blob/4a41c42ea574fe9b3638453b410fc32cc8a63bc1/x-pack/plugins/spaces/server/spaces_service/spaces_service.ts#L62
https://github.com/elastic/kibana/blob/4a41c42ea574fe9b3638453b410fc32cc8a63bc1/x-pack/legacy/plugins/reporting/export_types/common/execute_job/get_custom_logo.ts#L27
Any other examples? Probably we can refactor this code and narrow KibanaRequest | AuthorizationContext down to AuthorizationContext?
The Scoped Elasticsearch client then doesn't need to worry about FakeRequest and can instead accept KibanaRequest | ApiKey
Just to note: FakeRequest uses an ApiKey to create an AuthorizationContext-compatible interface:
https://github.com/elastic/kibana/blob/4a41c42ea574fe9b3638453b410fc32cc8a63bc1/x-pack/legacy/plugins/alerting/server/task_runner/task_runner.ts#L64
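For reference, such an ApiKey-based fake request boils down to building the standard Elasticsearch ApiKey authorization header; a sketch (the exact object Alerting builds is in the linked task_runner.ts, this is only the general shape):

// Elasticsearch accepts `Authorization: ApiKey base64(id:api_key)`.
function apiKeyAuthorizationHeader(id: string, apiKey: string): string {
  return `ApiKey ${Buffer.from(`${id}:${apiKey}`).toString('base64')}`;
}

// A minimal fake request / AuthorizationContext-compatible object:
const fakeRequest = {
  headers: { authorization: apiKeyAuthorizationHeader('<id>', '<api_key>') },
};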
For what it's worth, I'd much prefer we find an alternative to what is being done here https://github.com/elastic/kibana/blob/4a41c42ea574fe9b3638453b410fc32cc8a63bc1/x-pack/plugins/spaces/server/spaces_service/spaces_service.ts#L62, @legrego can you provide some context of why this was required?
The two pieces of information from the HTTP request which the security and spaces plugins depend upon are the "base path", as this allows us to determine the current space transparently to end-users, and the Authorization headers, as this is how we later authenticate users against Elasticsearch.
I don't want to interject yet another opinion in regard to the specifics of how we want to implement this if it's not helpful. However, if I can be of assistance or you'd prefer I take a different role than commenting on our general needs, please just let me know.
disclaimer: I haven't caught up on the conversation in this issue yet
For what it's worth, I'd much prefer we find an alternative to what is being done here https://github.com/elastic/kibana/blob/4a41c42ea574fe9b3638453b410fc32cc8a63bc1/x-pack/plugins/spaces/server/spaces_service/spaces_service.ts#L62, @legrego can you provide some context of why this was required?
++ I also want to find an alternative here. We put this logic in place when migrating Spaces to the NP in order to maintain compatibility with Alerting and Reporting, which at the time were still entirely in the LP IIRC, and therefore relied on the LP convention of retrieving the base path, which is via request.getBasePath().
Since these background/async interactions don't have a true HTTP request associated with them, they end up crafting this fake request. Prior to the NP conversion efforts, the way to fake the request was to create an object with the getBasePath function, along with a couple of other properties.
An update from the alerting side. We're using API keys to get scoped elasticsearch and saved objects clients. Some implementation examples can be found here and here.
We've been dealing with the issue of the user needing the manage_api_key cluster privilege to create API keys. We're removing that requirement in 7.7 with this PR, where Kibana can now create API keys for any user. There still remains the requirement of TLS communication with Elasticsearch to use such a feature. The alerting APIs, UIs, etc. are disabled whenever security is on and TLS is off, until (I believe) 8.0, where TLS will be required to use security.
We're starting to see a pattern where developers integrating with alerting want access to the fake request we use so they can use their own scoped clients (example: https://github.com/elastic/kibana/issues/62886). I'm trying to figure out the best approach for 7.8, whether it's to expose the request we use or to simply provide their scoped client for now. Looking for some thoughts?
https://github.com/elastic/kibana/issues/39430#issuecomment-552929519 forbid FakeRequest usage and provide API allowing plugins to create KibanaRequest with configured headers.
I don't have full context on this issue, but a random thought (may not be good) that could solve having consistent fake requests (extra headers, base path, etc.) would be to have some sort of snapshot function on the request that would create an API key if security is enabled, capture headers, base path, etc., and return whatever we need to store to be able to create a fake request at a future time. In the alerting context that would work; we can store an object as-is since the mapping is { "type": "binary" } for encryption purposes, so there won't be mapping conflicts. Maybe the storage of this information could be different as well.
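A rough sketch of that snapshot idea (every name below is hypothetical; nothing here is an existing Kibana API):

// Hypothetical shape of what a request "snapshot" could capture.
interface RequestSnapshot {
  apiKey?: { id: string; api_key: string }; // created only when security is enabled
  headers: Record<string, string>;          // any other headers worth preserving
  basePath: string;                         // e.g. the space-aware base path
}

// Capturing the snapshot while the real request is still around:
async function snapshotRequest(
  request: { headers: Record<string, string> },
  basePath: string,
  createApiKey?: (req: unknown) => Promise<{ id: string; api_key: string }>
): Promise<RequestSnapshot> {
  return {
    apiKey: createApiKey ? await createApiKey(request) : undefined,
    headers: request.headers,
    basePath,
  };
}

// Later (e.g. in an alerting task runner) a fake request can be rebuilt from the stored
// snapshot; in the alerting case the snapshot could live in an encrypted saved object
// with a { "type": "binary" } mapping, so there are no mapping conflicts.
function fakeRequestFromSnapshot(snapshot: RequestSnapshot) {
  const authorization = snapshot.apiKey
    ? `ApiKey ${Buffer.from(`${snapshot.apiKey.id}:${snapshot.apiKey.api_key}`).toString('base64')}`
    : undefined;
  return {
    headers: { ...snapshot.headers, ...(authorization ? { authorization } : {}) },
    basePath: snapshot.basePath,
  };
}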
Bumping this as it's about to become a blocker for integrations between ML and Alerting (https://github.com/elastic/kibana/issues/64588#issuecomment-622010446).
In some of their services, ML performs checks on spaces and capabilities, and plans on extending that functionality to more (all?) services in the near future (7.8 was the initial goal).
Those services are relied upon by SIEM for ML Rules, and will also be integral to ML's move from watcher to alerting (/cc @jgowdyelastic).
@mikecote Is this a blocker for the ML and Alerting integration? Are we not able to use any other workarounds (like FakeRequest)?
@mikecote Is this a blocker for the ML and Alerting integration? Are we not able to use any other workarounds (like FakeRequest)?
We can use fake request (which would expose the ApiKey in the headers). We're mostly looking for guidance / confirmation that this is the direction we should go for now?
cc @elastic/kibana-security
We can use fake request (which would expose the ApiKey in the headers). We're mostly looking for guidance / confirmation that this is the direction we should go for now?
"for now" is the key for me. I'm ok with this in the interim, but I'd rather not get into a situation where we have to maintain BWC with a fake request implementation once scope-able clients come around.
"for now" is the key for me. I'm ok with this in the interim, but I'd rather not get into a situation where we have to maintain BWC with a fake request implementation once scope-able clients come around.
Totally agree. Are we planning to make these alerting APIs "stable"-ish for 3rd parties yet? If not, then we can use fakerequest for now and break the API once we introduce scoped clients.
Totally agree. Are we planning to make these alerting APIs "stable"-ish for 3rd parties yet? If not, then we can use fakerequest for now and break the API once we introduce scoped clients.
We're planning to have the server side APIs stable / GA in 7.10.
By exposing a fake request, the best the alerting team could do is making it clear in the documentation what changes are coming and why it's exposed in the meantime. That way developers are aware how much debt they'll take on until 8.0 or something.
This came up in conversation today with @lukeelmers. Some of the initiatives the AppArch team is working on will rely on authenticating via API Keys, so without scope-able clients they'll have to take a similar approach to what Alerting has done with FakeRequests, at least for the time being.
Based on the latest discussion in https://github.com/elastic/kibana/issues/87990#issuecomment-826747625, it sounds like alerting may soon be providing executors access to the FakeRequest to facilitate the alerting-from-discover effort.
In this particular case it is because some of the data plugin's services are now scoped using a KibanaRequest (as they need access to scoped ES clients, SO clients, and authenticated user info). So in order for data services to be used in an alert executor, plugins will need this information.
Un-assigning myself as I'm not actively working on it.
As far as I know, they exist because we don't have API Keys for background tasks and we're essentially emulating this behavior. Once we start using api keys, we'll need some mechanism of supplying this api key when using the Elasticsearch client, similar to the "fake request". Renaming it to something like OwnerCredentials makes sense to me.
Elasticsearch relaxed restrictions on API key usage without HTTPS: https://github.com/elastic/elasticsearch/pull/76801
API keys can now be widely adopted in Kibana. We might need to start formalizing the OwnerCredentials interface.
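One possible strawman for that interface, combining the ideas in this thread (nothing here is an agreed-upon API):

// Strawman only.
type Headers = Record<string, string | string[] | undefined>;

type OwnerCredentials =
  // an API key captured for background work...
  | { apiKey: { id: string; key: string } }
  // ...or raw authorization headers, as FakeRequest carries today
  | { headers: Headers };

// asScoped could then accept KibanaRequest | OwnerCredentials instead of fake requests.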
Every time Kibana needs to get access to data saved in elasticsearch, it should perform a check whether an end user has access to the data.
Regular mode
This check relies heavily on Security, which authenticates a user via the elasticsearch API and sets the authorization header accordingly. Having authenticated the user, Kibana performs all requests to elasticsearch on behalf of the user by attaching authorization headers to an incoming request.
Note: the whole Request object is not required to hit the elasticsearch API, only something with the interface { headers: { authorization: 'token', [...other authorization headers: string]: string } }.
FakeRequest mode
There are a couple of cases when the current model doesn't play well: background work performed when there is no incoming request whose headers.authorization contains valid credentials.
In the Reporting plugin: https://github.com/elastic/kibana/blob/2f029e63d01c8f40ad8eb2ae79870d998c666abc/x-pack/legacy/plugins/reporting/export_types/common/execute_job/decrypt_job_headers.ts#L18
And in Upgrade Assistant: https://github.com/elastic/kibana/blob/4c3dfa3389e90c6b3ab7224c21ae84abb95fce40/x-pack/legacy/plugins/upgrade_assistant/server/lib/reindexing/worker.ts#L162
In all those cases we 'mimic' the Request object, although we only need the authorization headers to be present.
Discuss
What core services need to support this "scoping" mechanic? Obviously, it works for elasticsearch and other services that wrap it, like SavedObjects. Probably there is something else that I missed.
What should be considered as a valid input for asScoped()? Are those use cases for FakeRequest going to stay in the long term? Should we use the Request term at all? Probably we could rename it to something less confusing, for example OwnerCredentials, to emphasise that something with a scope-able interface is a valid input.
Will the Alerting plugin use the same FakeRequest pattern to access data? @mikecote
With FakeRequest we can perform a call and declare dependencies on the passed credentials. On the other hand, the ergonomics suffer as we have to use asScoped and cannot leverage the coming Handler RFC API to extend a request for custom needs.