Closed: tcdowney closed this issue 2 years ago.
The problem here is that the CLI uses a completely different client for Log Cache, so none of the things we did to make the Cloud Controller client work apply here. We could fix this, but is it correct to assume that the Log Cache endpoints will always share the same authentication with the CC ones?
Yeah, they should. The flow in CF for VMs is this: the CLI sends auth to log-cache's cf-auth-proxy. The cf-auth-proxy hits an internal Cloud Controller API (`/internal/v4/log_access/:guid`) and, based on the result of that endpoint, decides whether or not to grant the CLI access to the logs. We'll need to tweak that flow a bit, but in general we need to do something similar based on the user's access to a CFApp (or other access granted through their roles).
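For context, here is a minimal Go sketch of that style of check. It is not the actual cf-auth-proxy code; the Cloud Controller URL, the way the app GUID is obtained, and the handler wiring are all assumptions for illustration.

```go
// Sketch of a cf-auth-proxy style gate: before serving logs, forward the
// caller's token to Cloud Controller's internal log_access endpoint and only
// continue when it says the caller may read logs for that app.
package logauth

import (
	"fmt"
	"net/http"
)

// ccLogAccessURL is a hypothetical base URL for the internal CC API; the real
// proxy builds this from its own configuration.
const ccLogAccessURL = "https://cloud-controller.internal/internal/v4/log_access/%s"

// RequireLogAccess wraps a handler and rejects requests whose Authorization
// header is missing or not accepted by Cloud Controller for the given app GUID.
func RequireLogAccess(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := r.Header.Get("Authorization")
		if token == "" {
			http.Error(w, "missing Authorization header", http.StatusUnauthorized)
			return
		}

		// Assumption: the app GUID arrives as a query parameter; in practice it
		// comes from the log-cache read path (/api/v1/read/:guid).
		guid := r.URL.Query().Get("source_id")

		req, err := http.NewRequestWithContext(r.Context(), http.MethodGet, fmt.Sprintf(ccLogAccessURL, guid), nil)
		if err != nil {
			http.Error(w, "failed to build access check request", http.StatusInternalServerError)
			return
		}
		req.Header.Set("Authorization", token)

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, "access check failed", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			http.Error(w, "not authorized to read logs for this app", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```

In the Kubernetes world, the same shape of check would consult the user's access to the CFApp (or other role-based access) instead of the CC internal endpoint.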
cc @acosta11 who's thinking about logging and metrics
Here is the log-cache client on the CLI end. Let us know if you need any assistance.
There's WIP for this here: https://github.com/cloudfoundry/cli/compare/master...eirini-forks:pr-support-k8s-for-logging
Unfortunately, the existing tests for the k8s auth integration were failing and, once we addressed that, we found that the existing test for the Authorization header isn't putting the k8s config into a valid state. See this conversation thread where I ask for help from the EMEA team: https://cloudfoundry.slack.com/archives/C0297673ASK/p1646871220202859
Once this issue has been solved we can unblock this story.
@davewalter and I submitted a PR (https://github.com/cloudfoundry/cli/pull/2267) to add the auth header for logcache requests. We're waiting for feedback from the CLI team and from the EMEA folks.
Waiting for a release of the CLI to close this.
CLI version 8.4.0 has these changes: https://github.com/cloudfoundry/cli/releases/tag/v8.4.0
What happened?
We recently introduced an `AuthenticationMiddleware` that validates that clients are sending valid credentials for the cluster in an `Authorization` header with each request. After this change we started seeing that CLI requests to the log-cache endpoints were failing with `401` errors. The CLI output shows `Failed to retrieve logs from Log Cache: unexpected status code 401` during the staging portion of a push.
Related to Issue https://github.com/cloudfoundry/cf-k8s-controllers/issues/362 and PR https://github.com/cloudfoundry/cf-k8s-controllers/pull/364.
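To make the failure mode concrete, here is a minimal sketch of the kind of middleware described above. The `TokenValidator` interface and its wiring are illustrative assumptions, not the actual cf-k8s-controllers implementation.

```go
// Sketch of an authentication middleware that requires a valid Authorization
// header on every request and returns 401 otherwise. Clients that never send
// the header (like the CLI's log-cache client before the fix) fail here.
package apimiddleware

import "net/http"

// TokenValidator is an illustrative stand-in for whatever checks the
// credential against the cluster (e.g. a Kubernetes TokenReview).
type TokenValidator interface {
	Validate(authorizationHeader string) error
}

type AuthenticationMiddleware struct {
	validator TokenValidator
}

func NewAuthenticationMiddleware(v TokenValidator) *AuthenticationMiddleware {
	return &AuthenticationMiddleware{validator: v}
}

// Middleware rejects requests with a missing or invalid Authorization header
// before they reach the wrapped handler.
func (m *AuthenticationMiddleware) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		header := r.Header.Get("Authorization")
		if header == "" {
			http.Error(w, "missing Authorization header", http.StatusUnauthorized)
			return
		}
		if err := m.validator.Validate(header); err != nil {
			http.Error(w, "invalid credentials", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```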
What you expected to happen
We don't support logging yet, so I expected it to only say "Staging app and tracing logs..." during this section of the push output.
Acceptance Criteria
GIVEN I have deployed cf-k8s-controllers with the `/api/v1/read/:guid` log-cache endpoint protected by the `AuthenticationMiddleware` (use a version of the code from before #364)
WHEN I `cf push` an app
THEN I see that the CF CLI sends a valid `Authorization` header to the API and that we don't get `Failed to retrieve logs from Log Cache: unexpected status code 401` errors during the push
How to reproduce it (as minimally and precisely as possible)
1. Build the `master` branch of the CF CLI with `make build` (I used de37cbef85ee1c8464d5ba626f00c9d2b8309179)
2. Add the built binary (in `out`) to your `PATH`
3. `cf push` an app and see `401` errors during the staging portion
Anything else we need to know?
In #364 we omitted the log-cache endpoints from the `AuthenticationMiddleware` as a temporary workaround. We'll need to undo part of this to be able to see the `401` errors again.
I started a Slack thread about this here: https://cloudfoundry.slack.com/archives/C0297673ASK/p1639505811233100
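The workaround amounts to an allowlist of paths that skip the credential check. A small, hypothetical illustration (the variable and function names are assumptions, not the actual code from #364):

```go
package apimiddleware

import "strings"

// unauthenticatedPathPrefixes lists endpoints that temporarily bypass the
// authentication check; dropping the log-cache entry re-enables the 401s
// this issue describes.
var unauthenticatedPathPrefixes = []string{
	"/api/v1/read/", // log-cache read endpoint, exempted in #364
}

// isExempt reports whether a request path should skip authentication.
func isExempt(path string) bool {
	for _, prefix := range unauthenticatedPathPrefixes {
		if strings.HasPrefix(path, prefix) {
			return true
		}
	}
	return false
}
```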
One potential area to look into is how the CLI passes the `AccessToken` to the log-cache client versus how it does it for the CF API client: https://github.com/cloudfoundry/cli/blob/8c20e5118d7cb38f52d52b69060f6b77943dce6e/command/log_cache_client.go#L126
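The general idea is that the CF API client injects the token into every request, while the log-cache client was constructed with a plain HTTP client. A hedged sketch of one way to close that gap, using only the standard library (the type and function names here are mine, not the CLI's):

```go
package logcacheauth

import "net/http"

// tokenRoundTripper adds the current access token to every outgoing request,
// mirroring what the CF API client already does for its own requests.
type tokenRoundTripper struct {
	wrapped     http.RoundTripper
	accessToken func() string // supplier of the (possibly refreshed) token
}

func (t *tokenRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Clone before mutating: a RoundTripper must not modify the caller's request.
	r := req.Clone(req.Context())
	if token := t.accessToken(); token != "" {
		r.Header.Set("Authorization", token)
	}
	return t.wrapped.RoundTrip(r)
}

// NewHTTPClient returns an *http.Client that could be handed to the log-cache
// client so its requests carry the same Authorization header as CF API calls.
func NewHTTPClient(accessToken func() string) *http.Client {
	return &http.Client{
		Transport: &tokenRoundTripper{
			wrapped:     http.DefaultTransport,
			accessToken: accessToken,
		},
	}
}
```

PR https://github.com/cloudfoundry/cli/pull/2267, released in CLI 8.4.0, addressed this on the CLI side.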
Environment
Revision of codebase:
Kubernetes version (use `kubectl version`):
Cloud provider or hardware configuration: GKE
Others:
Dev Notes