skylenet closed this 3 weeks ago
⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

Since your pull request originates from a forked repository, GitGuardian is not able to associate the uncovered secrets with secret incidents on your GitGuardian dashboard. Skipping this check run and merging your pull request will create secret incidents on your GitGuardian dashboard.

🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
False positive: it's just `user:password` b64-encoded in a test.
As @pk910 mentioned to me, this will probably not work since the engine is running in its own container, so there's no access to the user's config file there. So we might need to:

- Option 1: Change this to fetch the credentials in the kurtosis CLI and then propagate them to the engine API.
- Option 2: Mount the docker config dir into the engine container. This will probably cause more problems, since the container won't have access to the different credential helpers that each system uses.
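To illustrate what Option 1 could look like, here is a minimal Go sketch (not Kurtosis code; the struct, the `usernamesFromConfig` helper, and the sample payload are all hypothetical) of a CLI reading the inline base64 `auth` entries from a docker `config.json` before propagating them to the engine API. It deliberately ignores `credsStore` helpers:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// dockerConfig mirrors the subset of ~/.docker/config.json that holds
// inline (base64-encoded) credentials.
type dockerConfig struct {
	Auths map[string]struct {
		Auth string `json:"auth"` // base64("user:password")
	} `json:"auths"`
}

// usernamesFromConfig parses a config.json payload and returns a map of
// registry -> username for every inline auth entry it can decode.
func usernamesFromConfig(raw []byte) (map[string]string, error) {
	var cfg dockerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	out := make(map[string]string)
	for registry, entry := range cfg.Auths {
		decoded, err := base64.StdEncoding.DecodeString(entry.Auth)
		if err != nil {
			return nil, err
		}
		out[registry] = strings.SplitN(string(decoded), ":", 2)[0]
	}
	return out, nil
}

func main() {
	// Sample payload; a real CLI would read ~/.docker/config.json and
	// would also need to consult credsStore helpers, which this skips.
	raw := []byte(`{"auths":{"https://index.docker.io/v1/":{"auth":"dXNlcjpwYXNzd29yZA=="}}}`)
	users, err := usernamesFromConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(users["https://index.docker.io/v1/"]) // prints "user"
}
```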
Good callout by @pk910. I think Option 1 is the way to go here. I did a similar thing to propagate GitHub auth credentials to the engine. Here is that PR: https://github.com/kurtosis-tech/kurtosis/pull/2113. Specifically, the `github_auth_store` finds the token and passes it to the engine. The engine then passes the token to each created enclave.
@tedim52 I've updated the PR, but I'm struggling to test this end-to-end. I'm building the engine with `./engine/scripts/build.sh` and the CLI with `./cli/scripts/build.sh`. The CLI seems to work fine and I can see my changes there, but somehow I can't get an updated engine container running.

I noticed that when I build the engine it gets some tag locally, e.g. `04c515-dirty`. I then ran `cli/cli/dist/cli_darwin_arm64/kurtosis engine start --version 04c515-dirty`, which works fine, and the engine starts. But somehow it doesn't reflect my code changes: e.g. if I change a string in a log line, I don't see that change.
Example: I wanted to change this specific line for debugging purposes:

```go
return stacktrace.Propagate(err, "Tried pulling image '%v' with platform '%v' but failed: %v", imageName, platform), false
```

To this:

```go
return stacktrace.Propagate(err, "Tried pulling image '%v' with platform '%v' but failed. XXXXX: %v", imageName, platform, imagePullOptions), false
```

But I don't see that change, even after running `./engine/scripts/build.sh` and using the image tag that it printed (via `kurtosis engine start --version XXXX`).

Any idea?
This is awesome! You likely have to run an engine restart: `cli/cli/dist/cli_darwin_arm64/kurtosis engine restart --version 04c515-dirty`. Running an engine start on an already running engine won't start it again, even if provided a different version. Once you've restarted, run `engine status` to ensure the version is `04c515-dirty`.
Yeah, I did something similar. I did stop it and then start it with the version flag. I also verified that it was running the right image using `engine status` and also via `docker ps`, and it did. But somehow the code change was still not there. I'll try to figure out the problem today. I was just wondering if I'm doing anything wrong in the local dev workflow.
@tedim52 I think I'm starting to understand the problem. I made that change to the `stacktrace.Propagate` message and was expecting it to show up at some point in the engine container, which wasn't the case. So just to make sure that my code changes were reaching the engine container, I changed some startup debug log there, and I was able to see that.

So this means that the part that is trying to get the docker auth from the `config.json` is happening somewhere else. This somewhere else seems to be the `api` container. On the `api` container I can see errors like `error reading Docker config file: open /root/.docker/config.json: no such file or directory`. I was assuming that the engine container would run the `pullImage()` function of the `docker_manager`, but that doesn't seem to be the case; it seems to be happening on the `api` container. Does it make sense to add the docker `config.json` with the auth creds to the API container?
fyi @tedim52, I just added the volume mount to the `api` container as well, so it has the `config.json` available under `/root/.docker/config.json`. I've tested this by pulling an image from a private repo on Docker Hub and it worked.
So the way it works now is:

1. Get all your credentials for any registries based on your local docker `config.json`. It should support all ways of obtaining creds, e.g. base64-encoded plain text or using the docker credential helpers. In my case I'm using `"credsStore": "osxkeychain"` locally since I'm on a mac.
2. Create a "docker config" volume (under `/root/.docker`) with a separate `config.json` that is generated based on all credentials that it was able to get from the local system. This `config.json` will have the credentials in plain text, b64-encoded, so that we don't need to deal with supporting all credential helpers in the engine/api containers.
3. On `imagePull()` we check if the image has a registry defined in the `config.json` with any auth information. If that's the case, we use that as the auth config for `imagePullOptions`.

It's still possible to provide credentials (username/password) via the ImageSpec. This will override any previous auth.
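The registry lookup in step 3 could be sketched roughly like this. `registryFromImage` is a hypothetical, simplified helper for illustration only; the real matching presumably lives alongside `pullImage()` in `container-engine-lib`'s `docker_manager` and handles more cases:

```go
package main

import (
	"fmt"
	"strings"
)

// registryFromImage extracts the registry host from an image reference,
// defaulting to Docker Hub's config.json key when no registry is given.
// Simplified: real reference parsing handles digests, ports, and more.
func registryFromImage(image string) string {
	parts := strings.SplitN(image, "/", 2)
	if len(parts) == 2 && (strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost") {
		return parts[0]
	}
	return "https://index.docker.io/v1/" // Docker Hub default
}

func main() {
	// Hypothetical generated config contents: registry -> base64("user:password").
	auths := map[string]string{"ghcr.io": "dXNlcjpwYXNzd29yZA=="}

	for _, image := range []string{"ghcr.io/org/app:1.0", "ubuntu:22.04"} {
		if _, ok := auths[registryFromImage(image)]; ok {
			// Here the real code would attach the auth to imagePullOptions.
			fmt.Printf("%s: pulling with stored auth\n", image)
		} else {
			fmt.Printf("%s: pulling anonymously\n", image)
		}
	}
}
```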
Some caveats:
Hey @skylenet, I think this approach is really good. And yes, regarding where the `imagePull` takes place: the `container-engine-lib` library is used in both the `api` and `engine` containers and the `cli`. Sometimes the cli/engine will invoke the `imagePull` command; we use the same `imagePull` function to pull the Kurtosis images needed for starting enclaves, so that's something important to keep in mind. But I think for now, mounting to just the `api` container is ok.
@tedim52 thank you for the review 🙏 I just updated the PR with the changes that you've requested.
Description

Supports docker credentials from config (`~/.docker/config.json`). Read https://github.com/kurtosis-tech/kurtosis/issues/2503

Is this change user facing?

Yes

References (if applicable)

Fixes https://github.com/kurtosis-tech/kurtosis/issues/2503