Open 45413 opened 3 years ago
Can you try starting the Cloud Run emulator from the terminal, passing the right options to minikube?

1. Stop the cloud-run-dev-internal minikube profile if it is already running on your local box. You can check it via status bar -> minikube; if it is started or paused, change it to stopped.
2. If you are using auto-managed dependencies with Cloud Code, the minikube that is used is the one we install in the Cloud Code path. You can check using `which -a minikube`, or it should be under your user profile; in my case /Users/<username>/Library/Application Support/cloud-code/installer/google-cloud-sdk/bin/minikube
3. From the VS Code terminal, start minikube with the mount option passed in: /Users/<username>/Library/Application\ Support/cloud-code/installer/google-cloud-sdk/bin/minikube start -p cloud-run-dev-internal --mount --keep-context --delete-on-failure
4. Once minikube is started, go to the project and choose "Run on Cloud Run Emulator" from the Cloud Code menu; this should use the existing emulator that was started.
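The steps above can be sketched as a short shell session. This is a sketch, not an official script: the Cloud Code-managed minikube path below is the macOS example from step 2 and may differ on your machine.

```shell
# Path of the Cloud Code-managed minikube binary (macOS example; adjust for your install)
MINIKUBE="$HOME/Library/Application Support/cloud-code/installer/google-cloud-sdk/bin/minikube"

# Step 1: stop the emulator's profile if it is running or paused
"$MINIKUBE" status -p cloud-run-dev-internal || true
"$MINIKUBE" stop -p cloud-run-dev-internal

# Step 3: restart it with the default host mount enabled
"$MINIKUBE" start -p cloud-run-dev-internal --mount --keep-context --delete-on-failure
```

After this, step 4 (running from the Cloud Code menu) should reuse the profile you just started.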
@sivakku Because the cloud-run-dev-internal container already existed, simply stopping the minikube instance was not enough to change any mount options; the existing instance also had to be deleted first. I should also point out that I am testing this on macOS 11.5 / VS Code 1.58.2, as I ran into some Mac-specific issues described below.
❯ minikube status -p cloud-run-dev-internal
cloud-run-dev-internal
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
❯ minikube start -p cloud-run-dev-internal --mount --keep-context --delete-on-failure
😄 [cloud-run-dev-internal] minikube v1.22.0 on Darwin 11.5
✨ Using the docker driver based on existing profile
👍 Starting control plane node cloud-run-dev-internal in cluster cloud-run-dev-internal
🚜 Pulling base image ...
🏃 Updating the running docker "cloud-run-dev-internal" container ...
❌ Exiting due to GUEST_MOUNT_CONFLICT: Sorry, docker does not allow mounts to be changed after container creation (previous mount: '', new mount: '/Users:/minikube-host)'
❯ minikube delete -p cloud-run-dev-internal
🔥 Deleting "cloud-run-dev-internal" in docker ...
🔥 Deleting container "cloud-run-dev-internal" ...
🔥 Removing /Users/austinsabel/.minikube/machines/cloud-run-dev-internal ...
💀 Removed all traces of the "cloud-run-dev-internal" cluster.
❯ minikube start -p cloud-run-dev-internal --mount --keep-context --delete-on-failure
😄 [cloud-run-dev-internal] minikube v1.22.0 on Darwin 11.5
✨ Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍 Starting control plane node cloud-run-dev-internal in cluster cloud-run-dev-internal
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=1986MB) ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
💗 To connect to this cluster, use: --context=cloud-run-dev-internal
This will successfully mount "/Users:/minikube-host".

~~For anyone using macOS and trying to use --mount-string "/local/path:/mount/path", you may be surprised that it fails silently, leaving an empty mount point inside the minikube instance. This feature does not currently work and is being tracked in Issue #2481 in the kubernetes/minikube repo. There is a work-around in the issue, but it requires manual intervention every time the instance is stopped.~~

Update: see comment below.
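A quick way to confirm the default mount from inside the VM is to list the mount target over SSH. A sketch, assuming the cloud-run-dev-internal profile above is running:

```shell
# With --mount and no --mount-string, the docker driver mounts /Users (on macOS)
# into the VM at /minikube-host; this should list your home directories.
minikube ssh -p cloud-run-dev-internal -- ls /minikube-host
```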
@45413 - OK, thanks for pointing out that the container needs to be deleted. @sharifelgamal is helping to look at --mount-string.

minikube start --mount --mount-string="/src/path:/dest/path"

should work as expected.
Again, this does work normally, but there is a known bug affecting minikube with the docker driver on macOS, as documented in the issue I linked: it mounts without error, but inside the minikube container the mount point is empty. I really just wanted to point it out for anyone else on a Mac who might run into it while testing, like I did.
I just tested this on macOS myself and mounting does in fact work:
selgamal ~/mount-me $ pwd
/Users/selgamal/mount-me
selgamal ~/mount-me $ ls
file-a.txt file-b
selgamal ~/mount-me $ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
selgamal ~/mount-me $ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /Users/selgamal/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
selgamal ~/mount-me $ minikube start --mount --mount-string "/Users/selgamal/mount-me:/foo"
😄 minikube v1.22.0 on Darwin 11.5.1
✨ Automatically selected the docker driver. Other choices: hyperkit, ssh
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.21.2 preload ...
> preloaded-images-k8s-v11-v1...: 502.14 MiB / 502.14 MiB 100.00% 46.65 Mi
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
selgamal ~/mount-me $ minikube ssh
docker@minikube:~$ ls /
Release.key bin boot data dev docker.key etc foo home kic.txt kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
docker@minikube:~$ ls /foo
file-a.txt file-b
Thanks! I hate to admit it, but I feel a little foolish: it turns out that in my initial test I was using the mount path "/data" inside the minikube container, which is a persistent volume that minikube mounts by default and which supersedes the --mount-string argument. However, if I follow your steps exactly and use a non-existent directory like "/foo" for the mount point, it works as you describe. Sorry for the confusion.
$ pwd
/Users/test/mount-me
$ ls
file-a.txt file-b
$ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
$ minikube delete
🙄 "minikube" profile does not exist, trying anyways.
💀 Removed all traces of the "minikube" cluster.
$ minikube start --mount --mount-string "/Users/test/mount-me:/foo"
😄 minikube v1.22.0 on Darwin 11.5
✨ Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=1986MB) ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ minikube ssh
docker@minikube:~$ ls /
Release.key boot dev etc home kind lib32 libx32 mnt proc run srv tmp var
bin data docker.key foo kic.txt lib lib64 media opt root sbin sys usr
docker@minikube:~$ ls /foo
file-a.txt file-b
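To avoid the "/data" pitfall above, it can help to check whether a candidate mount target is already claimed by one of minikube's built-in mounts before passing it to --mount-string. A sketch, assuming minikube is on PATH and running:

```shell
# List existing mounts inside the minikube VM. Paths like /data are
# provisioned by minikube itself and will shadow a --mount-string target,
# so pick a path that does not appear in this list (e.g. /foo).
minikube ssh -- "mount | grep ' /data ' && echo '/data is already mounted by minikube'"
```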
Keeping this open for the feature request to make it easier to use mounted secrets for Cloud Run in minikube.

Use case: I want to be able to mount local directories inside a container running in the Cloud Run emulator, to simulate the Preview feature of mounting secrets as described in the Cloud Run documentation here.

Feature: When utilizing the default cloud-run-dev-internal minikube instance for running/debugging in the Cloud Run emulator, allow options in launch.json to mount local volumes into the minikube docker container as k8s volumes, plus additional options to reference those as volumeMounts in the container manifest. Additionally, if this could be done directly through Cloud Code - Secret Manager, allowing real secrets to be attached to the run/debug instance and mounted as ephemeral storage for the life of the debug session, it would make debugging as seamless as attaching secrets to production Cloud Run instances.
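As a purely hypothetical sketch of what such a launch.json option could look like: the `volumes` and `volumeMounts` keys below do not exist in Cloud Code today and are illustrative only, and the `cloudcode.cloudrun` launch type and paths are assumptions for the example.

```jsonc
{
  "configurations": [
    {
      "type": "cloudcode.cloudrun",
      "request": "launch",
      "name": "Run/Debug on Cloud Run Emulator",
      // Hypothetical keys: expose a local directory to the minikube
      // container as a volume, then surface it in the service container.
      "volumes": [
        { "name": "local-secrets", "hostPath": "/Users/<username>/secrets" }
      ],
      "volumeMounts": [
        { "name": "local-secrets", "mountPath": "/secrets" }
      ]
    }
  ]
}
```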