Open tvvignesh opened 4 years ago
@tvvignesh `skaffold dev` cleans up all intermediate images locally. However, remote images and layers are not cleaned up.
This could definitely be added as a feature.
Our team also owns Minikube, and I would love to know why Minikube isn't working for you. Is it because of the nature of your application, or because the Minikube installation is difficult?
Thanks, Tejal
@tejal29 Thanks for your quick reply. The main reason Minikube does not work for me is that it eats up a lot of resources on my system alongside the other applications I am running. There is also the fact that you end up exposing many of your local ports in the cluster through a tunnel if you are working on things like OAuth, where services need a public URL.
Otherwise, it's good to work with Minikube.
FYI, current GCR pricing is $0.026 per GB per month, so how big the bill gets depends on how large your images are and how frequently you push.
I agree with @tejal29 that this could be added as a feature. It is a relatively sizable (medium to large) one to implement though, as the cleanup logic for registries is very different from cleaning up locally. I don't think we'll have the bandwidth for this anytime soon. Community contributions are always welcome. :) This one should definitely start with a design doc first!
In the meantime I would recommend separating your production (`skaffold run`) and dev (`skaffold dev`) flows into two separate projects (hence GCR registries), or at least two separate image names (with profiles, maybe?), and setting up a GCP Cloud Scheduler task to run a cleanup script daily on the dev project / images.
What do you think?
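A daily cleanup script along those lines might be sketched as follows. The repo name and the 7-day cutoff are placeholders, and this assumes `gcloud` is authenticated for the dev project; the `timestamp.datetime` filter is GCR-specific, so adjust for other registries:

```shell
#!/usr/bin/env bash
# Sketch: delete dev-registry image digests older than a cutoff.
set -euo pipefail

# Hypothetical dev image; override via the REPO environment variable.
REPO="${REPO:-asia.gcr.io/project-dev/repo-name}"

# Cutoff date 7 days ago (GNU date first, BSD/macOS date as fallback).
CUTOFF="$(date -u -d '7 days ago' +%Y-%m-%d 2>/dev/null || date -u -v-7d +%Y-%m-%d)"

if command -v gcloud >/dev/null 2>&1; then
  # List digests uploaded before the cutoff, then delete each one
  # together with all tags pointing at it.
  gcloud container images list-tags "$REPO" \
    --filter="timestamp.datetime < ${CUTOFF}" \
    --format='get(digest)' |
  while read -r digest; do
    gcloud container images delete "${REPO}@${digest}" --force-delete-tags --quiet
  done
else
  echo "gcloud not installed; skipping cleanup for $REPO (cutoff $CUTOFF)"
fi
```

Run one script per image name (or loop over them), and schedule it with Cloud Scheduler or a plain cron job.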
@balopat Thanks for your reply. I had the pricing concern since I was using `skaffold dev`, which periodically hot-reloads and pushes new images to the repo (not sure if that is in my control). My images are around 350 MB each (I have multiple Dockerfiles, one per microservice) and close to 10 services. So I had to go to each and every registry and clean it up, and it also took quite some time to push.
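As a rough back-of-envelope (assuming, pessimistically, that each pushed tag stores its full 350 MB with no layer sharing, and picking a hypothetical 30 retained dev tag sets for illustration), the storage cost at the quoted $0.026/GB-month would be about:

```shell
# 10 services x 0.35 GB each x $0.026 per GB-month x ~30 retained tag sets:
awk 'BEGIN { printf "$%.2f per month\n", 10 * 0.35 * 0.026 * 30 }'
# prints "$2.73 per month"
```

So the storage cost itself stays small; the bigger pain is the push time and the manual per-registry cleanup.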
What I did to solve this issue right now is to use Skaffold for my CI/CD pipelines and Telepresence for my dev workflows, and that works great. I swap the container which I am developing, make all my changes, and keep iterating (no image is pushed to a remote registry). Once development is done, I commit it to the repo (which is GitLab in my case) and GitLab CI takes care of the pipeline using Skaffold.
I am not sure, but if you have any plans to provide Telepresence-like functionality, with swapping and port-forwarding all services in the cluster, I could stick to just one tool for both dev and prod.
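For context, the swap step in that workflow looks roughly like this (Telepresence v1 syntax; the service name, port, and run command are placeholders):

```shell
# Replace the in-cluster deployment with a proxy and run the service
# locally; nothing is built or pushed while iterating.
telepresence --swap-deployment my-service --expose 8080 \
  --run npm run dev
```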
My skaffold file looks something like this (removed sensitive details):
```yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: asia.gcr.io/project-dev/repo-name
deploy:
  kubeContext: cluster-dev
  kubectl:
    manifests:
      - kubernetes/dev/svc/*.yml
profiles:
  - name: dev-svc
    build:
      artifacts:
        - image: asia.gcr.io/project-dev/repo-name
    deploy:
      kubeContext: cluster-dev
      kubectl:
        manifests:
          - kubernetes/dev/svc/*.yml
  - name: dev-db
    build:
      tagPolicy:
        envTemplate:
          template: "{{.IMAGE_NAME}}:latest"
      artifacts:
        - image: asia.gcr.io/project-dev/postgres
          context: kubernetes/dev/db/
    deploy:
      kubeContext: cluster-dev
      kubectl:
        manifests:
          - kubernetes/dev/db/*.yml
  - name: prod-svc
    build:
      artifacts:
        - image: asia.gcr.io/project-prod/repo-name
    deploy:
      kubeContext: cluster-prod
      kubectl:
        manifests:
          - kubernetes/prod/svc/*.yml
  - name: prod-db
    build:
      artifacts:
        - image: asia.gcr.io/project-prod/postgres
          context: kubernetes/prod/db/
    deploy:
      kubeContext: cluster-prod
      kubectl:
        manifests:
          - kubernetes/prod/db/*.yml
```
And my GitLab pipeline:

```yaml
stages:
  - development
  - production

variables:
  DOCKER_HOST: tcp://dind.xyz.com:1234

development:
  image:
    name: gcr.io/k8s-skaffold/skaffold:latest
  stage: development
  script:
    - echo "$GCP_SERVICE_KEY" > gcloud-service-key.json
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project project-dev
    - gcloud config set compute/zone asia-south1-a
    - gcloud container clusters get-credentials cluster-dev-1
    - kubectl config get-contexts
    - skaffold run -p dev-svc
  only:
    - master

production:
  image:
    name: gcr.io/k8s-skaffold/skaffold:latest
  stage: production
  script:
    - echo "$GCP_PROD_SERVICE_KEY" > gcloud-service-key.json
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project project-prod
    - gcloud config set compute/zone asia-south1-a
    - gcloud container clusters get-credentials cluster-prod-1
    - kubectl config get-contexts
    - skaffold run -p prod-svc
  only:
    - production
```
Re: remote image cleanup, I wouldn't mind seeing something like this added to Skaffold, but realistically our team isn't going to prioritize this any time soon. I'll leave this open for now though. Contributions certainly welcome here!
Hi. I have been using a remote GKE cluster as my development target since it's difficult to run Minikube and test everything on my system. Everything works great with `skaffold dev` pushing images remotely during development, and I am able to test it. But when terminating `skaffold dev`, I would like the remote GCR images which were pushed to be cleaned up as well, to avoid doing it manually later. Too many tags are getting pushed to GCR because of this. Rather, all tags could be stored permanently in the remote registry only when running `skaffold run`, but cleaned up when using `skaffold dev`.
Is there any other way to avoid issues like these? Thanks. I hope the GCloud bill doesn't shoot up because of this.