helm / helm

The Kubernetes Package Manager
https://helm.sh
Apache License 2.0

--recreate-pods tag and statefulsets upgrade #1971

eduardobaitello closed this issue 7 years ago

eduardobaitello commented 7 years ago

Is the --recreate-pods flag supposed to recreate the pods of a StatefulSet?

I'm trying to update a StatefulSet with NEWCHART, which describes a new image for my StatefulSet. The only differences from the deployed release are the new image and the chart version. I run helm upgrade RELEASE NEWCHART --recreate-pods, which upgrades my StatefulSet, but its pod still uses the old image. If I manually delete the pod, it's recreated with the new image, but I want this to happen automatically as part of the helm upgrade command. Is that possible?

helm version: 2.2.0
kubernetes version: 1.5.1
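A sketch of the sequence described above (RELEASE, NEWCHART, and the app=MYAPP label are placeholders; the kubectl check is just one way to verify which image the pod is actually running):

# RELEASE and NEWCHART are placeholders for the real release and chart
helm upgrade RELEASE NEWCHART --recreate-pods

# Inspect the image the StatefulSet pod is currently running
kubectl get pods -l app=MYAPP -o jsonpath='{.items[*].spec.containers[*].image}'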

eduardobaitello commented 7 years ago

Looking at the code, it seems this flag uses selectors for pod deletion, but I can't understand how it's done, nor can I find an explanation in the docs.

@nmakhotkin , can you help me please?

technosophos commented 7 years ago

Yeah, we need to cover this in the docs

nmakhotkin commented 7 years ago

@eduardobaitello By Kubernetes convention, pods are created with specific labels, and controllers later reference them through selectors. See more at https://kubernetes.io/docs/user-guide/labels/
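As an illustration only (not necessarily the exact code path helm/tiller takes), deleting pods by their labels is enough to get them recreated from the current spec, because the owning controller brings them back:

# Delete all pods matching the workload's selector (app=nginx, as in the
# manifest discussed below); the StatefulSet controller then recreates
# them from the currently deployed template.
kubectl delete pods -l app=nginx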

nmakhotkin commented 7 years ago

@eduardobaitello Maybe this should be done more accurately than it is now; I mean the selection of the right pods to restart/recreate.

eduardobaitello commented 7 years ago

@nmakhotkin, thanks for the reply.

I'm using a chart with a .yaml file that describes an nginx-slim:0.7 image for my StatefulSet (created with helm install). Then I run helm upgrade --recreate-pods with another chart version whose .yaml file defines the nginx-slim:0.8 image.
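A sketch of those steps, with hypothetical release and chart names:

# Initial install from the chart version that pins nginx-slim:0.7
helm install ./mychart-0.1.0.tgz --name my-release

# Upgrade to the chart version that pins nginx-slim:0.8,
# asking helm to recreate the pods
helm upgrade my-release ./mychart-0.2.0.tgz --recreate-pods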

The selector is defined as follows, but the pod is not deleted during the upgrade:

---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.7
        imagePullPolicy: Never
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "storage-nginx"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Mi

Am I missing something?

nmakhotkin commented 7 years ago

There is a bug somewhere, I think.

nmakhotkin commented 7 years ago

I mean a bug in helm's pod-recreation logic.

technosophos commented 7 years ago

@nmakhotkin can you reproduce that issue?

nmakhotkin commented 7 years ago

@technosophos I'll try to reproduce it soon and report the results here.

eduardobaitello commented 7 years ago

I'm running further tests, now with helm v2.2.2 (since the changelog for v2.2.1 contains fixes for the --recreate-pods flag):

Now my StatefulSet pod is recreated by helm upgrade. But even with the --debug flag, no log information about the recreation is printed, which would be very useful to have. Worse, during helm rollback the pod is not recreated.

The documentation still has no relevant information about how to use the flag properly, so either I'm doing something wrong or there's still a bug.
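In terms of commands, the behaviour described above corresponds roughly to the following (release name, chart path, and revision number are placeholders):

# Pod is recreated and picks up the new image, as expected
helm upgrade my-release ./mychart-0.2.0.tgz --recreate-pods --debug

# Pod is NOT recreated, even though the flag is passed
helm rollback my-release 1 --recreate-pods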

technosophos commented 7 years ago

I'm going to add the bug label to this one, since that rollback issue is definitely a bug.

ata18 commented 7 years ago

I was able to reproduce this. I did a helm install of memcached which uses statefulsets and tried to upgrade it by changing the image from memcached:1.4.36-alpine to memcached:1.4-alpine with the --recreate-pods option. It worked fine and the pods were recreated. But when I did a rollback, they were not recreated.
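A rough reproduction along those lines (names are illustrative, and this assumes the memcached chart exposes the image tag as a value called image):

# Install the chart, then upgrade with a different image and --recreate-pods
helm install stable/memcached --name mc-test
helm upgrade mc-test stable/memcached --set image=memcached:1.4-alpine --recreate-pods   # pods recreated

# Roll back to revision 1: pods are NOT recreated
helm rollback mc-test 1 --recreate-pods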

ata18 commented 7 years ago

On subsequent manual deletion of those pods, they came up with the expected image, which is 1.4.36-alpine. So it looks like rollback doesn't respect the --recreate-pods flag.
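For example (pod name is illustrative for a single-replica StatefulSet):

# After manual deletion the StatefulSet controller brings the pod back
# with the image from the rolled-back revision
kubectl delete pod mc-test-memcached-0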

ata18 commented 7 years ago

Looks like this issue is fixed on the master branch. In pkg/helm/client.go, req.Recreate = h.opts.recreate was missing from the RollbackRelease method in older versions.

ata18 commented 7 years ago

@technosophos I guess this issue could be closed.

yongzhang commented 7 years ago

Hi all, I saw this issue was closed, but I have the same problem on helm 2.6.0. Am I missing something? Here's the issue I created: https://github.com/kubernetes/helm/issues/3359