puppetlabs / pupperware

Container fun time lives here.

k8s: Doesn't work with Helm 3 #205

Closed baurmatt closed 4 years ago

baurmatt commented 4 years ago

Describe the Bug

When I try to install the pupperware k8s Helm chart with Helm 3, the following error occurs:

$ helm upgrade --install --namespace puppetserver puppetserver ./ --set puppetserver.puppeturl='https://github.com/puppetlabs/control-repo.git'
Release "puppetserver" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(CronJob.spec): unknown field "selector" in io.k8s.api.batch.v1beta1.CronJobSpec
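For context: Helm 3 validates rendered manifests against the Kubernetes OpenAPI schema before applying them, which Helm 2's Tiller did not do as strictly, so a stray field like this surfaces only now. In `batch/v1beta1`, `CronJobSpec` has no `selector` field (label selectors are managed by the Job controller, not set on the CronJob). A minimal sketch of a CronJob that would pass this validation — the name, schedule, image, and args below are illustrative, not taken from the chart:

```yaml
# Hypothetical sketch of a batch/v1beta1 CronJob that passes Helm 3's
# schema validation. Note there is no "selector" under spec.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: r10k-deploy            # hypothetical name
spec:
  schedule: "*/15 * * * *"     # hypothetical schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: r10k
              image: puppet/r10k   # illustrative image reference
```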

Expected Behavior

The chart installs successfully with Helm 3.

Steps to Reproduce

helm upgrade --install --namespace puppetserver puppetserver ./ --set puppetserver.puppeturl='https://github.com/puppetlabs/control-repo.git'

Environment

$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}


Xtigyro commented 4 years ago

Hey @baurmatt - thank you for reporting it. Support for Helm v3 hasn't been planned yet; it should happen in the next few months.

Let me check with the rest of the team whether we can plan for supporting both Helm versions simultaneously.

Xtigyro commented 4 years ago

@baurmatt Could you please test the early Helm v3 support in the chart: https://github.com/Xtigyro/puppetserver-helm-chart/tree/v300

cretz commented 4 years ago

@Xtigyro - This was happening for me too. I tried this branch (I'm using the latest Argo, which has Helm 3, and set the targetRevision to that branch), and everything has worked for me so far, although I haven't tested much.

Xtigyro commented 4 years ago

@cretz Very nice - thank you for testing it! I appreciate it.

cretz commented 4 years ago

@Xtigyro - I may have spoken too soon; I get this when setting r10k.code.viaSsh.credentials.existingSecret and r10k.hiera.viaSsh.credentials.existingSecret:

ValidationError(CronJob.spec.jobTemplate.spec.template.spec.volumes[2].secret): unknown field "fsGroup" in io.k8s.api.core.v1.SecretVolumeSource

Unsure if this is related to the Helm version; if not, I can create another issue.
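For context: in `core/v1`, a `SecretVolumeSource` only accepts `secretName`, `items`, `defaultMode`, and `optional`. `fsGroup` is a pod-level setting that lives in the pod's `securityContext`, where it controls group ownership of mounted volumes. A hedged sketch of the valid placement — the volume and secret names are hypothetical, not taken from the chart:

```yaml
# fsGroup belongs in the pod-level securityContext, not inside a secret
# volume source; per-file permissions on the secret use defaultMode.
spec:
  securityContext:
    fsGroup: 1000                  # group ownership applied to mounted volumes
  volumes:
    - name: ssh-key-volume         # hypothetical volume name
      secret:
        secretName: r10k-ssh-key   # hypothetical secret name
        defaultMode: 0400          # valid SecretVolumeSource field
```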

Xtigyro commented 4 years ago

@cretz Seems plausible. Will test that. Thanks!

cretz commented 4 years ago

I made a fork and removed the erroneous fsGroup fields from the secret volume sources; now I'm getting this for the puppet server pod:

MountVolume.SetUp failed for volume "init-compilers-volume" : configmap "init-compilers-config" not found

Presumably this is because I've left puppetserver.multiCompilers.enabled at its default of false.

EDIT: After putting a conditional around that setting, the chart works. Thanks.
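The fix described above — rendering the configmap volume only when multi-compilers support is enabled, so a default install doesn't reference a configmap that is never created — might look something like this in the deployment template. This is a hypothetical sketch; the surrounding template structure and indentation are assumed, with only the volume and configmap names taken from the error message:

```yaml
# Hypothetical Helm template snippet: only mount the init-compilers
# configmap when multi-compilers support is enabled.
{{- if .Values.puppetserver.multiCompilers.enabled }}
      volumes:
        - name: init-compilers-volume
          configMap:
            name: init-compilers-config
{{- end }}
```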

Xtigyro commented 4 years ago

@cretz Could you please retest? A fix has been pushed.

cretz commented 4 years ago

@Xtigyro - See the comment above yours with another issue. With puppetserver.multiCompilers.enabled set to false, you still get that config map error on your latest branch. I fixed it in my local fork by putting a conditional around that volume in the deployment YAML.

Xtigyro commented 4 years ago

@cretz Yes, I saw it, but I'm fixing the bugs one by one so we don't introduce anything unexpected. I'll work on that tomorrow.

Thanks!

Xtigyro commented 4 years ago

@cretz Fixed - it had nothing to do with the migration to Helm v3, so we're still good. Thank you!

Xtigyro commented 4 years ago

@baurmatt @cretz Helm v3 support has been released.