teamhephy / slugbuilder


Scala app deploys fail #18

Open UlTriX opened 4 years ago

UlTriX commented 4 years ago

Trying to deploy a Scala app using a buildpack (builder v2.13.3 and slugbuilder v2.7.3), but it fails when the buildpack tries to write to the base dir (access denied).

I tried both the provided buildpacks and a custom one pointing to the latest version; same issue.

The line of the buildpack that fails: cat << EOF > "${BASE_DIR}/export"

It seems like an issue with folder permissions and the user that runs the buildpack.
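For context, a minimal sketch of the failing pattern (the BASE_DIR value and the heredoc body are my assumptions for illustration; only the cat line itself is from the buildpack):

# Sketch: the buildpack writes an "export" file into its own base dir.
# If BASE_DIR is unset/empty or not writable by the build user, the
# redirection below fails with "Permission denied".
BASE_DIR="/tmp/buildpacks/12-scala"   # illustrative path
cat << EOF > "${BASE_DIR}/export"
export PATH="\$HOME/.sbt_home/bin:\$PATH"
EOF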

Any ideas about what is happening, and some possible solutions?

Thanks in advance.

Cryptophobia commented 4 years ago

Can you provide a complete log of the slugbuilder pod after it terminates?

I suspect it has to do with permissions applied to the slugbuilder user or a security profile applied to the pod. What version of Kubernetes are you running Hephy on?
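For reference, something along these lines should grab the full log before the pod goes away (the pod name is a placeholder, and I am assuming the build pods run in the deis namespace):

kubectl -n deis get pods                      # look for the slugbuild-... pod
kubectl -n deis logs <slugbuilder-pod-name>   # dump its full build log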

UlTriX commented 4 years ago

These are the last lines of the slugbuilder pod's log before it terminates:

[success] Total time: 57 s, completed Mar 5, 2020 4:06:02 PM
[info] Wrote /tmp/scala_buildpack_build_dir/target/scala-2.11/play-getting-started_2.11-1.0-SNAPSHOT.pom
[info] Packaging /tmp/scala_buildpack_build_dir/target/scala-2.11/play-getting-started_2.11-1.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /tmp/scala_buildpack_build_dir/target/scala-2.11/play-getting-started_2.11-1.0-SNAPSHOT-web-assets.jar ...
[info] Done packaging.
[info] Packaging /tmp/scala_buildpack_build_dir/target/scala-2.11/play-getting-started_2.11-1.0-SNAPSHOT-sans-externalized.jar ...
[info] Done packaging.
[success] Total time: 1 s, completed Mar 5, 2020 4:06:02 PM
-----> Dropping ivy cache from the slug
-----> Dropping sbt boot dir from the slug
-----> Dropping compilation artifacts from the slug
/tmp/buildpacks/12-scala/bin/compile: line 214: //export: Permission denied

The same error is returned on two Kubernetes clusters I control, versions 1.15.10 and 1.15.5.

Cryptophobia commented 4 years ago

Is there already a file/folder called export in the repo you are building?

Cryptophobia commented 4 years ago

This seems to be a step in the buildpack that is very specific to Heroku. You could also just clone the Scala buildpack repo and remove those last few lines, then set BUILDPACK_URL to your cloned version.
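Roughly, the workaround could look like this (the fork URL and app name are placeholders; the lines to delete are the ones writing "${BASE_DIR}/export" near the end of bin/compile):

git clone https://github.com/heroku/heroku-buildpack-scala.git
cd heroku-buildpack-scala
# edit bin/compile: delete the trailing block that writes "${BASE_DIR}/export"
git commit -am "drop export step" && git push <your-fork-url> master
deis config:set BUILDPACK_URL=<your-fork-url> -a <your-app>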

Cryptophobia commented 4 years ago

They added this export functionality here in the Scala buildpack, and I am not sure why: https://github.com/heroku/heroku-buildpack-scala/pull/135

UlTriX commented 4 years ago

> Is there already a file/folder called export in the repo you are building?

No, there is none.

> This seems to be a step in the buildpack that is very specific to Heroku. You could also just clone the Scala buildpack repo and remove those last few lines, then set BUILDPACK_URL to your cloned version.

Thank you for the idea; it seems like a good workaround. I will create my custom buildpack for now.

> They added this export functionality here in the Scala buildpack, and I am not sure why: heroku/heroku-buildpack-scala#135

Indeed. It seems to me it is about running different buildpacks over the same app, perhaps?
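If so, the mechanism would presumably look something like this sketch (not the actual builder code; the paths are assumptions): a later buildpack sources the export file written by an earlier one to inherit its environment:

# Hypothetical: before compiling, source exports left by earlier buildpacks.
for bp_dir in /tmp/buildpacks/*/; do
  [ -f "${bp_dir}export" ] && . "${bp_dir}export"
done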

This means building Scala apps on the latest Workflow is broken, since the default buildpack points to a version after that change. Perhaps custom buildpacks are needed for Workflow? Is there any way of running the build as root?

Thank you for all the help.

Cryptophobia commented 4 years ago

Another way to try to solve this would be to create a PSP (PodSecurityPolicy) for the deis namespace:

kubectl get psp --all-namespaces
NAME             PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
deis.privileged   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *

https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-cluster-psp/
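For example, a permissive policy matching the row above could be created roughly like this (a sketch based on that guide, not a tested config):

kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: deis.privileged
spec:
  privileged: true
  allowedCapabilities: ["*"]
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes: ["*"]
EOF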

Not sure if it will solve it, though. This would mean that minikube, or wherever you are running k8s, is restricting the pods in some way.

@UlTriX, are you running minikube with a VM driver set?

UlTriX commented 4 years ago

I am going to try to mess around with the security policies.

I am not running minikube. I tested on two k8s clusters installed on bare metal (one a fresh install) and also on local Docker Desktop (with Kubernetes enabled).