jfrog / charts

JFrog official Helm Charts
https://jfrog.com/integration/helm-repository/
Apache License 2.0

Fails to start on clean install : helm3 #612

Closed · snickerson-holon closed this 4 years ago

snickerson-holon commented 4 years ago

Is this a request for help?: Possibly; perhaps I am missing something. The instructions were only two commands. Is a further install step required that is not mentioned in the docs?


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Version of Helm and Kubernetes: Helm 3.0.2, Kubernetes Version: v1.16.3

Which chart: jfrog/artifactory-jcr

What happened: After installing to k8s, the pod sits in a pending state with restarts.

What you expected to happen: Artifactory to be available.

How to reproduce it (as minimally and precisely as possible):
  1. Clean install of Kubernetes (via Rancher in my case) and a clean install of kubectl/helm3.
  2. Added the jfrog repo (sudo helm repo add jfrog https://charts.jfrog.io) and updated the repo.
  3. Created namespace cattle-dog.
  4. Attempted to install via helm: sudo helm install artifactory --namespace cattle-dog jfrog/artifactory-jcr

The Artifactory install runs successfully, but the pod sits in a pending state with restarts. helm uninstall was successful.

Anything else we need to know: Could this have something to do with the client config? "helm repo add http://:/artifactory/ --username --password helm repo update"

eldada commented 4 years ago

Hey @snickerson-holon. The helm setup is right. I assume the "pending state" you note is the pod. You should try figuring out the reason for the pending state with

kubectl describe pod -n cattle-dog <pod-name>

Look at the Events: section at the bottom for clues on what's going on.

snickerson-holon commented 4 years ago

Thanks for the reply. Is JCR a repository itself, or a way to connect to an Artifactory repository? Being new to Artifactory, and to repositories in general, perhaps I am misreading the docs. TBH, I have since attempted to install artifactory as well as artifactory-ha, and they fail similarly. Here are the events from kubectl describe on the nginx pod:


```
  Type     Reason     Age                    From                  Message
  ----     ------     ----                   ----                  -------
  Normal   Scheduled  <unknown>              default-scheduler     Successfully assigned default/holon-artifactory-artifactory-nginx-75885955bc-g75cf to ranch-hand1
  Normal   Pulled     4m35s                  kubelet, ranch-hand1  Container image "alpine:3.10" already present on machine
  Normal   Created    4m35s                  kubelet, ranch-hand1  Created container setup
  Normal   Started    4m35s                  kubelet, ranch-hand1  Started container setup
  Normal   Pulled     4m34s                  kubelet, ranch-hand1  Container image "docker.bintray.io/jfrog/nginx-artifactory-pro:6.16.0" already present on machine
  Normal   Created    4m34s                  kubelet, ranch-hand1  Created container nginx
  Normal   Started    4m34s                  kubelet, ranch-hand1  Started container nginx
  Warning  Unhealthy  2m49s (x9 over 4m9s)   kubelet, ranch-hand1  Liveness probe failed: Get http://10.42.0.76:80/artifactory/webapp/#/login: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  2m39s (x10 over 4m9s)  kubelet, ranch-hand1  Readiness probe failed: Get http://10.42.0.76:80/artifactory/webapp/#/login: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
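
The failing probe path above (/artifactory/webapp/#/login) comes from the chart's nginx probe settings. If the backend simply needs more time to come up, the timings can be relaxed with a values override. This is only a sketch, assuming the chart exposes them under nginx.livenessProbe / nginx.readinessProbe; verify the keys against your chart version's values.yaml:

```yaml
# Sketch of a probe override, assuming the chart exposes these keys
# (verify against the chart's own values.yaml before using).
nginx:
  livenessProbe:
    initialDelaySeconds: 180   # give Artifactory longer to start before the first check
    timeoutSeconds: 10
    periodSeconds: 10
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 60
    timeoutSeconds: 10
    periodSeconds: 10
    failureThreshold: 10
```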
eldada commented 4 years ago
  1. Can you confirm the Artifactory pod is ok? kubectl get pods -n cattle-dog
  2. If it is, can you kubectl port-forward -n cattle-dog <artifactory-pod> 8081:8081 and open http://localhost:8081 in your browser? Do you see the JCR UI?
danielezer commented 4 years ago

@snickerson-holon is this still an issue for you?

willrof commented 4 years ago

Same here, what could it be? Running on Kubeadm (Helm 3.1.1, Kubernetes 1.17.0)

Installation goes without error, and then it looks exactly like snickerson-holon's:

```
user@lab-1:~$ k get all
NAME                                                   READY   STATUS             RESTARTS   AGE
pod/jfrog-art-oss-artifactory-0                        0/1     Pending            0          21m
pod/jfrog-art-oss-artifactory-nginx-7bf955d486-gvxm9   0/1     CrashLoopBackOff   8          21m
pod/jfrog-art-oss-postgresql-0                         0/1     Pending            0          21m
pod/shell-demo                                         1/1     Running            1          28d

NAME                                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/jfrog-art-oss-artifactory           ClusterIP      10.96.170.223   <none>        8082/TCP,8081/TCP            21m
service/jfrog-art-oss-artifactory-nginx     LoadBalancer   10.96.93.165    <pending>     80:31759/TCP,443:32158/TCP   21m
service/jfrog-art-oss-postgresql            ClusterIP      10.96.236.70    <none>        5432/TCP                     21m
service/jfrog-art-oss-postgresql-headless   ClusterIP      None            <none>        5432/TCP                     21m
service/kubernetes                          ClusterIP      10.96.0.1       <none>        443/TCP                      51d

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jfrog-art-oss-artifactory-nginx   0/1     1            0           21m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/jfrog-art-oss-artifactory-nginx-7bf955d486   1         1         0       21m

NAME                                         READY   AGE
statefulset.apps/jfrog-art-oss-artifactory   0/1     21m
statefulset.apps/jfrog-art-oss-postgresql    0/1     21m

user@lab-1:~$ k describe pod jfrog-art-oss-artifactory-0
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  41s (x20 over 26m)  default-scheduler  error while running "VolumeBinding" filter plugin for pod "jfrog-art-oss-artifactory-0": pod has unbound immediate PersistentVolumeClaims

user@lab-1:~$ k get pv
No resources found in default namespace.

user@lab-1:~$ k get pvc
NAME                                             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
artifactory-volume-jfrog-art-oss-artifactory-0   Pending                                                     28m
data-jfrog-art-oss-postgresql-0                  Pending                                                     28m

user@lab-1:~$ k describe pvc artifactory-volume-jfrog-art-oss-artifactory-0
Events:
  Type    Reason         Age                    From                         Message
  ----    ------         ----                   ----                         -------
  Normal  FailedBinding  3m40s (x102 over 28m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```

I can't find the installation instructions mentioning that I have to create a StorageClass or PersistentVolume myself. Should it have been created during the deploy?

danielezer commented 4 years ago

@willrof most k8s clusters come with a default storage class. Can you check if you have a storage class in your cluster?

willrof commented 4 years ago

It seems that I don't have any set.

```
user@lab-1:~$ k get storageclass -A
No resources found
```
danielezer commented 4 years ago

Which type of cluster are you using?

willrof commented 4 years ago

I'm using a kubeadm cluster deployed on 2 Debian VMs on GCP, 1 master and 1 node, as a test environment since I'm getting these errors. The cluster is empty other than jfrog-artifactory.

danielezer commented 4 years ago

Sounds like you have to create a storage class that points to GCE Persistent disks - https://kubernetes.io/docs/concepts/storage/storage-classes/#gce-pd
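
For reference, a minimal default StorageClass backed by GCE Persistent Disks looks something like this (the name and disk type here are just examples):

```yaml
# Example StorageClass for GCE Persistent Disks, marked as the cluster default so
# PVCs that don't set a storageClassName can bind to it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                       # example name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard                    # or pd-ssd
```

Note that on a kubeadm cluster the GCE cloud provider has to be configured for this provisioner to actually create disks; otherwise you would need to pre-create PersistentVolumes or use a different provisioner.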

danielezer commented 4 years ago

@willrof The same issue will happen to you when you try to install any helm chart that requires persistent storage, which is basically every DB/state manager. This is not Artifactory specific...
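
As an aside, for a throwaway test install you can usually sidestep this by disabling persistence, assuming the chart exposes the usual persistence.enabled toggles (check the chart's values.yaml, as the keys can differ between chart versions):

```yaml
# Hypothetical values-test.yaml: disables persistence so no PVCs (and no StorageClass)
# are needed. All data is lost whenever the pods restart, so use it only for testing.
artifactory:
  persistence:
    enabled: false
postgresql:
  persistence:
    enabled: false
```

Pass it to helm install with -f values-test.yaml. For anything beyond a quick test, a real StorageClass is the right fix.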

willrof commented 4 years ago

Yes, you are correct. After setting the default StorageClass it worked! Thanks!

danielezer commented 4 years ago

Glad to hear it worked out!