NetApp / trident

Storage orchestrator for containers
Apache License 2.0

DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json' #205

Closed arsenalzp closed 5 years ago

arsenalzp commented 5 years ago

The first step, building the container, finished successfully. Here are the long debug messages I got from tridentctl. Run the installation:

```
$ ./tridentctl install --trident-image trident:18.10.0-custom -n default -d
```

```
DEBU Initialized logging. logLevel=debug
DEBU Running outside a pod, creating CLI-based client.
DEBU Initialized Kubernetes CLI client. cli=kubectl flavor=k8s namespace=default version=1.11.1+icp
DEBU Validated installation environment. installationNamespace=default kubernetesVersion=
DEBU Deleted Kubernetes configmap. label="app=trident-installer.netapp.io" namespace=default
DEBU Namespace exists. namespace=default
DEBU Deleted Kubernetes object by YAML.
DEBU Deleted installer cluster role binding.
DEBU Deleted Kubernetes object by YAML.
DEBU Deleted installer cluster role.
DEBU Deleted Kubernetes object by YAML.
DEBU Deleted installer service account.
DEBU Created Kubernetes object by YAML.
INFO Created installer service account. serviceaccount=trident-installer
INFO Waiting for object to be created. objectName=clusterRole
DEBU Created Kubernetes object by YAML.
INFO Created installer cluster role. clusterrole=trident-installer
INFO Waiting for object to be created. objectName=clusterRoleBinding
DEBU Created Kubernetes object by YAML.
INFO Created installer cluster role binding. clusterrolebinding=trident-installer
DEBU Created Kubernetes configmap from directory. label="app=trident-installer.netapp.io" name=trident-installer namespace=default path=/root/trident/trident/bin/setup
INFO Created installer configmap. configmap=trident-installer
INFO Waiting for object to be created. objectName=installerPod
DEBU Created Kubernetes object by YAML.
INFO Created installer pod. pod=trident-installer
INFO Waiting for Trident installer pod to start.
DEBU Trident installer pod not yet started, waiting. increment=552.330144ms message="pod not yet started (Pending)"
DEBU Trident installer pod not yet started, waiting. increment=1.080381816s message="pod not yet started (Pending)"
DEBU Trident installer pod not yet started, waiting. increment=1.31013006s message="pod not yet started (Pending)"
DEBU Pod started. phase=Running
INFO Trident installer pod started. namespace=default pod=trident-installer
DEBU Getting logs. cmd="kubectl --namespace=default logs trident-installer -f"
DEBU Initialized logging. logLevel=debug
DEBU Running in a pod, creating API-based client. namespace=default
DEBU Initialized Kubernetes API client. cli=kubectl flavor=k8s namespace=default version=v1.11.1+icp
DEBU Validated installation environment. installationNamespace=default kubernetesVersion=v1.11.1+icp
DEBU Parsed requested volume size. quantity=2Gi
DEBU Dumping RBAC fields. ucpBearerToken= ucpHost= useKubernetesRBAC=true
DEBU Namespace exists. namespace=default
DEBU PVC does not exist. pvc=trident
DEBU PV does not exist. pv=trident
INFO Starting storage driver. backend=/setup/backend.json
DEBU config: {"dataLIF":"10.1.3.138","defaults":{"exportPolicy":"default","snapshotPolicy":"default","snapshotReserve":"10","spaceReserve":"none"},"managementLIF":"10.1.3.138","password":"P@ssw0rd!","storageDriverName":"ontap-nas","svm":"svm_docker","username":"vsadmin","version":1}
DEBU Storage prefix is absent, will use default prefix.
DEBU Parsed commonConfig: {Version:1 StorageDriverName:ontap-nas BackendName: Debug:false DebugTraceFlags:map[] DisableDelete:false StoragePrefixRaw:[] StoragePrefix: SerialNumbers:[] DriverContext: LimitVolumeSize:}
DEBU Initializing storage driver. driver=ontap-nas
DEBU Addresses found from ManagementLIF lookup. addresses="[10.1.3.138]" hostname=10.1.3.138
DEBU Using specified SVM. SVM=svmdocker
DEBU ONTAP API version. Ontapi=1.140
DEBU NodeListSerialNumbers desiredAttributes="desired-attributes: { }\nnode-details-info: node-details-info: { }\ncpu-busytime: nil\ncpu-firmware-release: nil\nenv-failed-fan-count: nil\nenv-failed-fan-message: nil\nenv-failed-power-supply-count: nil\nenv-failed-power-supply-message: nil\nenv-over-temperature: nil\nis-all-flash-optimized: nil\nis-cloud-optimized: nil\nis-diff-svcs: nil\nis-epsilon-node: nil\nis-node-cluster-eligible: nil\nis-node-healthy: nil\nmaximum-aggregate-size: nil\nmaximum-number-of-volumes: nil\nmaximum-volume-size: nil\nnode: nil\nnode-asset-tag: nil\nnode-location: nil\nnode-model: nil\nnode-nvram-id: nil\nnode-owner: nil\nnode-serial-number: \nnode-storage-configuration: nil\nnode-system-id: nil\nnode-uptime: nil\nnode-uuid: nil\nnode-vendor: nil\nnvram-battery-status: nil\nproduct-version: nil\nvm-system-disks: nil\nvmhost-info: nil\n\n" err="" info="node-details-info: { }\ncpu-busytime: nil\ncpu-firmware-release: nil\nenv-failed-fan-count: nil\nenv-failed-fan-message: nil\nenv-failed-power-supply-count: nil\nenv-failed-power-supply-message: nil\nenv-over-temperature: nil\nis-all-flash-optimized: nil\nis-cloud-optimized: nil\nis-diff-svcs: nil\nis-epsilon-node: nil\nis-node-cluster-eligible: nil\nis-node-healthy: nil\nmaximum-aggregate-size: nil\nmaximum-number-of-volumes: nil\nmaximum-volume-size: nil\nnode: nil\nnode-asset-tag: nil\nnode-location: nil\nnode-model: nil\nnode-nvram-id: nil\nnode-owner: nil\nnode-serial-number: \nnode-storage-configuration: nil\nnode-system-id: nil\nnode-uptime: nil\nnode-uuid: nil\nnode-vendor: nil\nnvram-battery-status: nil\nproduct-version: nil\nvm-system-disks: nil\nvmhost-info: nil\n" response="netapp: { }\nversion,attr: \nxmlns,attr: \nresults: results: { }\nstatus,attr: failed\nreason,attr: Unable to find API: system-node-get-iter\nerrno,attr: 13005\nattributes-list: attributes-list: { }\nnode-details-info: []\n\nnext-tag: nil\nnum-records: 0\n\n"
WARN Could not determine controller serial numbers. API status: failed, Reason: Unable to find API: system-node-get-iter, Code: 13005
DEBU Configuration defaults Encryption=false ExportPolicy=default FileSystemType=ext4 LimitAggregateUsage= LimitVolumeSize= NfsMountOptions="-o nfsvers=3" SecurityStyle=unix Size=1G SnapshotDir=false SnapshotPolicy=default SnapshotReserve=10 SpaceReserve=none SplitOnClone=false StoragePrefix=trident UnixPermissions=---rwxrwxrwx
DEBU Data LIFs dataLIFs="[10.1.3.138]"
DEBU Found NAS LIFs. dataLIFs="[10.1.3.138]"
DEBU Addresses found from hostname lookup. addresses="[10.1.3.138]" hostname=10.1.3.138
DEBU Found matching Data LIF. hostNameAddress=10.1.3.138
DEBU Configured EMS heartbeat. intervalHours=24
DEBU Read storage pools assigned to SVM. pools="[node3_aggr1]" svm=svm_docker
DEBU Read aggregate attributes. aggregate=node3_aggr1 mediaType=hdd
DEBU Storage driver initialized. driver=ontap-nas
INFO Storage driver loaded. driver=ontap-nas
INFO Starting Trident installation. namespace=default
DEBU Parsed YAML into unstructured object. group=rbac.authorization.k8s.io kind=ClusterRoleBinding version=v1
DEBU Found API resource. group=rbac.authorization.k8s.io kind=ClusterRoleBinding resource=clusterrolebindings version=v1
DEBU Deleting object. kind=ClusterRoleBinding name=trident namespace=default
DEBU Deleted object by YAML. name=trident
DEBU Deleted cluster role binding.
DEBU Parsed YAML into unstructured object. group=rbac.authorization.k8s.io kind=ClusterRole version=v1
DEBU Found API resource. group=rbac.authorization.k8s.io kind=ClusterRole resource=clusterroles version=v1
DEBU Deleting object. kind=ClusterRole name=trident namespace=default
DEBU Deleted object by YAML. name=trident
DEBU Deleted cluster role.
DEBU Parsed YAML into unstructured object. group= kind=ServiceAccount version=v1
DEBU Found API resource. group= kind=ServiceAccount resource=serviceaccounts version=v1
DEBU Deleting object. kind=ServiceAccount name=trident namespace=default
DEBU Deleted object by YAML. name=trident
DEBU Deleted service account.
DEBU Parsed YAML into unstructured object. group= kind=ServiceAccount version=v1
DEBU Found API resource. group= kind=ServiceAccount resource=serviceaccounts version=v1
DEBU Creating object. kind=ServiceAccount name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created service account.
DEBU Parsed YAML into unstructured object. group=rbac.authorization.k8s.io kind=ClusterRole version=v1
DEBU Found API resource. group=rbac.authorization.k8s.io kind=ClusterRole resource=clusterroles version=v1
DEBU Creating object. kind=ClusterRole name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created cluster role.
DEBU Parsed YAML into unstructured object. group=rbac.authorization.k8s.io kind=ClusterRoleBinding version=v1
DEBU Found API resource. group=rbac.authorization.k8s.io kind=ClusterRoleBinding resource=clusterrolebindings version=v1
DEBU Creating object. kind=ClusterRoleBinding name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created cluster role binding.
DEBU Parsed YAML into unstructured object. group= kind=PersistentVolumeClaim version=v1
DEBU Found API resource. group= kind=PersistentVolumeClaim resource=persistentvolumeclaims version=v1
DEBU Creating object. kind=PersistentVolumeClaim name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created PVC.
DEBU Attempting volume create. backend=ontapnas_10.1.3.138 size=2147483648 storage_class= storage_pool=node3_aggr1 volume=
DEBU Checking aggregate limits aggregate=node3_aggr1 limitAggregateUsage= requestedSize=2.147483648e+09
DEBU No limits specified
DEBU Limits limitVolumeSize=
DEBU No limits specified, not limiting volume size
DEBU Creating Flexvol. aggregate=node3_aggr1 encryption=false exportPolicy=default name=trident_trident securityStyle=unix size=2147483648 snapshotDir=false snapshotPolicy=default snapshotReserve=10 spaceReserve=none unixPermissions=---rwxrwxrwx
DEBU SVM root volume has no load-sharing mirrors. rootVolume=svm_docker_root
DEBU Parsed YAML into unstructured object. group= kind=PersistentVolume version=v1
DEBU Found API resource. group= kind=PersistentVolume resource=persistentvolumes version=v1
DEBU Creating object. kind=PersistentVolume name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created PV. pv=trident
INFO Waiting for PVC to be bound. pvc=trident
DEBU PVC not yet bound, waiting. increment=552.330144ms pvc=trident
DEBU PVC not yet bound, waiting. increment=1.080381816s pvc=trident
DEBU PVC not yet bound, waiting. increment=1.31013006s pvc=trident
DEBU PVC not yet bound, waiting. increment=1.582392691s pvc=trident
DEBU PVC not yet bound, waiting. increment=2.340488664s pvc=trident
DEBU Logged EMS message. driver=ontap-nas
DEBU PVC not yet bound, waiting. increment=4.506218855s pvc=trident
DEBU Parsed YAML into unstructured object. group=extensions kind=Deployment version=v1beta1
DEBU Found API resource. group=extensions kind=Deployment resource=deployments version=v1beta1
DEBU Creating object. kind=Deployment name=trident namespace=default
DEBU Created object by YAML. name=trident
INFO Created Trident deployment.
INFO Waiting for Trident pod to start.
DEBU Trident pod not yet running, waiting. increment=282.818509ms
DEBU Trident pod not yet running, waiting. increment=492.389441ms
DEBU Trident pod not yet running, waiting. increment=671.590708ms
DEBU Trident pod not yet running, waiting. increment=1.351538765s
DEBU Trident pod not yet running, waiting. increment=2.569756966s
INFO Trident pod started. namespace=default pod=trident-546cc97b45-vzkx4
INFO Waiting for Trident REST interface.
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=656.819981ms
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=535.697904ms
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=990.739338ms
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=1.380473169s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=2.45250242s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=2.973082793s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=4.516962922s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=10.07288354s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=9.20786441s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=13.516432903s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=24.821091939s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=46.305312214s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
DEBU REST interface not yet up, waiting. increment=1m21.749486247s
DEBU Invoking tunneled command: 'tridentctl -s 127.0.0.1:8000 version -o json'
ERRO Trident REST interface was not available after 180.00 seconds.
FATA Install failed; unable to upgrade connection: container not found ("trident-main"); unable to upgrade connection: container not found ("trident-main"); use 'tridentctl logs' to learn more. Resolve the issue; use 'tridentctl uninstall' to clean up; and try again.
```
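
For readability, here is the backend config embedded in the log above (the `DEBU config:` entry, i.e. the contents of /setup/backend.json), pretty-printed:

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.1.3.138",
  "dataLIF": "10.1.3.138",
  "svm": "svm_docker",
  "username": "vsadmin",
  "password": "P@ssw0rd!",
  "defaults": {
    "exportPolicy": "default",
    "snapshotPolicy": "default",
    "snapshotReserve": "10",
    "spaceReserve": "none"
  }
}
```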

```
$ ./tridentctl logs
```

Trident log:

```
time="2019-01-11T15:41:34Z" level=debug msg=Environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
time="2019-01-11T15:41:34Z" level=debug msg=Environment HOSTNAME=trident-546cc97b45-vzkx4
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_PORT=443
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_ADDR=192.168.100.1
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_SERVICE_HOST=192.168.100.1
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_SERVICE_PORT=443
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_SERVICE_PORT_HTTPS=443
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_PORT="tcp://192.168.100.1:443"
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP="tcp://192.168.100.1:443"
time="2019-01-11T15:41:34Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_PROTO=tcp
time="2019-01-11T15:41:34Z" level=debug msg=Environment PORT=8000
time="2019-01-11T15:41:34Z" level=debug msg=Environment BIN=trident_orchestrator
time="2019-01-11T15:41:34Z" level=debug msg=Environment CLI_BIN=tridentctl
time="2019-01-11T15:41:34Z" level=debug msg=Environment ETCDV3="http://localhost:8001"
time="2019-01-11T15:41:34Z" level=debug msg=Environment K8S=
time="2019-01-11T15:41:34Z" level=debug msg=Environment TRIDENT_IP=localhost
time="2019-01-11T15:41:34Z" level=debug msg=Environment HOME=/root
time="2019-01-11T15:41:34Z" level=info msg="Running Trident storage orchestrator." binary=/usr/local/bin/trident_orchestrator build_time="Fri Jan 11 16:31:05 EET 2019" version=18.10.0-custom+0f168eef71dd978df90130fb19a83f35add54c60
time="2019-01-11T15:41:34Z" level=debug msg=Flag name=debug value=true
time="2019-01-11T15:41:34Z" level=debug msg=Flag name=etcd_v3 value="http://127.0.0.1:8001"
time="2019-01-11T15:41:34Z" level=debug msg=Flag name=k8s_pod value=true
time="2019-01-11T15:41:34Z" level=debug msg="Trident is configured with an etcdv3 client without TLS."
```

The Dockerfile is attached: Dockerfile
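
The attachment itself isn't reproduced here, but from the environment dump in the log above, a rough sketch of what such a custom image might set up would be (the base image and file layout are assumptions, not the attached file):

```dockerfile
# Hypothetical reconstruction from the env vars in the log; NOT the attached Dockerfile.
FROM alpine:3.8                      # assumed base image for the ppc64le build

ENV BIN=trident_orchestrator \
    CLI_BIN=tridentctl \
    PORT=8000 \
    ETCDV3=http://localhost:8001 \
    K8S="" \
    TRIDENT_IP=localhost

# Copy the locally built binaries into the image.
COPY trident_orchestrator tridentctl /usr/local/bin/

CMD ["/usr/local/bin/trident_orchestrator"]
```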

kangarlou commented 5 years ago

I suspect the etcd container isn't coming up properly. Try `tridentctl logs -l etcd`.
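
For reference, the full invocations would look like this (a sketch, assuming the default installer namespace used above and tridentctl 18.10's `-l` log selector):

```
$ ./tridentctl logs -l etcd -n default      # logs from the etcd container
$ ./tridentctl logs -l trident -n default   # logs from the trident-main container
```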

kangarlou commented 5 years ago

Are you using an etcd image for ppc?

arsenalzp commented 5 years ago

@kangarlou, sure, I use the ppc packages provided by IBM, called IBM Cloud Private CE. But their etcd uses port 4001, so I have to change it in the Dockerfile and adjust the Trident configuration to force it to use the external etcd. I'll try to fix it tomorrow and I hope that helps.

arsenalzp commented 5 years ago

@kangarlou, unfortunately I'm unable to deploy Trident with an external etcd:

```
$ kubectl logs trident-844c7d79c7-8kk8v -n trident
time="2019-01-12T08:10:37Z" level=debug msg=Environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
time="2019-01-12T08:10:37Z" level=debug msg=Environment HOSTNAME=trident-844c7d79c7-8kk8v
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_PORT="tcp://192.168.100.1:443"
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP="tcp://192.168.100.1:443"
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_PROTO=tcp
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_PORT=443
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_PORT_443_TCP_ADDR=192.168.100.1
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_SERVICE_HOST=192.168.100.1
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_SERVICE_PORT=443
time="2019-01-12T08:10:37Z" level=debug msg=Environment KUBERNETES_SERVICE_PORT_HTTPS=443
time="2019-01-12T08:10:37Z" level=debug msg=Environment PORT=8000
time="2019-01-12T08:10:37Z" level=debug msg=Environment BIN=trident_orchestrator
time="2019-01-12T08:10:37Z" level=debug msg=Environment CLI_BIN=tridentctl
time="2019-01-12T08:10:37Z" level=debug msg=Environment ETCDV3="http://localhost:8001"
time="2019-01-12T08:10:37Z" level=debug msg=Environment K8S=
time="2019-01-12T08:10:37Z" level=debug msg=Environment TRIDENT_IP=localhost
time="2019-01-12T08:10:37Z" level=debug msg=Environment HOME=/root
time="2019-01-12T08:10:37Z" level=info msg="Running Trident storage orchestrator." binary=/usr/local/bin/trident_orchestrator build_time="Сб янв 12 09:42:32 EET 2019" version=18.10.0-custom+0f168eef71dd978df90130fb19a83f35add54c60
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=debug value=true
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=etcd_v3 value="https://mycluster.icp:4001"
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=etcd_v3_cacert value=/etc/cfc/conf/etcd/ca.pem
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=etcd_v3_cert value=/etc/cfc/conf/etcd/server.pem
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=etcd_v3_key value=/etc/cfc/conf/etcd/server-key.pem
time="2019-01-12T08:10:37Z" level=debug msg=Flag name=k8s_pod value=true
time="2019-01-12T08:10:37Z" level=debug msg="Trident is configured with an etcdv3 client without TLS."
time="2019-01-12T08:10:37Z" level=warning msg="Trident's etcdv3 client should be configured with TLS!"
time="2019-01-12T08:11:07Z" level=fatal msg="Unable to create the etcd V3 client. Unavailable etcd cluster"
```

Trying to get the health status:

```
$ etcdctl --debug --no-sync --endpoints https://mycluster.icp:4001 --ca-file /etc/cfc/conf/etcd/ca.pem --cert-file /etc/cfc/conf/etcd/server.pem --key-file /etc/cfc/conf/etcd/server-key.pem cluster-health
Cluster-Endpoints: https://mycluster.icp:4001
cURL Command: curl -X GET https://mycluster.icp:4001/v2/members
member b0bea0af5b3040e is healthy: got healthy result from https://10.1.2.125:4001
cluster is healthy
```
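
One detail worth noting: `cluster-health` (and the `/v2/members` cURL it issues) exercises the etcd v2 API, whereas Trident connects with an etcd v3 client, so a healthy v2 endpoint does not guarantee v3 access. A v3-level check would look something like this (a sketch, assuming an etcdctl build with v3 support on the host):

```
$ ETCDCTL_API=3 etcdctl \
    --endpoints https://mycluster.icp:4001 \
    --cacert /etc/cfc/conf/etcd/ca.pem \
    --cert /etc/cfc/conf/etcd/server.pem \
    --key /etc/cfc/conf/etcd/server-key.pem \
    endpoint health
```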

Here is my trident-deployment-external-etcd.yaml:

```
$ cat trident-deployment-external-etcd.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: trident
  labels:
    app: trident.netapp.io
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: trident.netapp.io
    spec:
      serviceAccount: trident
      containers:
      - name: trident-main
        image: trident:18.10.0-custom
        command:
        - /usr/local/bin/trident_orchestrator
        args:
        - -etcd_v3
        - https://mycluster.icp:4001
        - -etcd_v3_cacert
        - /etc/cfc/conf/etcd/ca.pem
        - -etcd_v3_cert
        - /etc/cfc/conf/etcd/server.pem
        - -etcd_v3_key
        - /etc/cfc/conf/etcd/server-key.pem
        - -
```
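
Note that the args reference certificate files under /etc/cfc/conf/etcd/, so those files must also be mounted into the pod. A minimal sketch of the missing pieces, assuming the ICP certificates live at that path on the node (the volume name `etcd-certs` is illustrative):

```yaml
        volumeMounts:
        - name: etcd-certs              # illustrative name
          mountPath: /etc/cfc/conf/etcd
          readOnly: true
      volumes:
      - name: etcd-certs
        hostPath:
          path: /etc/cfc/conf/etcd     # assumes the certs exist on the node
```
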
arsenalzp commented 5 years ago

Success!

```
$ bin/tridentctl -n trident get backend
+---------------------+----------------+--------+---------+
|        NAME         | STORAGE DRIVER | ONLINE | VOLUMES |
+---------------------+----------------+--------+---------+
| ontapnas_10.1.3.138 | ontap-nas      | true   | 0       |
+---------------------+----------------+--------+---------+
```

But I have a problem with storage classes:

```
$ kubectl create -f storage-class-basic.yaml -n trident
error: error validating "storage-class-basic.yaml": error validating data: ValidationError(StorageClass): unknown field "backendType" in io.k8s.api.storage.v1.StorageClass; if you choose to ignore these errors, turn validation off with --validate=false
```

Could somebody help me, please?

arsenalzp commented 5 years ago

After reading the docs more closely, I found the problem and fixed it. We can close this issue.
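
For anyone hitting the same validation error: `backendType` is not a field of the Kubernetes StorageClass object itself; Trident expects it under `parameters:` together with the `netapp.io/trident` provisioner. A sketch along the lines of the documented basic sample (the exact file isn't reproduced in this issue):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: basic
provisioner: netapp.io/trident
parameters:
  backendType: "ontap-nas"
```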

```
$ kubectl exec -it task-pv-pod -n trident -- df -h /usr/share/nginx/html
Filesystem                               Size  Used Avail Use% Mounted on
10.1.3.138:/trident_trident_basic_de5a6  922M  192K  922M   1% /usr/share/nginx/html
```

```
$ kubectl get pvc -aw -n trident
Flag --show-all has been deprecated, will be removed in an upcoming release
NAME      STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
basic     Bound    trident-basic-de5a6   1Gi        RWO            basic          9m
trident   Bound    trident               2Gi        RWO                           15h
```

kangarlou commented 5 years ago

Great to see you figured it out!