I trust the inbound and outbound protocols and ports are correctly set on AWS.
@ktsakalozos Yes, I opened a wide range of ports for my home IPs. I am able to connect to things like an Apache server and an nginx server.
@paravatha: The dashboard should be available on port 80. When you're SSHed into the AWS instance, can you successfully get a response with curl localhost:80? If so, after setting up security groups to allow that, can you access the dashboard via the public IP?
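For reference, the security-group side can also be scripted. A sketch with the AWS CLI, where the group ID and source CIDR are placeholders (not from this thread):
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 203.0.113.5/32
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.5/32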
@knkski I'm getting this error:
$ curl localhost:80
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
Do you want me to perform the steps mentioned at https://github.com/ubuntu/microk8s/issues/713#issuecomment-612529637?
The ingress seems fine to me
$ kubectl get all -n ingress
I0502 00:41:52.339269 28473 request.go:621] Throttling request took 1.193625589s, request: GET:https://127.0.0.1:16443/apis/kubeflow.org/v1beta1?timeout=32s
I0502 00:42:03.139260 28473 request.go:621] Throttling request took 1.193870668s, request: GET:https://127.0.0.1:16443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
I0502 00:42:13.539146 28473 request.go:621] Throttling request took 1.192369381s, request: GET:https://127.0.0.1:16443/apis/dex.coreos.com/v1?timeout=32s
I0502 00:42:23.939196 28473 request.go:621] Throttling request took 1.193673045s, request: GET:https://127.0.0.1:16443/apis/kubeflow.org/v1alpha1?timeout=32s
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-microk8s-controller-pqsfk 1/1 Running 0 39m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 39m
$ sudo microk8s.status --wait-ready
microk8s is running
addons:
dashboard: enabled
dns: enabled
ingress: enabled
istio: enabled
kubeflow: enabled
metallb: enabled
prometheus: enabled
storage: enabled
cilium: disabled
fluentd: disabled
gpu: disabled
helm: disabled
helm3: disabled
host-access: disabled
jaeger: disabled
knative: disabled
linkerd: disabled
metrics-server: disabled
rbac: disabled
registry: disabled
Will enabling juju help?
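Aside: a 404 from openresty means the ingress controller itself answered, but no ingress rule matched the request's Host header. Listing the configured ingresses shows which hosts it expects, e.g.:
$ microk8s.kubectl get ingress -A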
This also reproduces in my environment (microk8s 1378). The behavior is also strange: requests to localhost:443 show up in the nginx-ingress logs, but for port 80 I cannot find which component logs the request.
$ curl localhost:80
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
$ curl localhost:443
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
$ kubectl logs nginx-ingress-microk8s-controller-w6nnt -n ingress
172.31.29.233 - [172.31.29.233] - - [02/May/2020:10:06:49 +0000] "GET / HTTP/1.1" 400 261 "-" "curl/7.65.3" 81 0.000 [] [] - - - - d14bcc775fc58e6e50c4a20114032acc
127.0.0.1 - [127.0.0.1] - - [02/May/2020:11:01:38 +0000] "GET / HTTP/1.1" 400 261 "-" "curl/7.65.3" 77 0.000 [] [] - - - - 5754b77f3d2f3b55f01dbf8c629522c2
127.0.0.1 - [127.0.0.1] - - [02/May/2020:11:06:00 +0000] "GET / HTTP/1.1" 400 261 "-" "curl/7.65.3" 77 0.000 [] [] - - - - cf6823db4c3eeb78c8c79fa6f5755b87
127.0.0.1 - [127.0.0.1] - - [02/May/2020:11:07:56 +0000] "GET / HTTP/1.1" 400 261 "-" "curl/7.65.3" 77 0.000 [] [] - - - - 04dd2da061eca666bb19f1a3546562ee
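Aside: for the port-80 question, a generic way to see which process is actually bound to ports 80/443 on the host is:
$ sudo ss -tlnp | grep -E ':80 |:443 '
Depending on how the ingress daemonset's hostPorts are wired, this may show nginx itself or a CNI port-forwarding process.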
@sakaia: You're getting that error about HTTP request to an HTTPS port because you'll need to specify HTTPS in the curl command:
curl https://localhost:443
If you do that, do you get a response?
@paravatha, same thing for you. Apologies about the bad command earlier; you'll also want to try curl https://localhost:443 instead of curl localhost:80. If you do that, do you get a response?
Still 404:
$ curl https://localhost:443 -k
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
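Aside: the -k is needed because the controller serves a self-signed placeholder certificate. One way to confirm which certificate is being presented:
$ curl -vk https://localhost:443 2>&1 | grep -iE 'subject:|issuer:'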
Adding the kubectl describe and logs output for nginx-ingress-microk8s-controller. The private IP is 172.31.24.135.
$ kubectl describe pods nginx-ingress-microk8s-controller-fqkfz -n ingress
Name: nginx-ingress-microk8s-controller-fqkfz
Namespace: ingress
Priority: 0
Node: ip-172-31-24-135/172.31.24.135
Start Time: Tue, 05 May 2020 12:48:06 +0000
Labels: controller-revision-hash=59cb5dd586
name=nginx-ingress-microk8s
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 172.31.24.135
IPs:
IP: 172.31.24.135
Controlled By: DaemonSet/nginx-ingress-microk8s-controller
Containers:
nginx-ingress-microk8s:
Container ID: containerd://8a5fb91e96ce223bd40ab08733bf03e995b36df85a748c79a1209acd17a5bd37
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1
Image ID: sha256:2b8ed1f2046d4b37c18cca2ecc4f435b6618d2d198c0c8bf617954e863cc5832
Ports: 80/TCP, 443/TCP
Host Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
--publish-status-address=127.0.0.1
State: Running
Started: Tue, 05 May 2020 12:48:44 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-microk8s-controller-fqkfz (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-microk8s-serviceaccount-token-ltns6 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nginx-ingress-microk8s-serviceaccount-token-ltns6:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-microk8s-serviceaccount-token-ltns6
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 36m default-scheduler Successfully assigned ingress/nginx-ingress-microk8s-controller-fqkfz to ip-172-31-24-135
Normal Pulling 36m kubelet, ip-172-31-24-135 Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1"
Normal Pulled 36m kubelet, ip-172-31-24-135 Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1"
Normal Created 36m kubelet, ip-172-31-24-135 Created container nginx-ingress-microk8s
Normal Started 36m kubelet, ip-172-31-24-135 Started container nginx-ingress-microk8s
$ kubectl logs nginx-ingress-microk8s-controller-fqkfz -n ingress
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.25.1
Build: git-5179893a9
Repository: https://github.com/kubernetes/ingress-nginx/
nginx version: openresty/1.15.8.1
-------------------------------------------------------------------------------
W0505 12:48:44.564909 7 flags.go:221] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0505 12:48:44.571340 7 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0505 12:48:44.571776 7 main.go:183] Creating API client for https://10.152.183.1:443
I0505 12:48:44.579223 7 main.go:227] Running in Kubernetes cluster version v1.18+ (v1.18.2-41+b5cdb79a4060a3) - git (clean) commit b5cdb79a4060a307d0c8a56a128aadc0da31c5a2 - platform linux/amd64
I0505 12:48:44.819799 7 main.go:102] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0505 12:48:44.843133 7 nginx.go:274] Starting NGINX Ingress controller
I0505 12:48:44.877734 7 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-load-balancer-microk8s-conf", UID:"7df99111-ff03-4607-864f-02fa0f1d3970", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-load-balancer-microk8s-conf
I0505 12:48:46.043853 7 nginx.go:318] Starting NGINX process
I0505 12:48:46.043969 7 leaderelection.go:235] attempting to acquire leader lease ingress/ingress-controller-leader-nginx...
I0505 12:48:46.044734 7 controller.go:133] Configuration changes detected, backend reload required.
I0505 12:48:46.067287 7 leaderelection.go:245] successfully acquired lease ingress/ingress-controller-leader-nginx
I0505 12:48:46.067496 7 status.go:86] new leader elected: nginx-ingress-microk8s-controller-fqkfz
I0505 12:48:46.138084 7 controller.go:149] Backend successfully reloaded.
I0505 12:48:46.138138 7 controller.go:158] Initial sync, sleeping for 1 second.
I0505 13:13:35.871339 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubeflow", Name:"ambassador", UID:"f50471dd-e4d3-4168-a480-52cdb16535c1", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8579", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kubeflow/ambassador
I0505 13:13:35.873453 7 controller.go:133] Configuration changes detected, backend reload required.
I0505 13:13:36.032997 7 controller.go:149] Backend successfully reloaded.
I0505 13:13:46.102534 7 status.go:296] updating Ingress kubeflow/ambassador status from [] to [{127.0.0.1 }]
I0505 13:13:46.114432 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubeflow", Name:"ambassador", UID:"f50471dd-e4d3-4168-a480-52cdb16535c1", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8673", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress kubeflow/ambassador
$ kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
controller-uk8s controller-service ClusterIP 10.152.183.35 <none> 17070/TCP 43m
default kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 44m
kube-system dashboard-metrics-scraper ClusterIP 10.152.183.59 <none> 8000/TCP 43m
kube-system heapster ClusterIP 10.152.183.173 <none> 80/TCP 43m
kube-system kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 43m
kube-system kubernetes-dashboard ClusterIP 10.152.183.102 <none> 443/TCP 43m
kube-system monitoring-grafana ClusterIP 10.152.183.84 <none> 80/TCP 43m
kube-system monitoring-influxdb ClusterIP 10.152.183.64 <none> 8083/TCP,8086/TCP 43m
kubeflow ambassador LoadBalancer 10.152.183.160 10.64.140.43 80:30223/TCP 41m
kubeflow ambassador-operator ClusterIP 10.152.183.240 <none> 30666/TCP 41m
kubeflow argo-controller-operator ClusterIP 10.152.183.109 <none> 30666/TCP 41m
kubeflow argo-ui ClusterIP 10.152.183.5 <none> 8001/TCP 40m
kubeflow argo-ui-operator ClusterIP 10.152.183.106 <none> 30666/TCP 41m
kubeflow cert-manager-controller ClusterIP 10.152.183.149 <none> 9402/TCP 19m
kubeflow cert-manager-controller-operator ClusterIP 10.152.183.16 <none> 30666/TCP 40m
kubeflow cert-manager-webhook ClusterIP 10.152.183.32 <none> 6443/TCP 18m
kubeflow cert-manager-webhook-operator ClusterIP 10.152.183.58 <none> 30666/TCP 40m
kubeflow dex-auth ClusterIP 10.152.183.40 <none> 5556/TCP 34m
kubeflow dex-auth-operator ClusterIP 10.152.183.196 <none> 30666/TCP 39m
kubeflow jupyter-controller-operator ClusterIP 10.152.183.73 <none> 30666/TCP 39m
kubeflow jupyter-web ClusterIP 10.152.183.83 <none> 5000/TCP 37m
kubeflow jupyter-web-operator ClusterIP 10.152.183.46 <none> 30666/TCP 39m
kubeflow katib-controller ClusterIP 10.152.183.60 <none> 443/TCP 39m
kubeflow katib-controller-operator ClusterIP 10.152.183.74 <none> 30666/TCP 39m
kubeflow katib-db ClusterIP 10.152.183.100 <none> 3306/TCP 38m
kubeflow katib-db-endpoints ClusterIP None <none> <none> 38m
kubeflow katib-db-operator ClusterIP 10.152.183.97 <none> 30666/TCP 39m
kubeflow katib-manager ClusterIP 10.152.183.198 <none> 6789/TCP 37m
kubeflow katib-manager-operator ClusterIP 10.152.183.34 <none> 30666/TCP 39m
kubeflow katib-ui ClusterIP 10.152.183.19 <none> 8000/TCP 39m
kubeflow katib-ui-operator ClusterIP 10.152.183.232 <none> 30666/TCP 39m
kubeflow kubeflow-dashboard ClusterIP 10.152.183.191 <none> 8082/TCP 37m
kubeflow kubeflow-dashboard-operator ClusterIP 10.152.183.147 <none> 30666/TCP 39m
kubeflow kubeflow-profiles ClusterIP 10.152.183.144 <none> 8081/TCP 39m
kubeflow kubeflow-profiles-operator ClusterIP 10.152.183.7 <none> 30666/TCP 39m
kubeflow metacontroller ClusterIP 10.152.183.197 <none> 9999/TCP 38m
kubeflow metacontroller-operator ClusterIP 10.152.183.17 <none> 30666/TCP 39m
kubeflow metadata-api ClusterIP 10.152.183.56 <none> 8080/TCP 36m
kubeflow metadata-api-operator ClusterIP 10.152.183.219 <none> 30666/TCP 39m
kubeflow metadata-db ClusterIP 10.152.183.48 <none> 3306/TCP 38m
kubeflow metadata-db-endpoints ClusterIP None <none> <none> 38m
kubeflow metadata-db-operator ClusterIP 10.152.183.26 <none> 30666/TCP 39m
kubeflow metadata-envoy ClusterIP 10.152.183.135 <none> 9090/TCP,9091/TCP 37m
kubeflow metadata-envoy-operator ClusterIP 10.152.183.202 <none> 30666/TCP 39m
kubeflow metadata-grpc ClusterIP 10.152.183.104 <none> 8080/TCP 36m
kubeflow metadata-grpc-operator ClusterIP 10.152.183.30 <none> 30666/TCP 39m
kubeflow metadata-ui ClusterIP 10.152.183.253 <none> 3000/TCP 36m
kubeflow metadata-ui-operator ClusterIP 10.152.183.54 <none> 30666/TCP 39m
kubeflow minio ClusterIP 10.152.183.226 <none> 9000/TCP 39m
kubeflow minio-endpoints ClusterIP None <none> <none> 39m
kubeflow minio-operator ClusterIP 10.152.183.14 <none> 30666/TCP 39m
kubeflow modeldb-backend ClusterIP 10.152.183.110 <none> 8085/TCP,8080/TCP 34m
kubeflow modeldb-backend-operator ClusterIP 10.152.183.85 <none> 30666/TCP 39m
kubeflow modeldb-db ClusterIP 10.152.183.195 <none> 3306/TCP 39m
kubeflow modeldb-db-endpoints ClusterIP None <none> <none> 38m
kubeflow modeldb-db-operator ClusterIP 10.152.183.28 <none> 30666/TCP 39m
kubeflow modeldb-store ClusterIP 10.152.183.243 <none> 8086/TCP 35m
kubeflow modeldb-store-operator ClusterIP 10.152.183.199 <none> 30666/TCP 36m
kubeflow modeldb-ui ClusterIP 10.152.183.145 <none> 3000/TCP 34m
kubeflow modeldb-ui-operator ClusterIP 10.152.183.204 <none> 30666/TCP 36m
kubeflow oidc-gatekeeper ClusterIP 10.152.183.227 <none> 8080/TCP 35m
kubeflow oidc-gatekeeper-operator ClusterIP 10.152.183.63 <none> 30666/TCP 36m
kubeflow pipelines-api ClusterIP 10.152.183.185 <none> 8887/TCP,8888/TCP 33m
kubeflow pipelines-api-operator ClusterIP 10.152.183.164 <none> 30666/TCP 37m
kubeflow pipelines-db ClusterIP 10.152.183.55 <none> 3306/TCP 37m
kubeflow pipelines-db-endpoints ClusterIP None <none> <none> 37m
kubeflow pipelines-db-operator ClusterIP 10.152.183.12 <none> 30666/TCP 38m
kubeflow pipelines-persistence-operator ClusterIP 10.152.183.75 <none> 30666/TCP 37m
kubeflow pipelines-scheduledworkflow-operator ClusterIP 10.152.183.115 <none> 30666/TCP 37m
kubeflow pipelines-ui ClusterIP 10.152.183.205 <none> 3000/TCP 34m
kubeflow pipelines-ui-operator ClusterIP 10.152.183.203 <none> 30666/TCP 37m
kubeflow pipelines-viewer-operator ClusterIP 10.152.183.183 <none> 30666/TCP 37m
kubeflow pipelines-visualization ClusterIP 10.152.183.239 <none> 8888/TCP 35m
kubeflow pipelines-visualization-operator ClusterIP 10.152.183.24 <none> 30666/TCP 35m
kubeflow pytorch-operator-operator ClusterIP 10.152.183.31 <none> 30666/TCP 38m
kubeflow seldon-core ClusterIP 10.152.183.222 <none> 8080/TCP,9876/TCP 36m
kubeflow seldon-core-operator ClusterIP 10.152.183.43 <none> 30666/TCP 37m
kubeflow tf-job-operator-operator ClusterIP 10.152.183.235 <none> 30666/TCP 38m
@sakaia Are you trying this as per the instructions here? https://github.com/juju-solutions/bundle-kubeflow
No, but thank you for the suggestion. Following it, I tried installing with the commands below, but the status is still installing after 20 minutes.
sudo snap install juju --classic
sudo snap install juju-wait --classic
sudo snap install juju-helpers --classic --edge
git clone https://github.com/juju-solutions/bundle-kubeflow.git
cd bundle-kubeflow
sudo snap install microk8s --classic
sudo usermod -aG microk8s $USER
newgrp microk8s
sudo snap alias microk8s.kubectl kubectl
python3 scripts/cli.py microk8s setup --controller uk8s
python3 scripts/cli.py deploy-to uk8s
After 20 minutes, the following output keeps repeating:
DEBUG:root:cert-manager-controller/0 workload status is maintenance since 2020-05-06 11:01:12+00:00
DEBUG:root:cert-manager-webhook/0 workload status is maintenance since 2020-05-06 11:24:19+00:00
DEBUG:root:cert-manager-webhook/0 juju agent status is executing since 2020-05-06 11:24:16+00:00
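Aside: a couple of generic ways to see which charm is stuck. The controller name uk8s comes from the setup command above; the model name kubeflow is an assumption:
$ juju status -m uk8s:kubeflow
$ microk8s.kubectl get pods -n kubeflow -w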
@sakaia, you have to try the cloud (aws) option, not the uk8s one: https://github.com/juju-solutions/bundle-kubeflow/issues/197
Thank you for your suggestion.
Does it use minio for the secret key and access key? I am asking about the access key setting.
@knkski
I ran this and I am getting a 302 redirect:
$ curl -vv http://10.64.140.43.xip.io
* Rebuilt URL to: http://10.64.140.43.xip.io/
* Trying 10.64.140.43...
* TCP_NODELAY set
* Connected to 10.64.140.43.xip.io (10.64.140.43) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.64.140.43.xip.io
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 302 Found
< content-length: 333
< content-type: text/plain
< location: http://10.64.140.43.xip.io:80/dex/auth?client_id=authservice-oidc&redirect_uri=http%3A%2F%2F10.64.140.43.xip.io%3A80%2Foidc%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTU4OTA1NDgyNHxFd3dBRUhvd1ZHaGlXR1pSTm5CWlUxRXpiakk9fHTklXX8U-EqagWp2xfPFbRNG5qashKbiyKZfUOSalqF
< date: Sat, 09 May 2020 20:07:04 GMT
< server: envoy
<
Found.
* Connection #0 to host 10.64.140.43.xip.io left intact
@knkski @sakaia Just a thought: since we are updating the public IP in the ambassador ingress with
microk8s.kubectl edit -n kubeflow ingress/ambassador  # set spec.rules[0].host to the public IP of the VM, with xip.io appended, e.g. 1.2.3.4.xip.io
do we have to update the same for these 2?
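For reference, the same host edit can be scripted instead of using the interactive editor. A sketch, where 1.2.3.4.xip.io is a placeholder:
$ microk8s.kubectl patch ingress ambassador -n kubeflow --type=json -p '[{"op":"replace","path":"/spec/rules/0/host","value":"1.2.3.4.xip.io"}]'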
@paravatha: No, ambassador is sitting in front of those two services and will route requests to them as necessary.
In regards to the 302 Found, it looks like you are properly accessing the dashboard, as that will redirect to a login page. Are you able to access that URL with a browser and log in?
@sakaia: It looks like the service that got created for you in microk8s is 10.64.140.43. You'll want to go to http://10.64.140.43.xip.io instead of http://localhost to access Kubeflow.
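On the instance itself, that address can be read straight off the service object. A sketch, assuming the MetalLB-assigned LoadBalancer IP shown in the services listing above:
$ microk8s.kubectl get svc ambassador -n kubeflow -o jsonpath='{.status.loadBalancer.ingress[0].ip}'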
Any updates on this?
I'm also getting this output on curl.
$ curl -vv http://10.64.140.43.xip.io
* Rebuilt URL to: http://10.64.140.43.xip.io/
* Trying 10.64.140.43...
* TCP_NODELAY set
* Connected to 10.64.140.43.xip.io (10.64.140.43) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.64.140.43.xip.io
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 302 Found
< content-length: 333
< content-type: text/plain
< location: http://10.64.140.43.xip.io:80/dex/auth?client_id=authservice-oidc&redirect_uri=http%3A%2F%2F10.64.140.43.xip.io%3A80%2Foidc%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTU5MzY4NTY2NXxFd3dBRUhoNU5XNWhZMjVLUWxOMVJuQlBhSGs9fKBZyTDuJdZppczMu_tFEe7DbWT_380CI8PmC-UUgfSd
< date: Thu, 02 Jul 2020 10:27:45 GMT
< server: envoy
<
<a href="http://10.64.140.43.xip.io:80/dex/auth?client_id=authservice-oidc&redirect_uri=http%3A%2F%2F10.64.140.43.xip.io%3A80%2Foidc%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTU5MzY4NTY2NXxFd3dBRUhoNU5XNWhZMjVLUWxOMVJuQlBhSGs9fKBZyTDuJdZppczMu_tFEe7DbWT_380CI8PmC-UUgfSd">Found</a>.
* Connection #0 to host 10.64.140.43.xip.io left intact
And it's not accessible in the browser.
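Aside: the 302 itself means ambassador is answering. The browser problem is that 10.64.140.43 is a MetalLB address on the VM's internal network, so it is not routable from a laptop; off the instance, the same request simply times out:
$ curl -m 5 http://10.64.140.43.xip.io
The NodePort and tunneling approaches below work around this.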
I was getting a similar error, but I can confirm that the following steps work, and I can access all the application URLs from within the dashboard. I just had to combine the answers at both of these links: https://github.com/kubeflow/manifests/issues/974 https://stackoverflow.com/questions/60973804/microk8s-broken-k8s-dashboard-and-kubeflow-dashboard
Ping your AWS EC2 instance and fetch the IP address
ping ec2-XX-XX-XX-XXX.ap-south-1.compute.amazonaws.com
Output:
A.B.C.D
Add the following line to /etc/hosts on your local machine/laptop:
A.B.C.D 10.64.140.43.xip.io
On the AWS instance, find the NodePort for ambassador:
microk8s.kubectl get services -n kubeflow | grep ambassador
Output:
ambassador LoadBalancer 10.152.183.172 10.64.140.43 80:31582/TCP 88m
ambassador-operator ClusterIP 10.152.183.169 <none> 30666/TCP 88m
On your local machine/laptop, navigate to
http://ec2-XX-XX-XX-XXX.ap-south-1.compute.amazonaws.com:31582/
Output:
Your browser should redirect to http://10.64.140.43.xip.io/dex/auth/local?req=xxxxxxxxxxxxx
and show a login page
Log in with the credentials you received after enabling Kubeflow with microk8s.enable kubeflow
Output:
Kubeflow dashboard is available @ http://10.64.140.43.xip.io/?ns=admin
Environment: AMI: Ubuntu Server 20.04; microk8s (sudo snap find microk8s): v1.18.4
Note: The method seems to work only until a reboot.
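An alternative to the /etc/hosts-plus-NodePort route, for anyone who can SSH to the instance, is to tunnel the MetalLB address with sshuttle. A sketch, where the hostname is a placeholder:
$ sshuttle -r ubuntu@ec2-XX-XX-XX-XXX.ap-south-1.compute.amazonaws.com 10.64.140.43/32
With that running, http://10.64.140.43.xip.io should resolve and route from the laptop directly.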
@bipinm Thanks for the update. I tried with the below setup and it worked for me as well. AMI: Ubuntu 18.04; Size: t3a.xlarge (4 CPU, 16 GB); microk8s: v1.18.6
I am trying to set up Kubeflow on latest/stable: v1.18.1. I followed these instructions for setup on an AWS EC2 instance (Ubuntu 18.04 LTS): https://ubuntu.com/kubeflow/install. I am able to access parts of the Kubeflow UI from my laptop with the commands below.
I provided more details here: https://github.com/kubeflow/manifests/issues/974#issuecomment-612546805 (the user there uses kfctl, while in my case microk8s uses juju). I am looking for documentation on how to set up and access the Kubeflow UI from my laptop. I looked at #713, but the instructions are not clear. I would appreciate it if you could provide more detailed setup instructions.