Closed: lao-white closed this issue 3 years ago
@lao-white: Are you able to run this command from the machine where you're seeing this MicroK8s error?
curl -v https://api.jujucharms.com/charmstore/v5/~kubeflow-charmers/ambassador-88/icon.svg
Yes, it works well.
@lao-white: Can you post the output from env KUBEFLOW_DEBUG=true microk8s.enable kubeflow?
I have the same problem
@hpcaicom: A check was recently added for this in #1477. That's only on the edge snap, so in the meantime, what does this command output for you?
microk8s.kubectl run --rm -it --restart=Never --image=ubuntu connectivity-check -- bash -c "apt update && apt install -y curl && curl https://api.jujucharms.com/charmstore/v5/~kubeflow-charmers/ambassador-88/icon.svg"
I have the same problem.
Is there any workaround to skip this check?
@hpcaicom: Can you post the output from running that command? Also, can you post the tarball generated by microk8s.inspect?
There isn't really a way to skip this, as that check verifies that you have any connectivity at all from within your cluster. It looks like you don't have internet access from inside the cluster, so that problem has to be resolved first.
inspection-report-20200815_194148.tar.gz output.txt
I am sure I have internet access, and I can download icon.svg successfully.
@hpcaicom: It looks like that inspection report was done on a freshly-installed MicroK8s instance. Can you run:
KUBEFLOW_DEBUG=true microk8s enable kubeflow
And then post the output from that command, as well as an inspection report generated after that command has finished running?
The report attached in my last post was collected after I ran the microk8s enable kubeflow command.
I ran it again, and here is the new report.
[root ~]# KUBEFLOW_DEBUG=true microk8s enable kubeflow
Couldn't contact api.jujucharms.com
Please check your network connectivity before enabling Kubeflow.
Failed to enable kubeflow
@hpcaicom: Can you post the output from snap info microk8s?
snap info microk8s
name: microk8s
summary: Lightweight Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact: https://github.com/ubuntu/microk8s
license: unset
description: |
  MicroK8s is the smallest, simplest, pure production Kubernetes for clusters, laptops, IoT and
  Edge, on Intel and ARM. One command installs a single-node K8s cluster with carefully selected
  add-ons on Linux, Windows and macOS. MicroK8s requires no configuration, supports automatic
  updates and GPU acceleration. Use it for offline development, prototyping, testing, to build your
  CI/CD pipeline or your IoT apps.
commands:
@hpcaicom: It looks like there's a bug in that version of microk8s, can you run this and then try to enable Kubeflow again?
sudo snap switch microk8s --channel=latest/candidate
sudo snap refresh
@knkski: I've got the same issue. Using the latest microk8s version did not fix the problem.
@Philipp-ai could you please attach the tarball created by microk8s inspect?
@ktsakalozos Thanks for the fast reply, I added the report
@Philipp-ai how did you deploy MicroK8s? I do not understand how snap list
shows:
microk8s v1.19.1 1677 1.16 canonical* classic
Would it be possible to sudo snap remove microk8s --purge; sudo snap install microk8s --classic
and report back the actual error you get when enabling kubeflow?
@ktsakalozos I deployed microk8s with sudo snap install microk8s --classic
but later used sudo snap switch microk8s --channel=1.18/stable
(for 1.18, 1.17 and 1.16) to install kubeflow using different versions of microk8s (none of which worked either, showing different or no error messages).
I uninstalled microk8s using sudo snap remove microk8s
(I think snap does not know --purge) and reinstalled it using sudo snap install microk8s --classic
and get the same error:
microk8s.enable kubeflow
Enabling dns...
Enabling storage...
Enabling dashboard...
Enabling ingress...
Enabling metallb:10.64.140.43-10.64.140.49...
Waiting for DNS and storage plugins to finish setting up
Couldn't contact api.jujucharms.com
Please check your network connectivity before enabling Kubeflow.
Failed to enable kubeflow
@Philipp-ai are you behind a proxy/firewall of any kind? Something is blocking your connection to the juju store.
Please avoid doing snap switch
and snap refresh
. These commands will try to use the current cluster configuration with older Kubernetes versions. This may not always be possible and especially in the case of switching to older k8s versions. It is always better to remove and reinstall the snap.
@ktsakalozos yes, I'm using a cloud based VM.
curl -v https://api.jujucharms.com/charmstore/v5/~kubeflow-charmers/ambassador-88/icon.svg
works without problems, so without microk8s I can access the juju store.
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
to configure the firewall did not help.
Removing microk8s and installing the old 1.18/stable version seems to fix the error.
@ktsakalozos Hi, I have the same issue. Can you help me with my case? Here is my report:
inspection-report-20200918_135432.tar.gz
I use sudo snap install microk8s --classic
to install and my version is v1.19.0-34+09a4aa08bb9e93.
curl and wget return results, but ping api.jujucharms.com loses all packets and returns nothing.
Is it a problem with my network?
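A failing ping alongside a working curl usually just means ICMP is filtered somewhere on the path, which many networks do; only a TCP-level failure indicates a real connectivity problem. A minimal sketch of that interpretation (the classify_connectivity helper is hypothetical, not part of MicroK8s):

```shell
# Hypothetical helper: turn the exit codes of a TCP check (curl) and an
# ICMP check (ping) into a verdict. Many networks drop ICMP, so a failing
# ping by itself is not evidence of a connectivity problem.
classify_connectivity() {
    curl_rc=$1
    ping_rc=$2
    if [ "$curl_rc" -eq 0 ] && [ "$ping_rc" -ne 0 ]; then
        echo "icmp-filtered"   # curl works, ping doesn't: harmless
    elif [ "$curl_rc" -ne 0 ]; then
        echo "no-tcp"          # curl fails: real problem
    else
        echo "ok"
    fi
}

# Typical use against the charm store:
#   curl -fsS --max-time 10 https://api.jujucharms.com >/dev/null; curl_rc=$?
#   ping -c 1 -W 2 api.jujucharms.com >/dev/null 2>&1; ping_rc=$?
#   classify_connectivity "$curl_rc" "$ping_rc"
```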
@ktsakalozos I have the same issue. Ubuntu 20.04, microk8s v1.19.0 (rev 1667, 1.19/stable).
Trying to set it up via https://github.com/juju-solutions/bundle-kubeflow/#setup-microk8s for now. EDIT: It worked using the instructions from the link.
After some tries with different versions, I could make it work (on v1.19 stable) with:
sudo iptables -P FORWARD ACCEPT
microk8s inspect
told me that, but the "permanent" solution (sudo apt install iptables-persistent) did not work on its own, so the iptables rule was still needed. I am on Debian 10.
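For completeness: iptables-persistent makes rules survive reboots by reloading /etc/iptables/rules.v4 at boot, so the FORWARD default policy has to actually be present in that file; installing the package alone doesn't capture it. A sketch of the relevant fragment (Debian default path, assuming the iptables-persistent package):

```
# /etc/iptables/rules.v4, loaded by iptables-persistent at boot (sketch)
*filter
# default-accept the FORWARD chain, equivalent to: iptables -P FORWARD ACCEPT
:FORWARD ACCEPT [0:0]
COMMIT
```

Running sudo iptables-save after applying the rule and writing its output to that file should achieve the same thing.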
Same issue here:
Couldn't contact api.jujucharms.com
Please check your network connectivity before enabling Kubeflow.
Failed to enable kubeflow
I am using Ubuntu 20.04.1 LTS, microk8s v1.19.2.
The curl command above works.
ping api.jujucharms.com
doesn't return anything. (Pinging other domains works.)
Same issue on Ubuntu 20.04 5.4.0-42-generic
. Installed using sudo snap install microk8s --classic
which installed v1.19.2
.
> microk8s inspect
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Building the report tarball
Report tarball is at /var/snap/microk8s/1710/inspection-report-20201014_223724.tar.gz
Then microk8s.enable dns dashboard storage
and microk8s.enable gpu
.
Finally KUBEFLOW_DEBUG=true microk8s enable kubeflow, which gave:
...
...
+ microk8s-juju.wrapper --debug add-model kubeflow microk8s
22:42:26 INFO juju.cmd supercommand.go:83 running juju [2.7.6 4da406fb326d7a1255f97a7391056641ee86715b gc go1.12.17]
22:42:26 DEBUG juju.cmd supercommand.go:84 args: []string{"/snap/microk8s/1710/bin/juju", "--debug", "add-model", "kubeflow", "microk8s"}
22:42:26 INFO juju.juju api.go:67 connecting to API addresses: [10.152.183.17:17070]
22:42:26 DEBUG juju.api apiclient.go:1092 successfully dialed "wss://10.152.183.17:17070/api"
22:42:26 INFO juju.api apiclient.go:624 connection established to "wss://10.152.183.17:17070/api"
22:42:26 INFO cmd authkeys.go:114 Adding contents of "/var/snap/microk8s/1710/juju/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
22:42:26 INFO cmd authkeys.go:114 Adding contents of "/home/yann/.ssh/id_rsa.pub" to authorized-keys
22:42:26 INFO cmd addmodel.go:301 Added 'kubeflow' model on microk8s/localhost with credential 'microk8s' for user 'admin'
22:42:26 DEBUG juju.api monitor.go:35 RPC connection died
22:42:26 INFO cmd supercommand.go:525 command finished
+ microk8s-juju.wrapper --debug deploy cs:kubeflow-213 --channel stable --overlay /tmp/tmpua01ynf1
Kubeflow could not be enabled:
22:42:26 INFO juju.cmd supercommand.go:83 running juju [2.7.6 4da406fb326d7a1255f97a7391056641ee86715b gc go1.12.17]
22:42:26 DEBUG juju.cmd supercommand.go:84 args: []string{"/snap/microk8s/1710/bin/juju", "--debug", "deploy", "cs:kubeflow-213", "--channel", "stable", "--overlay", "/tmp/tmpua01ynf1"}
22:42:26 INFO juju.juju api.go:67 connecting to API addresses: [10.152.183.17:17070]
22:42:26 DEBUG juju.api apiclient.go:1092 successfully dialed "wss://10.152.183.17:17070/model/b1eb84e4-d7dd-4677-8877-6b10cbe34e10/api"
22:42:26 INFO juju.api apiclient.go:624 connection established to "wss://10.152.183.17:17070/model/b1eb84e4-d7dd-4677-8877-6b10cbe34e10/api"
22:42:26 INFO juju.juju api.go:67 connecting to API addresses: [10.152.183.17:17070]
22:42:26 DEBUG juju.api apiclient.go:1092 successfully dialed "wss://10.152.183.17:17070/api"
22:42:26 INFO juju.api apiclient.go:624 connection established to "wss://10.152.183.17:17070/api"
22:42:27 DEBUG juju.cmd.juju.application deploy.go:1442 cannot interpret as local charm: file does not exist
22:42:27 DEBUG juju.cmd.juju.application deploy.go:1294 cannot interpret as a redeployment of a local charm from the controller
22:42:27 DEBUG httpbakery client.go:243 client do GET https://api.jujucharms.com/charmstore/v5/kubeflow-213/meta/any?channel=stable&include=id&include=supported-series&include=published {
22:42:28 DEBUG httpbakery client.go:245 } -> error <nil>
22:42:28 DEBUG httpbakery client.go:243 client do GET https://api.jujucharms.com/charmstore/v5/bundle/kubeflow-213/archive {
22:42:28 DEBUG httpbakery client.go:245 } -> error <nil>
22:42:28 INFO cmd deploy.go:1546 Located bundle "cs:bundle/kubeflow-213"
22:42:28 DEBUG juju.api monitor.go:35 RPC connection died
ERROR cannot deploy bundle: the provided bundle has the following errors:
empty charm path
invalid charm URL in application "modeldb-db": cannot parse URL "": name "" not valid
22:42:28 DEBUG cmd supercommand.go:519 error stack:
/workspace/_build/src/github.com/juju/juju/cmd/juju/application/bundle.go:88: the provided bundle has the following errors:
empty charm path
invalid charm URL in application "modeldb-db": cannot parse URL "": name "" not valid
/workspace/_build/src/github.com/juju/juju/cmd/juju/application/bundle.go:140:
/workspace/_build/src/github.com/juju/juju/cmd/juju/application/deploy.go:898: cannot deploy bundle
/workspace/_build/src/github.com/juju/juju/cmd/juju/application/deploy.go:1548:
Command '('microk8s-juju.wrapper', '--debug', 'deploy', 'cs:kubeflow-213', '--channel', 'stable', '--overlay', '/tmp/tmpua01ynf1')' returned non-zero exit status 1
Failed to enable kubeflow
Hi @dhassault , could you install the 1.19/candidate
channel and enable kubeflow from there?
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.19/candidate
Hi @ktsakalozos , I have the same issue even after reinstalling 1.19/candidate. The output is below:
Enabling dns...
Enabling storage...
Enabling dashboard...
Enabling ingress...
Enabling metallb:10.64.140.43-10.64.140.49...
Waiting for DNS and storage plugins to finish setting up
Couldn't contact api.jujucharms.com from within the Kubernetes cluster
Please check your network connectivity before enabling Kubeflow.
Failed to enable kubeflow
My OS is Ubuntu 18.04.
@hpcaicom: It looks like there's a bug in that version of microk8s, can you run this and then try to enable Kubeflow again?
sudo snap switch microk8s --channel=latest/candidate
sudo snap refresh
This works for me.
I'm on Ubuntu 18.04. With
snap install microk8s --classic
microk8s enable kubeflow
or
snap install microk8s --classic --channel=candidate
microk8s enable kubeflow
I got different errors. The latter one was the same as this issue. When I then did
sudo snap switch microk8s --channel=latest/candidate
sudo snap refresh
as @knkski suggested, microk8s enable kubeflow
works.
Hi @tengteng, yes, the commands work for me as well!
sudo snap switch microk8s --channel=latest/candidate
sudo snap refresh
microk8s enable kubeflow
The above commands work on Ubuntu 18.04.
microk8s.kubectl run --rm -it --restart=Never --image=ubuntu connectivity-check -- bash -c "apt update && apt install -y curl && curl https://api.jujucharms.com/charmstore/v5/~kubeflow-charmers/ambassador-88/icon.svg"
@knkski I think the issue is the proxy is not being set inside the pod container. I created an issue for this #1719
I believe that this issue is fixed in #1635, which introduces handling around the calico networking. That fix will be available in 1.20/stable, otherwise if you'd like to try it out sooner, it will be available in latest/edge
as soon as CD is done pushing that out. Closing this issue since it should be fixed, but feel free to reopen if the Couldn't contact api.jujucharms.com
error happens again.
I got the same error message: Couldn't contact api.jujucharms.com from within the Kubernetes cluster.
os: Ubuntu 18.04.5 LTS microk8s: v1.19.4 rev1827
$ snap list
$ microk8s kubectl get all --all-namespaces
$ microk8s enable kubeflow
+ microk8s-status.wrapper --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
storage # Storage class; allocates storage from host directory
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
multus # Multus CNI enables attaching multiple network interfaces to pods
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
traefik # traefik Ingress controller for external access
+ microk8s-kubectl.wrapper -nkube-system rollout status deployment.apps/calico-kube-controllers
deployment "calico-kube-controllers" successfully rolled out
Enabling dns...
+ microk8s-enable.wrapper dns
Addon dns is already enabled.
Enabling storage...
+ microk8s-enable.wrapper storage
Addon storage is already enabled.
Enabling dashboard...
+ microk8s-enable.wrapper dashboard
Addon dashboard is already enabled.
Enabling ingress...
+ microk8s-enable.wrapper ingress
Addon ingress is already enabled.
Enabling metallb:10.64.140.43-10.64.140.49...
+ microk8s-enable.wrapper metallb:10.64.140.43-10.64.140.49
Addon metallb is already enabled.
+ microk8s-status.wrapper --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
storage # Storage class; allocates storage from host directory
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
multus # Multus CNI enables attaching multiple network interfaces to pods
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
traefik # traefik Ingress controller for external access
+ microk8s-kubectl.wrapper -nkube-system rollout status ds/calico-node
daemon set "calico-node" successfully rolled out
Waiting for DNS and storage plugins to finish setting up
+ microk8s-kubectl.wrapper wait --for=condition=available -nkube-system deployment/coredns deployment/hostpath-provisioner --timeout=10m
deployment.apps/coredns condition met
deployment.apps/hostpath-provisioner condition met
Couldn't contact api.jujucharms.com from within the Kubernetes cluster
Please check your network connectivity before enabling Kubeflow.
Failed to enable kubeflow
$ microk8s inspect
I've tried 1.20/stable, but there's still the same error.
DNS and storage setup complete.
Checking connectivity...
Couldn't contact api.jujucharms.com from within the Kubernetes cluster
Please check your network connectivity before enabling Kubeflow.
inspection-report-20201216_094714.tar.gz kubeflow_installation_log.txt
Hi,
I have the same issue regarding microk8s enable kubeflow
. I have CentOS 7 and microk8s (1.20/stable) v1.20.0 from Canonical✓ installed.
I get this error:
DNS and storage setup complete. Checking connectivity... Couldn't contact api.jujucharms.com Please check your network connectivity before enabling Kubeflow.
The host where microk8s is installed is behind a corporate proxy. I have already set the proxy in /etc/environment
and
/var/snap/microk8s/current/args/containerd-env
. I have set both http_proxy and https_proxy in these files.
I have also stopped and started microk8s after editing these files in order to apply the changes.
It didn't work. How can I enable kubeflow behind a proxy? Thanks in advance.
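One part that is easy to miss behind a proxy is NO_PROXY: without it, traffic to in-cluster addresses is also sent to the proxy, which cannot reach them, and the in-cluster connectivity check then fails even though the host can reach the charm store. A sketch of a containerd-env with the MicroK8s default pod and service CIDRs excluded (the proxy host is a placeholder; adjust both URLs and CIDRs to your environment):

```
# /var/snap/microk8s/current/args/containerd-env (sketch; proxy host is a placeholder)
HTTPS_PROXY=http://squid.internal:3128
HTTP_PROXY=http://squid.internal:3128
# Keep cluster-internal traffic away from the proxy:
# 10.1.0.0/16 is the default pod CIDR, 10.152.183.0/24 the default service CIDR.
NO_PROXY=10.0.0.0/8,127.0.0.1,10.1.0.0/16,10.152.183.0/24
```

After editing, restart MicroK8s (microk8s stop, then microk8s start) so containerd picks up the new environment.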
I also got the same error message: Couldn't contact api.jujucharms.com from within the Kubernetes cluster.
Solution:
microk8s enable kubeflow
sudo snap remove microk8s --purge
sudo snap install microk8s --classic --channel=latest/edge && sudo snap refresh
microk8s enable dns dashboard storage gpu
microk8s enable kubeflow
os: Ubuntu 20.04.1 LTS
microk8s: v1.20.1 rev1894
Hi,
I have the same issue regarding microk8s enable kubeflow. I'm using multipass and have tried the 1.18, 1.19 and 1.20 versions.
I get the same error:
DNS and storage setup complete.
Checking connectivity...
Couldn't contact api.jujucharms.com
Please check your network connectivity before enabling Kubeflow.
I tried to forward the IP tables and tried to use a non-corporate network; nothing works so far. Any ideas :,(
name: microk8s
summary: Lightweight Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact: https://github.com/ubuntu/microk8s
license: unset
description: |
MicroK8s is the smallest, simplest, pure production Kubernetes for clusters, laptops, IoT and
Edge, on Intel and ARM. One command installs a single-node K8s cluster with carefully selected
add-ons on Linux, Windows and macOS. MicroK8s requires no configuration, supports automatic
updates and GPU acceleration. Use it for offline development, prototyping, testing, to build your
CI/CD pipeline or your IoT apps.
commands:
- microk8s.add-node
- microk8s.cilium
- microk8s.config
- microk8s.ctr
- microk8s.dashboard-proxy
- microk8s.disable
- microk8s.enable
- microk8s.helm
- microk8s.helm3
- microk8s.inspect
- microk8s.istioctl
- microk8s.join
- microk8s.juju
- microk8s.kubectl
- microk8s.leave
- microk8s.linkerd
- microk8s
- microk8s.refresh-certs
- microk8s.remove-node
- microk8s.reset
- microk8s.start
- microk8s.status
- microk8s.stop
services:
microk8s.daemon-apiserver: simple, enabled, active
microk8s.daemon-apiserver-kicker: simple, enabled, active
microk8s.daemon-cluster-agent: simple, enabled, active
microk8s.daemon-containerd: simple, enabled, active
microk8s.daemon-controller-manager: simple, enabled, active
microk8s.daemon-etcd: simple, enabled, active
microk8s.daemon-flanneld: simple, enabled, active
microk8s.daemon-kubelet: simple, enabled, active
microk8s.daemon-proxy: simple, enabled, active
microk8s.daemon-scheduler: simple, enabled, active
snap-id: EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
tracking: 1.18/stable
refresh-date: today at 02:06 +08
channels:
1.20/stable: v1.20.1 2021-01-12 (1910) 217MB classic
1.20/candidate: v1.20.2 2021-01-14 (1921) 217MB classic
1.20/beta: v1.20.2 2021-01-14 (1921) 217MB classic
1.20/edge: v1.20.2 2021-01-13 (1921) 217MB classic
latest/stable: v1.20.1 2021-01-12 (1910) 217MB classic
latest/candidate: v1.20.2 2021-01-14 (1920) 217MB classic
latest/beta: v1.20.2 2021-01-14 (1920) 217MB classic
latest/edge: v1.20.2 2021-01-13 (1920) 217MB classic
dqlite/stable: –
dqlite/candidate: –
dqlite/beta: –
dqlite/edge: v1.16.2 2019-11-07 (1038) 189MB classic
1.19/stable: v1.19.5 2020-12-15 (1856) 216MB classic
1.19/candidate: v1.19.7 2021-01-14 (1922) 216MB classic
1.19/beta: v1.19.7 2021-01-14 (1922) 216MB classic
1.19/edge: v1.19.7 2021-01-13 (1922) 216MB classic
1.18/stable: v1.18.15 2021-01-15 (1939) 199MB classic
1.18/candidate: v1.18.15 2021-01-14 (1939) 199MB classic
1.18/beta: v1.18.15 2021-01-14 (1939) 199MB classic
1.18/edge: v1.18.15 2021-01-14 (1939) 199MB classic
1.17/stable: v1.17.17 2021-01-15 (1916) 177MB classic
1.17/candidate: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/beta: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/edge: v1.17.17 2021-01-13 (1916) 177MB classic
1.16/stable: v1.16.15 2020-09-12 (1671) 179MB classic
1.16/candidate: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/beta: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/edge: v1.16.15 2020-09-02 (1671) 179MB classic
1.15/stable: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/candidate: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/beta: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/edge: v1.15.11 2020-03-26 (1301) 171MB classic
1.14/stable: v1.14.10 2020-01-06 (1120) 217MB classic
1.14/candidate: ↑
1.14/beta: ↑
1.14/edge: v1.14.10 2020-03-26 (1303) 217MB classic
1.13/stable: v1.13.6 2019-06-06 (581) 237MB classic
1.13/candidate: ↑
1.13/beta: ↑
1.13/edge: ↑
1.12/stable: v1.12.9 2019-06-06 (612) 259MB classic
1.12/candidate: ↑
1.12/beta: ↑
1.12/edge: ↑
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: ↑
1.11/beta: ↑
1.11/edge: ↑
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: ↑
1.10/beta: ↑
1.10/edge: ↑
installed: v1.18.15 (1939) 199MB classic
@rexad I faced the same issue before. Below are the solutions I tried:
@kosehy thanks for the fast reply. I've just been installing and reinstalling multiple versions of microk8s, all with more or less the same issue.
After some tries with different versions, I could make it work (on v1.19 stable) with:
sudo iptables -P FORWARD ACCEPT
microk8s inspect
told me that, but the "permanent" solution did not work alone (sudo apt install iptables-persistent
). So the iptables
rule was needed. I am on Debian 10.
https://github.com/ubuntu/microk8s/issues/1439#issuecomment-702393298
Got into the same issue on Ubuntu 20.04.1 with microk8s v1.20.1 (1.20/stable) and the above solution solved it (while ufw is disabled). This is consistent with one of the common issues.
I was able to reproduce this issue on one particular machine, and running these two commands fixed it for me (I also have ufw disabled like @stefannae):
sudo iptables -t filter -A FORWARD -s 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
sudo iptables -t filter -A FORWARD -d 10.1.0.0/16 -m comment --comment "generated for MicroK8s pods" -j ACCEPT
I'm unfortunately now unable to reproduce the issue to explore why adding those two rules fixed it. If anybody runs into this issue again and running the above commands doesn't fix it, can you post the output here from these commands?
sudo iptables-legacy-save | grep '10\.'
sudo iptables-save | grep '10\.'
sudo journalctl -u snap.microk8s.daemon-kubelet -n 4000
@stefannae: what version of iptables are you using (iptables --version)?
@knkski iptables v1.8.4 (legacy)
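Since iptables v1.8 ships two backends (legacy and nft), rules added through one are invisible to the other; if kubelet/Calico program one backend and you inspect or fix the other, the FORWARD rule appears to have no effect. A hypothetical helper for comparing the two dumps (count_rules is a sketch, not a real tool):

```shell
# count_rules: count appended (-A) rules in an iptables-save style dump
# read from stdin. Hypothetical helper, not part of iptables.
count_rules() {
    grep -c '^-A' || true   # grep exits non-zero on zero matches
}

# Compare which backend actually holds the rules:
#   sudo iptables-legacy-save | count_rules
#   sudo iptables-nft-save    | count_rules
# If one side reports 0 and the other hundreds, your tooling and kubelet
# are talking to different backends.
```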
None of the above workarounds solves this.
Same here.
@mujdatdinc @mingyyy: Can you run microk8s inspect
and post the tarball that it generates, along with the stdout from the command?
None of the above-mentioned solutions work. @knkski, can you please look into this?
inspection-report-20200728_120622.tar.gz