canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
https://microk8s.io
Apache License 2.0

Networking microk8s following ubuntu tutorial #341

Closed. placidchat closed this issue 4 years ago.

placidchat commented 5 years ago

I'm following this tutorial https://tutorials.ubuntu.com/tutorial/install-a-local-kubernetes-with-microk8s#1

But is there a bridged network, cbr0, that gets created during the snap install of microk8s? There isn't any on my system. Here is the NAT table:

```
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-PORTALS-CONTAINER   all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    0     0 KUBE-NODEPORT-CONTAINER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
10072  595K KUBE-PORTALS-HOST   all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
 9757  572K KUBE-NODEPORT-HOST  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8         ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  369 22034 KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  *     !docker0  172.17.0.0/16        0.0.0.0/0
  250 18890 MASQUERADE  all  --  *      *       0.0.0.0/0           !10.152.183.0/24     /* kubenet: SNAT for outbound traffic from cluster */ ADDRTYPE match dst-type !LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *      0.0.0.0/0            0.0.0.0/0

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORT-CONTAINER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-NODEPORT-HOST (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-PORTALS-CONTAINER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.1         /* default/kubernetes:https */ tcp dpt:443 redir ports 38715
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.56        /* kube-system/kubernetes-dashboard: */ tcp dpt:443 redir ports 40565
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.70        /* kube-system/monitoring-grafana: */ tcp dpt:80 redir ports 45491
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.201       /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 redir ports 45709
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.201       /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 redir ports 41271
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.184       /* kube-system/heapster: */ tcp dpt:80 redir ports 45569

Chain KUBE-PORTALS-HOST (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.1         /* default/kubernetes:https */ tcp dpt:443 to:192.168.3.100:38715
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.56        /* kube-system/kubernetes-dashboard: */ tcp dpt:443 to:192.168.3.100:40565
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.70        /* kube-system/monitoring-grafana: */ tcp dpt:80 to:192.168.3.100:45491
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.201       /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 to:192.168.3.100:45709
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.201       /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 to:192.168.3.100:41271
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.184       /* kube-system/heapster: */ tcp dpt:80 to:192.168.3.100:45569

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
```

Are these rules conflicting with previous firewall forwarding rules set by docker? Also, I've got these messages in the logs:

```
Mar 01 13:11:08 ubs microk8s.daemon-proxy[4966]: W0301 13:11:08.811303    4966 server.go:194] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: W0301 13:11:09.010578    4966 node.go:103] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/ubs: dial tcp 127.0.0.1:8080: connect: connection refused
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: I0301 13:11:09.010631    4966 server_others.go:221] Using userspace Proxier.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: I0301 13:11:09.052922    4966 server_others.go:247] Tearing down inactive rules.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.062576    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-EXTERNAL-SERVICES'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.064993    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.066967    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.069172    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.073988    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-FORWARD'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.076076    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 13:11:09 ubs microk8s.daemon-proxy[4966]: E0301 13:11:09.078144    4966 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
```

Are these chains supposed to be built during install?
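
For reference, a quick way to see which KUBE-* chains actually exist is to list the chain headers (a sketch; run as root):

```
# chains created in the nat table (as in the output above)
sudo iptables -t nat -L -n | grep '^Chain KUBE'
# chains the pure-iptables proxier would have created in the filter table
sudo iptables -L -n | grep '^Chain KUBE'
```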

ktsakalozos commented 5 years ago

Hi @placidchat

Could you please attach the tarball from microk8s.inspect? Yes, there should be a cbr0 interface.
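
For reference, a quick check for the kubenet bridge (the second command assumes bridge-utils is installed):

```
ip link show cbr0   # prints the interface, or "does not exist" if kubenet never created it
brctl show cbr0     # shows the bridge and any attached veth interfaces
```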

placidchat commented 5 years ago

Hi kts, no, there is no cbr0. Is it me, or does microk8s.inspect tar up quite a bit of information about the system? I did a grep through the microk8s code base and don't see any code that sets up cbr0 or the veth interfaces.

ktsakalozos commented 5 years ago

cbr0 is used by kubenet https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
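
A sketch of how to confirm that kubelet was actually asked to use kubenet, assuming the args-file layout mentioned later in this thread (/var/snap/microk8s/current/args/):

```
# cbr0 only appears after kubelet starts with the kubenet plugin and a pod CIDR
grep -E 'network-plugin|pod-cidr' /var/snap/microk8s/current/args/kubelet
```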

When you run microk8s.inspect do you get all services running?

placidchat commented 5 years ago

So I used snapcraft to build a snap from scratch, to see if I could triage a reason for all this. I installed the dependencies and nftables onto my system. But when I try booting it up, microk8s.inspect gives this:

```
Inspecting services
  Service snap.microk8s.daemon-docker is running
  Service snap.microk8s.daemon-apiserver is running
 FAIL:  Service snap.microk8s.daemon-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-proxy
 FAIL:  Service snap.microk8s.daemon-kubelet is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-kubelet
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
```

And journalctl -u snap.microk8s.daemon-proxy gives this:

```
systemd[1]: Started Service for snap application microk8s.daemon-proxy.
microk8s.daemon-proxy[6145]: W0301 1.20:46.560145    6145 server.go:198] WARNING: all flags other than --config, --write-c
microk8s.daemon-proxy[6145]: W0301 1.20:46.942837    6145 node.go:103] Failed to retrieve node info: Get http://127.0.0.1:
microk8s.daemon-proxy[6145]: I0301 1.20:46.942894    6145 server_others.go:221] Using userspace Proxier.
microk8s.daemon-proxy[6145]: I0301 1.20:47.040452    6145 server_others.go:247] Tearing down inactive rules.
microk8s.daemon-proxy[6145]: E0301 1.20:47.043328    6145 proxier.go:395] Error removing pure-iptables proxy rule: error c
microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
microk8s.daemon-proxy[6145]: E0301 1.20:47.048601    6145 proxier.go:395] Error removing pure-iptables proxy rule: error c
microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
```

ktsakalozos commented 5 years ago

Can you please try building with snapcraft cleanbuild? This will spawn an LXC container so your build will be on ubuntu xenial and will not be affected by the library versions you have locally. This also means you need to have lxd/lxc set up.

placidchat commented 5 years ago

> cbr0 is used by kubenet https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
>
> When you run microk8s.inspect do you get all services running?

Yes, they are all running, but the build of the GitHub repository version generates some errors when starting. I suspect the veth interfaces are not created, and the cbr0 bridge isn't created either.

> Can you please try building with snapcraft cleanbuild. This will spawn an LXC container

:) this really feels like learning by compiling

ktsakalozos commented 5 years ago

Which distribution are you on?

placidchat commented 5 years ago

> Can you please try building with snapcraft cleanbuild? This will spawn an LXC container so your build will be on ubuntu xenial and will not be affected by the library versions you have locally. This also means you need to have lxd/lxc set up.

I'm getting an error about a storage pool:

```
Creating snapcraft-easily-vast-oriole
Error: Failed container creation: No storage pool found. Please create a new storage pool.
Failed to setup container
Refer to the documentation at https://linuxcontainers.org/lxd/getting-started-cli.
```

Do I need to set up ZFS storage for this to work? I'm on Ubuntu Bionic Beaver.

ktsakalozos commented 5 years ago

Any storage pool type would do. Here is what I usually do:

```
# Remove any old version of lxc/lxd on my machine
apt-get purge lxc*
apt-get purge lxd*
# Get the latest lxd
snap install lxd
lxd init
# Go with the defaults
```

To test, lxc launch ubuntu should spawn an ubuntu container that you can delete with lxc delete "container-name" --force.
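
A minimal smoke test along those lines (the container name is arbitrary):

```
lxc launch ubuntu:16.04 test-xenial   # spawn a xenial container
lxc list                              # it should show up as RUNNING
lxc delete test-xenial --force        # clean up
```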

placidchat commented 5 years ago

Installed zfs and initialised lxd. I don't remember it being this chatty a couple of years ago.

placidchat commented 5 years ago
```
sudo -E snapcraft cleanbuild
Creating snapcraft-kindly-golden-caiman
Error: Failed container creation: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: lookup cloud-images.ubuntu.com on [::1]:53: server misbehaving
Failed to setup container
Refer to the documentation at https://linuxcontainers.org/lxd/getting-started-cli.
```

I did a GET on https://cloud-images.ubuntu.com/releases/streams/v1/index.json and it returned a JSON file, so I'm not sure why the [::1]:53 error is being generated, or even why it is using IPv6, since there aren't any configured v6 interfaces.
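
For what it's worth, a couple of commands that can narrow down where that lookup goes (a sketch; on bionic the local stub resolver is usually systemd-resolved):

```
nslookup cloud-images.ubuntu.com   # does the host resolver answer at all?
sudo ss -lnp 'sport = :53'         # what is listening on port 53, and on which addresses?
```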

placidchat commented 5 years ago

Rebuilt and reinstalled. Checking with microk8s.inspect:

```
Inspecting services
  Service snap.microk8s.daemon-docker is running
  Service snap.microk8s.daemon-apiserver is running
 FAIL:  Service snap.microk8s.daemon-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-proxy
 FAIL:  Service snap.microk8s.daemon-kubelet is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-kubelet
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
  Copy network configuration to the final report tarball
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Inspect kubernetes cluster
```

During the snapcraft build process I get this:

```
Priming iptables
warning: working around a Linux kernel bug by creating a hole of 2093056 bytes in ‘/tmp/tmpxia4mq9a’
warning: working around a Linux kernel bug by creating a hole of 2428928 bytes in ‘/tmp/tmp0kk6rzwe’
warning: working around a Linux kernel bug by creating a hole of 2433024 bytes in ‘/tmp/tmp9jp_ysuu’
Priming docker
cannot find section
Failed to update '/home/ubs/Programs/microk8s/prime/usr/bin/docker-proxy'. Retrying after stripping the .note.go.buildid from the elf file.
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmpd6fqlule’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmpqd8p1nh8’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmppik7kf6r’
warning: working around a Linux kernel bug by creating a hole of 2244608 bytes in ‘/tmp/tmp9ua1m3j6’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmprfsb0ea_’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmppx_mq1pz’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmp4dktuy6_’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmpytfsi5jy’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmpemmqn8s9’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmp7u92g01s’
warning: working around a Linux kernel bug by creating a hole of 2121728 bytes in ‘/tmp/tmpq2arqe5b’
warning: working around a Linux kernel bug by creating a hole of 2252800 bytes in ‘/tmp/tmpqkyea92w’
warning: working around a Linux kernel bug by creating a hole of 2097152 bytes in ‘/tmp/tmpn5cef6gr’
Priming microk8s
Files from the build host were migrated into the snap to satisfy dependencies that would otherwise not be met. This feature will be removed in a future release. If these libraries are needed in the final snap, ensure that the following are either satisfied by a stage-packages entry or through a part:
lib/x86_64-linux-gnu/libatm.so.1
The primed files for part 'microk8s' will not be verified for correctness or patched: build-attributes: [no-patchelf] is set.
Determining the version from the project repo (version-script).
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   161  100   161    0     0    194      0 --:--:-- --:--:-- --:--:--   194
100     8  100     8    0     0      6      0  0:00:01  0:00:01 --:--:--     6
The version has been set to 'v1.13.4'
Snapping 'microk8s' |
Snapped microk8s_v1.13.4_amd64.snap
```

And looking through journalctl -u snap.microk8s.daemon-proxy:

```
Mar 01 3:08:42 user systemd[1]: Started Service for snap application microk8s.daemon-proxy.
Mar 01 3:08:46 user microk8s.daemon-proxy[6145]: W0301 3:08:46.560145    6145 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Mar 01 3:08:46 user microk8s.daemon-proxy[6145]: W0301 3:08:46.942837    6145 node.go:103] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/ubs: dial tcp 127.0.0.1:8080: connect: connection refused
Mar 01 3:08:46 user microk8s.daemon-proxy[6145]: I0301 3:08:46.942894    6145 server_others.go:221] Using userspace Proxier.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: I0301 3:08:47.040452    6145 server_others.go:247] Tearing down inactive rules.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.043328    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-EXTERNAL-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.048601    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.050028    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.051622    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.053123    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-POSTROUTING'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.054787    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-FORWARD'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.056474    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: E0301 3:08:47.058129    6145 proxier.go:395] Error removing pure-iptables proxy rule: error checking rule: exit status 2: iptables v1.6.1: Couldn't find target `KUBE-SERVICES'
Mar 01 3:08:47 user microk8s.daemon-proxy[6145]: Try `iptables -h' or 'iptables --help' for more information.
```

Also, there isn't a cbr0 bridge created.

ktsakalozos commented 5 years ago

Can you please double check and make 100% sure that when you build MicroK8s you do so with

snapcraft cleanbuild

and not with just snapcraft.

Some ideas for troubleshooting: increase the logging level by editing the argument files (e.g. /var/snap/microk8s/current/args/kube-proxy) and adding the -v=9 argument. You can also edit /var/snap/microk8s/current/args/kube-proxy and remove the --proxy-mode="userspace" argument. After an update you should restart MicroK8s with microk8s.stop followed by a microk8s.start.
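
A sketch of those two tweaks (same paths as above; back up the file first):

```
# increase kube-proxy verbosity
echo '-v=9' | sudo tee -a /var/snap/microk8s/current/args/kube-proxy
# drop the userspace proxy mode, if present
sudo sed -i '/--proxy-mode/d' /var/snap/microk8s/current/args/kube-proxy
# restart so the new arguments take effect
microk8s.stop
microk8s.start
```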

placidchat commented 5 years ago

I've tried to reinstall from source again. I've installed lxd/lxc and nftables. I've also:

1. removed docker.io 
2. did a snapcraft clean
3. Manually added iptables -N x for each of KUBE-SERVICES, KUBE-POSTROUTING, KUBE-FORWARD, KUBE-EXTERNAL-SERVICES (see the sketch after this list)
3.1 These shouldn't affect the initial bootstrap, since these chains should be handled by snap.microk8s.daemon-proxy.service, and that fails
4. sudo -E snapcraft
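
Step 3 amounts to something like the following sketch (the chain names are the ones listed above):

```
for chain in KUBE-SERVICES KUBE-POSTROUTING KUBE-FORWARD KUBE-EXTERNAL-SERVICES; do
    sudo iptables -N "$chain"   # create the chain if it does not exist yet
done
```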

I get this in my logfile:

```
server.go:396] unable to create proxier: failed to initialize iptables: error creating chain "KUBE-PORTALS-CONTAINER": exit status 127: iptables: symbol lookup error: iptables: undefined symbol: nfct_labels_get_path
```
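
That undefined symbol usually means iptables resolved against a mismatched libnetfilter_conntrack from the host. A way to check what the binary actually links against (a sketch; the exact path of iptables inside the snap is an assumption):

```
ldd /snap/microk8s/current/usr/sbin/iptables | grep -i conntrack
```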

I did this for snapcraft cleanbuild:

```
sudo -E snapcraft cleanbuild
Creating snapcraft-fully-legal-hawk
Starting snapcraft-fully-legal-hawk
Error: no such file or directory
Try `lxc info --show-log local:snapcraft-fully-legal-hawk` for more info
Failed to setup container
Refer to the documentation at https://linuxcontainers.org/lxd/getting-started-cli.
```

And checking with lxc I get:

```
sudo lxc info --show-log local:snapcraft-fully-legal-hawk
Name: snapcraft-fully-legal-hawk
Remote: unix://
Architecture: x86_64
Created: 2019/03/06 06:42 UTC+03
Status: Stopped
Type: ephemeral
Profiles: default

Log:
```

One question: should snapcraft be run as root or as a user? Another: shouldn't the system iptables be the same as the one built by microk8s? How do I check?
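
On the last question, one way to compare the two is by version (a sketch; the location of the staged binary inside the snapcraft prime directory is an assumption):

```
iptables --version                    # the host's iptables
./prime/usr/sbin/iptables --version   # the one staged by the snapcraft build, if present
```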

placidchat commented 5 years ago

Also, when I try to run the executable prime/usr/bin/docker-proxy, it segfaults. I'm trying this because I get this error during the sudo -E snapcraft build stage:

```
Failed to update '/home/ubs/Programs/microk8s/prime/usr/bin/docker-proxy'. Retrying after stripping the .note.go.buildid from the elf file.
```

Also I get this:

```
Priming microk8s
Files from the build host were migrated into the snap to satisfy dependencies that would otherwise not be met. This feature will be removed in a future release. If these libraries are needed in the final snap, ensure that the following are either satisfied by a stage-packages entry or through a part:
lib/x86_64-linux-gnu/libatm.so.1
The primed files for part 'microk8s' will not be verified for correctness or patched: build-attributes: [no-patchelf] is set.
```

I hope this is informative

ktsakalozos commented 5 years ago
> 1. removed docker.io
> 2. did a snapcraft clean
> 3. Manually added iptables -N x ( KUBE-SERVICES, KUBE-POSTROUTING, KUBE-FORWARD, KUBE-EXTERNAL-SERVICES ) 3.1 these shouldn't affect the starting bootstrap, since these should be handled by snap.microk8s.daemon-proxy.service and that fails
> 4. sudo -E snapcraft

For 2, I would do a sudo snapcraft clean since you tried to build with sudo -E. Step 3 should not be needed. Do not do step 4: MicroK8s should be built on xenial, not bionic. snapcraft cleanbuild will spawn a xenial lxc container and perform the build inside it. You do not need sudo either; your user should be able to build MicroK8s.
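
In other words, something like:

```
sudo snapcraft clean   # clear state left over from the earlier sudo -E build
snapcraft cleanbuild   # as a regular user; builds inside a fresh xenial LXC container
```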

I see your lxc setup is giving you some trouble. Can you do an lxc launch ubuntu to see if you can start a single lxc container?

placidchat commented 5 years ago

Oh, xenial! Bionic Beaver isn't a good match? It looks like the zfs mountpoints are not automatically mounted. The user has been added to the lxd group, and as root I am able to zfs mount default/containers/mycontainer, but when using the lxc interface, either as root or as a user, it isn't able to mount the zfs dataset.

```
lvl=eror msg="Failed to mount ZFS dataset \"default/containers/mycontainer\" onto \"/var/lib/lxd/storage-pools/default/containers/mycontainer\"." t=2019-03-06T16:43:34+0300
```

Because of that, lxc returns Error: no such file or directory, and lxc info --show-log gives no indication of why it isn't able to run.

placidchat commented 5 years ago

Should /var/lib/lxd/disks/default.img be owned by the user? Or by lxd:lxd, or root?

ktsakalozos commented 5 years ago

MicroK8s is a snap that packages all of its dependencies. It is based on base core, which in turn is based on xenial; therefore you do not want to build its components in an environment where the correct set of dependencies is missing.

Please install the latest lxd from snap:

```
# Remove any old version of lxc/lxd on my machine
apt-get purge lxc*
apt-get purge lxd*
# Get the latest lxd
snap install lxd
lxd init
# Go with the defaults
```

placidchat commented 5 years ago

So I purged the lxd/lxd-client packages from apt and reinstalled from snap.

  1. lxd init
     1.1 When it requests to create a new local network bridge (lxdbr0), the setup fails if the bridge still exists from a past install.
     1.2 A zfs storage pool (lxd) from a past install is still registered. I've zpool-removed the entire pool in the past, and the setup fails if you refer to a past storage pool name. Somehow lxd doesn't try to figure out whether the zpool really exists; it fails right at the end of the creation:
```
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=73GB]: 30
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 30GB
  description: ""
  name: lxd
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd
      type: disk
  name: default
cluster: null

Error: Failed to create storage pool 'lxd': Failed to create the ZFS pool: cannot create 'lxd': pool already exists
```

  1.3 IPv6: for some reason, when doing:

```
lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Error: Failed container creation: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: lookup cloud-images.ubuntu.com on [::1]:53: server misbehaving
```

For some reason the snap version of lxc requires an IPv6 connection, or nameserver queries fail. But! This only happens if you're behind a proxy; when directly connected, lxc launch works. The error message doesn't reflect the reality.

  1.4 lxc runs a container, but: lxc ls shows a running container

```
| mycontainer | RUNNING | 10.113.236.27 (eth0) | <ipv6address> (eth0) | PERSISTENT |   |
```

and it is pingable, but sysdig isn't able to list the container; maybe I have an older version of sysdig. Also, when I zpool list, there aren't any pools at all. A mount only shows where snaps are mounted, but none of them are zfs. Some of the previous errors, not included here, were indeed caused by having the apt versions of lxd/lxd-client clash with the snap versions.

On a side note: in trying to solve some of these issues, I'd disabled the proxying service and tried to resolve it by adding the user to the lxd group, because the lxc client needed to connect to the unix domain socket. Somewhere along the way the environment setting for the Xauthority file got changed from $HOME/.Xauthority to ~/.Xauthority, causing the machine not to boot into X. Just in case anyone experiences the same thing.

placidchat commented 5 years ago

Looks like I was pinging the bridge rather than the container itself. And I cannot find any interface configured with the IP address shown in lxc list while doing an lsof -i -n -P. Are there any other ways to show all configured interfaces?
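
A couple of standard ways to list every configured interface (a sketch):

```
ip -br addr show         # brief one-line-per-interface view, with addresses
ip link show type veth   # only the veth pairs that containers create
```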

placidchat commented 5 years ago

If you configure lxc config set core.proxy_http and lxc config set core.proxy_https, you'll be able to set the proxy configuration for lxc/lxd (see the sketch after this list). Tried this:

  1. lxd init
     1.1 Gave sane answers, to get past the conflicting pool name and bridge name.
  2. Finding where the zpool is.
     2.1 As root, did a zpool list. No pools were created.
     2.2 lxc storage volume list default2 shows a header; it doesn't say it is an error.
  3. lxc launch ubuntu:18.04 mycontainer
     3.1 lxc storage volume list default2 now shows an image and a container. Where is this located? And can I use zfs commands to interact with it?
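
A sketch of the proxy settings mentioned above (the proxy URL is a placeholder):

```
lxc config set core.proxy_http http://proxy.example.com:3128
lxc config set core.proxy_https http://proxy.example.com:3128
```
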
placidchat commented 5 years ago

After some looking around, /var/snap/lxd/common/lxd/disks is where the disk images are located. They are zfs members, but unless you add/register them explicitly it probably won't be possible to use zpool commands on them. lxc storage works fine.
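
A way to look at those pools without registering anything permanently (a sketch; with no pool name given, zpool import only scans and lists, it does not import):

```
sudo ls /var/snap/lxd/common/lxd/disks                # the loop-backed pool files
sudo zpool import -d /var/snap/lxd/common/lxd/disks   # scan that directory and list the pools found
```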

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.