autopilotpattern / jenkins

Extension of official Jenkins Docker image that supports Joyent's Triton for elastic slave provisioning

Run on KVM #14

Closed tgross closed 2 years ago

tgross commented 7 years ago

This PR updates Jenkins so that it runs on KVM, which lets us build containers locally and then deploy them to Triton separately.
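
For reference, roughly what that split looks like in practice. The registry push and the Triton profile setup are assumptions on my part rather than something this PR spells out:

    # Build on the local/KVM Docker engine, push the images somewhere Triton
    # can pull from, then point the Docker client at Triton's Docker API.
    docker-compose build
    docker-compose push        # assumes images are tagged for a reachable registry

    eval "$(triton env)"       # switches DOCKER_HOST et al. to the Triton profile
    docker-compose up -d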

cc @misterbisson @jasonpincin and also @charandas who's been asking questions about how this repo should work.

charandas commented 7 years ago

Hey @tgross: I tried to run with these changes, and here are a few observations:

On ./manage.sh provision:

  1. public ssh keys under builder/keys_public/ seem to be required.
  2. builder/.bash_profile seems to be required also. I can make a blank file, and it moves on.
  3. git@github.com:joyent/product-automation.git is being cloned into /opt/jenkins, but is a private repo.

Let me know if I can help with any testing.
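
A sketch of the workaround I'd expect for observations 1 and 2, assuming the provision step only needs those paths to exist (which key you drop in is up to you):

    mkdir -p builder/keys_public
    cp ~/.ssh/id_rsa.pub builder/keys_public/
    touch builder/.bash_profile    # a blank file is enough to move past it
    ./manage.sh provision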

charandas commented 7 years ago

Also, it's mentioned here, which seems correct:

A checkout of this repo can be found at /opt/jenkins

The Ansible vm role has this, which jibes:

    - name: Build initial docker containers
      command: docker-compose build
      args:
        chdir: /opt/jenkins
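
The by-hand equivalent of that task on the KVM host, assuming the checkout really does land at /opt/jenkins:

    cd /opt/jenkins
    docker-compose build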

I will try with this repo instead of joyent/product-automation and let you know.

tgross commented 7 years ago
  1. public ssh keys under builder/keys_public/ seem to be required.

Yeah, I should have removed that bit.

  2. builder/.bash_profile seems to be required also. I can make a blank file, and it moves on.

Oops, that didn't get copied into this repo. But it's mostly opinionated helper stuff and not really relevant to this work... I should remove that bit.

  3. git@github.com:joyent/product-automation.git is being cloned into /opt/jenkins, but is a private repo.

I missed it with my sed... that should be this repo. Will fix.

tgross commented 7 years ago

I've fixed those, @charandas.

charandas commented 7 years ago

@tgross That's great, but I am running into a new error (similar to here) when the nginx container starts. It keeps retrying without success.

2017/01/24 19:30:12 Unable to parse services: None of the interface specifications were able to match
Specifications: [{eth0 eth0 %!s(bool=false)} {net1 net1 %!s(bool=false)}]
Interfaces IPs: [docker0:172.17.0.1 docker0:fe80::42:4bff:fe78:8d3a lo:::1 lo:127.0.0.1 net0:72.2.115.81 net0:fe80::92b8:d0ff:fefd:ca2f]

docker run -it 0x74696d/gh181
{Index:1 MTU:65536 Name:lo HardwareAddr: Flags:up|loopback}
{Index:28 MTU:1500 Name:eth0 HardwareAddr:02:42:ac:11:00:02 Flags:up|broadcast|multicast}

Looking at the nginx container's ContainerPilot config, I cannot fully understand what's not being met. docker network ls shows the following:

docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
1428410b6eb3        bridge              bridge              local
1edafec735ee        host                host                local
bdeaad9b00de        none                null                local

which seems to agree with the docker.service included in the repo, in that there isn't an overlay network. My guess is that overlay networks are unnecessary in this case, as the cluster is just one node; correct me if you find that wrong. The description in the vm role confuses me a bit, since it mentions the overlay file:

    - name: Add Docker service overlay file
      copy:
        src: docker.service
        dest: /etc/systemd/system/docker.service

tgross commented 7 years ago

My guess with overlay networks would be that they are unnecessary in this case

Right. Not only are they unnecessary, we're using host networking in the Compose file, so Docker's networking doesn't come into play at all here. You're deploying on Triton? What does ifconfig show on the KVM host?
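
For illustration, a minimal sketch of what host networking looks like in a Compose file; the service name and image below are stand-ins, not this repo's actual docker-compose.yml:

    # With network_mode: host the container shares the KVM host's interfaces
    # directly, so the docker0 bridge (and any overlay network) never comes
    # into play.
    cat > docker-compose.example.yml <<'EOF'
    version: '2.1'
    services:
      nginx:
        image: autopilotpattern/nginx
        network_mode: host
    EOF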

The description in the vm role confuses me a bit since it mentions the overlay file.

That's referring to a "systemd overlay" in this case -- this is the particularly goofy way that systemd works (or at least how it works on Debian-based systems) where there's a service file in /etc/defaults for the service and then you can update those defaults with an overlay version of the service file that overrides select items. It doesn't have anything to do with the Docker networking.
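
The Ansible task shown earlier appears to use the first of the two standard override mechanisms; the paths below are the usual systemd defaults, not something taken from this repo:

    # (1) a full unit in /etc/systemd/system/ shadows the packaged unit entirely
    sudo cp docker.service /etc/systemd/system/docker.service

    # (2) a drop-in overrides only selected directives of the packaged unit
    #     (LimitNOFILE is just an arbitrary example)
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nLimitNOFILE=1048576\n' | \
        sudo tee /etc/systemd/system/docker.service.d/override.conf

    # either way, systemd has to re-read unit files before restarting
    sudo systemctl daemon-reload
    sudo systemctl restart docker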

charandas commented 7 years ago

@tgross Oh ok. That helps. ifconfig shows this for the KVM VM:

docker0   Link encap:Ethernet  HWaddr 02:42:e3:68:7f:be
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:e3ff:fe68:7fbe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:474 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2696 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28799 (28.7 KB)  TX bytes:12370248 (12.3 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11840 (11.8 KB)  TX bytes:11840 (11.8 KB)

net0      Link encap:Ethernet  HWaddr 90:b8:d0:c1:53:59
          inet addr:72.2.119.203  Bcast:72.2.119.255  Mask:255.255.254.0
          inet6 addr: fe80::92b8:d0ff:fec1:5359/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:315059 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28399 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:454220655 (454.2 MB)  TX bytes:2139517 (2.1 MB)

vethc907a47 Link encap:Ethernet  HWaddr f2:e1:5d:25:84:4c
          inet6 addr: fe80::f0e1:5dff:fe25:844c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:474 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2703 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:35435 (35.4 KB)  TX bytes:12370826 (12.3 MB)

charandas commented 7 years ago

Just spawned a new VM. The logs below show the correct IP:


2017/01/24 20:29:14 Unable to parse services: None of the interface specifications were able to match
Specifications: [{eth0 eth0 %!s(bool=false)} {net1 net1 %!s(bool=false)}]
Interfaces IPs: [docker0:172.17.0.1 docker0:fe80::42:e3ff:fe68:7fbe lo:::1 lo:127.0.0.1 net0:72.2.119.203 net0:fe80::92b8:d0ff:fec1:5359]
2017/01/24 20:29:14 Unable to parse services: None of the interface specifications were able to match
Specifications: [{eth0 eth0 %!s(bool=false)} {net1 net1 %!s(bool=false)}]
Interfaces IPs: [docker0:172.17.0.1 docker0:fe80::42:e3ff:fe68:7fbe lo:::1 lo:127.0.0.1 net0:72.2.119.203 net0:fe80::92b8:d0ff:fec1:5359]
2017/01/24 20:29:15 Unable to parse services: None of the interface specifications were able to match
Specifications: [{eth0 eth0 %!s(bool=false)} {net1 net1 %!s(bool=false)}]
Interfaces IPs: [docker0:172.17.0.1 docker0:fe80::42:e3ff:fe68:7fbe lo:::1 lo:127.0.0.1 net0:72.2.119.203 net0:fe80::92b8:d0ff:fec1:5359]

tgross commented 7 years ago

The _jenkins_create function in the manage.sh script should be attaching both the public network and the private network, so you should have NICs at both net0 and net1. What's the output of triton network ls -l in your Triton account?
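
Roughly the shape of the call _jenkins_create should be making; the image and package names here are placeholders, not taken from manage.sh:

    # Attach both networks so the instance comes up with net0 (public) and
    # net1 (private). A single comma-separated --network also works.
    triton instance create \
        --name=jenkins \
        --network="$public" \
        --network="$private" \
        ubuntu-certified-16.04 k4-highcpu-kvm-3.75G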

charandas commented 7 years ago

@tgross So it looks like net1 is missing:

$ triton network ls -l
ID                                    NAME                SUBNET            GATEWAY        FABRIC  VLAN  PUBLIC
5983940e-58a5-4543-b732-c689b1fe4c08  Joyent-SDC-Private  -                 -              -       -     false
9ec60129-9034-47b4-b111-3026f9b1a10f  Joyent-SDC-Public   -                 -              -       -     true
d5933154-4231-4b29-825f-d702f5712ed8  My-Fabric-Network   192.168.128.0/22  192.168.128.1  true    2     false

charandas commented 7 years ago

The default awk extract gives nothing, it turns out. I can tweak it to match Joyent-SDC-Private:

private=$(triton network ls -l | awk -F' +' '/Joyent-SDC-Private/{print $1}')
public=$(triton network ls -l | awk -F' +' '/Joyent-SDC-Public/{print $1}')

tgross commented 7 years ago

Ah. I think the account I'm testing our own Jenkins in has default but not My-Fabric-Network, which is what you want for the private network, I think. I just checked my own account and I have My-Fabric-Network and not default as well. Maybe that test account has an older setup? I'll fix it here, though.
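
An untested tweak based on the triton network ls -l output above: key off the FABRIC and PUBLIC columns instead of hard-coding network names, so the extract works whether the fabric network is named default or My-Fabric-Network:

    private=$(triton network ls -l | awk -F' +' '$5 == "true" {print $1; exit}')
    public=$(triton network ls -l | awk -F' +' '$7 == "true" {print $1; exit}')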

charandas commented 7 years ago

Oh, it looks like you answered my question. I used the wrong network again! I'm curious how the Consul Docker container knows which IPs are private vs. public.