bradfitz opened 6 years ago
More background on the dfly users list: http://lists.dragonflybsd.org/pipermail/users/2017-December/313731.html
In that thread, @rickard-von-essen says:
I have a working packer build of DragonFly BSD https://github.com/boxcutter/bsd.
The most interesting parts are the boot_command https://github.com/boxcutter/bsd/blob/master/dragonflybsd.json#L5 and the actual installer script https://github.com/boxcutter/bsd/blob/master/http/install.sh.dfly
/cc @dmitshur
Update: I just ran Dragonfly (5.2.2) at home on QEMU/KVM with virtio-scsi and virtio net and it works fine.
So it should work fine on GCE, of course (which we already heard).
At this point I'm thinking we should just do this builder "by hand" for now, with a readme file with notes. I'll prepare the image by hand, then shut it down and copy its disk to a GCE image (uploading it as a sparse tarball).
We can automate it with expect or whatnot later. Perfect is the enemy of good, etc.
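For concreteness, a sketch of that by-hand flow; the disk, bucket, and image names here are illustrative, not the ones actually used:

qemu-img convert -O raw dragonfly.qcow2 disk.raw    # GCE expects a raw disk named disk.raw
tar -Szcf dragonfly-image.tar.gz disk.raw           # -S keeps the tarball sparse
gsutil cp dragonfly-image.tar.gz gs://my-build-images/
gcloud compute images create dragonfly-5-2-2 --source-uri gs://my-build-images/dragonfly-image.tar.gz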
I shut down my KVM/QEMU instance, copied its disk to a new GCE image, and created a GCE VM. It kernel panics on boot (over serial) with:
panic() at panic+0x236 0xffffffff805f8666
panic() at panic+0x236 0xffffffff805f8666
vfs_mountroot() at vfs_mountroot+0xfe 0xffffffff80672c7e
mi_startup() at mi_startup+0x84 0xffffffff805c2a64
Debugger("panic")
CPU0 stopping CPUs: 0x0000000e
stopped
Stopped at Debugger+0x7c: movb $0,0xe67a49(%rip)
db>
So, uh, not as easy as I'd hoped.
Perhaps if we already have to do the whole double virtualization thing for Solaris (https://github.com/golang/go/issues/15581#issuecomment-435431402) anyway, we could just reuse that mechanism to run Dragonfly in qemu/kvm under GCE.
I tried working on this earlier (back in 2018-02) and had it scripted to make the image automatically, but I hit the same issue: it would work fine on my machines with vanilla QEMU, including with the disk accessible on DFly through DragonFly's vtscsi(4), using the QEMU configuration magic described at http://wiki.netbsd.org/tutorials/how_to_setup_virtio_scsi_with_qemu/, yet it still wouldn't work on GCE with GCE's virtio_scsi. Is there any info on how GCE's virtio_scsi differs from QEMU's virtio_scsi?
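For reference, the wiki's virtio-scsi setup boils down to a QEMU invocation along these lines (a sketch; the disk path, memory size, and machine options are placeholders):

qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=dfly.img,if=none,format=raw,id=hd0 \
    -device scsi-hd,drive=hd0,bus=scsi0.0 \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -nographic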
I've also tried running DragonFly BSD side by side with FreeBSD with CAMDEBUG, but it didn't seem to reveal anything obvious, although the underlying CAM logic does seem to be quite different, so it's probably the one to blame. I didn't run out of ideas, but I did run out of time back in February, and recently my GCE credits ran out as well.
Nested virtualisation sounds interesting. Does it require Linux on GCE, or would FreeBSD also work?
@cnst do you have instructions on how you tried DragonFly on GCE?
Change https://golang.org/cl/162959 mentions this issue: dashboard, buildlet: add a disabled builder with nested virt, for testing
I've tried myself and it seems DragonFly is unable to find the disk. We're working on it already: https://bugs.dragonflybsd.org/issues/3175
Change https://golang.org/cl/163057 mentions this issue: buildlet: change image name for COS-with-vmx buildlet
Change https://golang.org/cl/163301 mentions this issue: env/linux-x86-vmx: add new Debian host that's like Container-Optimized OS + vmx
Change https://golang.org/cl/202478 mentions this issue: dashboard: update Dragonfly tip policy for ABI change, add release builder
@tuxillo, looks like no progress on that bug, eh?
Thanks for the reminder, I kind of forgot about this one. It's been a tough one anyway. I'll check with the team again next week to see if we can do something.
@bradfitz I have some time to work on it again, but my credits expired, and trying to sign up for a new account required some sort of extra verification. Is there a way to get the credits again to work on this? Also, is there any way to reproduce this bug outside of the Google environment? As per my 2018 comments, our driver works just fine in regular KVM using NetBSD's instructions for activating the codepath.
GCP has a Free Tier these days: https://cloud.google.com/free/
Compute Engine:
1 f1-micro instance per month (US regions only — excluding Northern Virginia [us-east4])
30 GB-months HDD
5 GB-months snapshot in select regions
1 GB network egress from North America to all region destinations per month (excluding China and Australia)
There's no way to reproduce it locally. GCP uses KVM but doesn't use QEMU and its implementation of virtio-scsi etc isn't open source.
@bradfitz How long does it take to recompile the kernel on this free instance? A few hours? It was already taking too long even on non-micro GCP instances compared to 15-year-old hardware.
I think it'd be great if there was a way to reproduce this problem locally, because our virtio-scsi drivers work just fine with anything but the proprietary GCP implementation.
Would it be helpful to provide automation for any other cloud provider?
@cnst, I didn't imagine you'd be using the f1-micro instance for compilations. I thought you'd use your normal development environment to build, then use the f1-micro to test-boot the images on GCE until it worked.
@cnst what I did in my tests was to download the latest IMG, null-mount it, build the kernel with modifications, and install it into the mountpoint. Then I used gcloud/gsutil to upload the img and create the disk and the instance. You can retrieve the console output with gcloud, IIRC.
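Spelled out as commands run from a DragonFly machine, that workflow is roughly the following (a sketch only: the mirror URL and kernel config are assumptions, and an image with a HAMMER root would need mount_hammer rather than plain mount):

fetch https://mirror-master.dragonflybsd.org/iso-images/dfly-x86_64-latest_REL.img.bz2
bunzip2 dfly-x86_64-latest_REL.img.bz2
vnconfig vn0 dfly-x86_64-latest_REL.img        # attach the image to a vnode disk
mount /dev/vn0s1a /mnt                         # mount the image's root filesystem
cd /usr/src && make buildkernel KERNCONF=X86_64_GENERIC
make installkernel KERNCONF=X86_64_GENERIC DESTDIR=/mnt
umount /mnt && vnconfig -u vn0
# pack and upload as a sparse tarball as before, boot it, then read the console:
gcloud compute instances get-serial-port-output dfly-test-vm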
because our virtio-scsi drivers work just fine with anything but the proprietary GCP implementation.
FWIW, Go's builders run a number of other operating systems on GCP, all of which work with Google's virtio-scsi implementation.
Either Dragonfly has a bug, or all those operating systems have worked around bugs in Google's implementation. Or both.
Just to give a quick update, we've taken some steps in the right direction toward fixing this. At least the VM now sees the disk, but further changes and testing are needed. I'll update this with more information as soon as we have it.
da0 at vtscsi0 bus 0 target 1 lun 0
da0: <Google PersistentDisk 1> Fixed Direct Access SCSI-6 device
da0: Serial Number
da0: 300.000MB/s transfers
da0: Command Queueing Enabled
da0: 2048MB (4194304 512 byte sectors: 255H 63S/T 261C)
Great!
It is fixed now, see https://github.com/DragonFlyBSD/DragonFlyBSD/commit/f0ee34376aa227bbd17f5ccbc846ac30c6177693
We can boot DragonFly BSD in GCE. Now we'd like to create two official images, one for the "master" branch (tip) and one for the release branch (currently 5.8). Do you guys know how we should proceed?
@dmitshur I'm going to try to prepare the make.bash for dragonfly. I've tested the freebsd one to see if it builds, but 'cmd/upload' requires credentials; how do I get them (for dfly in this case)? Or is this something you just have on your side and I only have to take care of the script?
@tuxillo you need to register for a free tier account, create a GCS bucket to upload to, and then get some credentials with the gcloud tool (also called the Cloud SDK). See:
https://cloud.google.com/free
https://cloud.google.com/sdk/docs/install
https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
https://cloud.google.com/storage/docs/creating-buckets
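Condensed into commands (the bucket name is illustrative):

gcloud auth application-default login      # obtain application-default credentials
gsutil mb gs://my-dfly-uploads/            # create a bucket to upload into
gsutil cp dfly-image.tar.gz gs://my-dfly-uploads/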
Thanks @rickard-von-essen. I was expecting that it would be uploaded to some "special" bucket.
Or is this something you just have on your side and I only have to take care of the script?
@tuxillo In general, yes, that's right. Only the release team has the permissions to upload to the Go builder project buckets. We can do that if you prepare the script. Also see https://golang.org/wiki/DashboardBuilders#how-to-set-up-a-builder.
This issue is high level and better seen as an umbrella issue. It would help to create an individual tracking issue for each distinct builder being added, so we can coordinate and track work better. Thanks!
I plan to look into this next week.
I have a VM image, it runs on GCE. The only problem is that networking is pretty broken. The short version is that programs that want to send TCP packets to 169.254.169.254 usually can do that just fine, while programs that want to send UDP packets seem to not know where to send them and spend their time sending ARP requests instead.
Each time I boot the VM I get some different options for connectivity. There are a variety of failure modes (no success modes) but the most common is that all TCP traffic to 169.254.169.254 (the local metadata server, which serves DHCP, DNS, and HTTP) is fine, but UDP traffic to 169.254.169.254 can't even be generated. The image runs fine in QEMU on my local machine, of course (talking to other DNS servers, not 169.254.169.254).
DHCP always works - the interface gets an address. I can even seem to renew it. Of course DHCP runs over broadcast. TCP works fine. UDP seems to send ARP requests instead of the actual packets you'd expect. It is as though TCP is seeing different ARP entries than UDP.
The ARP cache looks odd. It shows two entries for 10.128.0.1:
root@buildlet:~ # arp -an
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
root@buildlet:~ #
Here are two working TCP-based commands (with tcpdump -nle in the background):
root@buildlet:~ # curl http://169.254.169.254/
16:29:29.629488 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 78: 10.128.0.14.4473 > 169.254.169.254.80: Flags [S], seq 2146140974, win 57344, options [mss 1420,nop,wscale 5,nop,nop,sackOK,nop,nop,TS val 9858 ecr 0], length 0
16:29:29.630093 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 62: 169.254.169.254.80 > 10.128.0.14.4473: Flags [S.], seq 3517403959, ack 2146140975, win 65535, options [mss 1420,eol], length 0
16:29:29.630166 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4473 > 169.254.169.254.80: Flags [.], ack 1, win 58220, length 0
16:29:29.630269 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 133: 10.128.0.14.4473 > 169.254.169.254.80: Flags [P.], seq 1:80, ack 1, win 58220, length 79: HTTP: GET / HTTP/1.1
16:29:29.630389 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.80 > 10.128.0.14.4473: Flags [.], ack 80, win 65456, length 0
16:29:29.630445 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.80 > 10.128.0.14.4473: Flags [.], ack 80, win 65535, length 0
16:29:29.632532 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 286: 169.254.169.254.80 > 10.128.0.14.4473: Flags [P.], seq 1:233, ack 80, win 65535, length 232: HTTP: HTTP/1.1 200 OK
computeMetadata/
16:29:29.632879 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4473 > 169.254.169.254.80: Flags [F.], seq 80, ack 233, win 58220, length 0
16:29:29.633014 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.80 > 10.128.0.14.4473: Flags [.], ack 81, win 65535, length 0
16:29:29.633092 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.80 > 10.128.0.14.4473: Flags [F.], seq 233, ack 81, win 65535, length 0
16:29:29.633106 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4473 > 169.254.169.254.80: Flags [.], ack 234, win 58220, length 0
root@buildlet:~ #
root@buildlet:~ # host -T swtch.com
16:29:49.352315 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 78: 10.128.0.14.1216 > 169.254.169.254.53: Flags [S], seq 229167172, win 57344, options [mss 1420,nop,wscale 5,nop,nop,sackOK,nop,nop,TS val 11831 ecr 0], length 0
16:29:49.353032 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 62: 169.254.169.254.53 > 10.128.0.14.1216: Flags [S.], seq 4020397514, ack 229167173, win 65535, options [mss 1420,eol], length 0
16:29:49.353094 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.1216 > 169.254.169.254.53: Flags [.], ack 1, win 58220, length 0
16:29:49.353335 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 83: 10.128.0.14.1216 > 169.254.169.254.53: Flags [P.], seq 1:30, ack 1, win 58220, length 29 15711+ A? swtch.com. (27)
16:29:49.353404 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.1216: Flags [.], ack 30, win 65506, length 0
16:29:49.353436 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.1216: Flags [.], ack 30, win 65535, length 0
16:29:49.472968 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 147: 169.254.169.254.53 > 10.128.0.14.1216: Flags [P.], seq 1:94, ack 30, win 65535, length 93 15711 4/0/0 A 216.239.34.21, A 216.239.38.21, A 216.239.36.21, A 216.239.32.21 (91)
swtch.com has address 216.239.34.21
swtch.com has address 216.239.38.21
swtch.com has address 216.239.36.21
swtch.com has address 216.239.32.21
16:29:49.474367 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.1216 > 169.254.169.254.53: Flags [F.], seq 30, ack 94, win 58220, length 0
16:29:49.474518 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.1216: Flags [.], ack 31, win 65535, length 0
16:29:49.474580 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 78: 10.128.0.14.4944 > 169.254.169.254.53: Flags [S], seq 2768536126, win 57344, options [mss 1420,nop,wscale 5,nop,nop,sackOK,nop,nop,TS val 11843 ecr 0], length 0
16:29:49.474645 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.1216: Flags [F.], seq 94, ack 31, win 65535, length 0
16:29:49.474694 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.1216 > 169.254.169.254.53: Flags [.], ack 95, win 58220, length 0
16:29:49.474837 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 62: 169.254.169.254.53 > 10.128.0.14.4944: Flags [S.], seq 1364449965, ack 2768536127, win 65535, options [mss 1420,eol], length 0
16:29:49.474895 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4944 > 169.254.169.254.53: Flags [.], ack 1, win 58220, length 0
16:29:49.475109 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 83: 10.128.0.14.4944 > 169.254.169.254.53: Flags [P.], seq 1:30, ack 1, win 58220, length 29 43238+ AAAA? swtch.com. (27)
16:29:49.475220 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4944: Flags [.], ack 30, win 65506, length 0
16:29:49.475249 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4944: Flags [.], ack 30, win 65535, length 0swtch.com has IPv6 address 2001:4860:4802:36::15
swtch.com has IPv6 address 2001:4860:4802:32::15
swtch.com has IPv6 address 2001:4860:4802:38::15
swtch.com has IPv6 address 2001:4860:4802:34::15
16:29:49.594082 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 195: 169.254.169.254.53 > 10.128.0.14.4944: Flags [P.], seq 1:142, ack 30, win 65535, length 141 43238 4/0/0 AAAA 2001:4860:4802:36::15, AAAA 2001:4860:4802:32::15, AAAA 2001:4860:4802:38::15, AAAA 2001:4860:4802:34::15 (139)
16:29:49.594582 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 78: 10.128.0.14.2704 > 169.254.169.254.53: Flags [S], seq 94430372, win 57344, options [mss 1420,nop,wscale 5,nop,nop,sackOK,nop,nop,TS val 11855 ecr 0], length 0
16:29:49.594584 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4944 > 169.254.169.254.53: Flags [F.], seq 30, ack 142, win 58220, length 0
16:29:49.594903 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4944: Flags [.], ack 31, win 65535, length 0
16:29:49.594955 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 62: 169.254.169.254.53 > 10.128.0.14.2704: Flags [S.], seq 240584923, ack 94430373, win 65535, options [mss 1420,eol], length 0
16:29:49.594992 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.2704 > 169.254.169.254.53: Flags [.], ack 1, win 58220, length 0
16:29:49.595018 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4944: Flags [F.], seq 142, ack 31, win 65535, length 0
16:29:49.595039 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4944 > 169.254.169.254.53: Flags [.], ack 143, win 58220, length 0
16:29:49.601114 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 83: 10.128.0.14.2704 > 169.254.169.254.53: Flags [P.], seq 1:30, ack 1, win 58220, length 29 38545+ MX? swtch.comswtch.com mail is handled by 10 ALT2.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 10 ALT3.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 5 ALT1.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 10 ALT4.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 1 ASPMX.L.GOOGLE.com.
. (27)
16:29:49.601296 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.2704: Flags [.], ack 30, win 65506, length 0
16:29:49.601310 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.2704: Flags [.], ack 30, win 65535, length 0
16:29:49.727184 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 198: 169.254.169.254.53 > 10.128.0.14.2704: Flags [P.], seq 1:145, ack 30, win 65535, length 144 38545 5/0/0 MX ALT2.ASPMX.L.GOOGLE.com. 10, MX ALT3.ASPMX.L.GOOGLE.com. 10, MX ALT1.ASPMX.L.GOOGLE.com. 5, MX ALT4.ASPMX.L.GOOGLE.com. 10, MX ASPMX.L.GOOGLE.com. 1 (142)
16:29:49.727715 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.2704 > 169.254.169.254.53: Flags [F.], seq 30, ack 145, win 58220, length 0
16:29:49.727898 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.2704: Flags [.], ack 31, win 65535, length 0
16:29:49.728050 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.2704: Flags [F.], seq 145, ack 31, win 65535, length 0
16:29:49.728089 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.2704 > 169.254.169.254.53: Flags [.], ack 146, win 58220, length 0
root@buildlet:~ #
And then here's the same host command over UDP:
root@buildlet:~ # host swtch.com
16:29:52.679379 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:29:52.679700 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
16:29:57.681297 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:29:57.681538 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
^Croot@buildlet:~ # 16:30:04.722116 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.14 tell 10.128.0.1, length 28
16:30:04.722788 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype ARP (0x0806), length 42: Reply 10.128.0.14 is-at 42:01:0a:80:00:0e, length 28
Note that the ARP requests are both unnecessary (TCP has no trouble, and there's an entry in the ARP tables already) and answered. Yet the answers seem to be in vain.
If I clear the ARP tables, TCP stops briefly, causing ARP requests that seem to be ignored, but then it mysteriously recovers, seemingly not from any of the ARP traffic:
root@buildlet:~ # arp -a -d
10.128.0.1 (10.128.0.1) deleted
10.128.0.1 (10.128.0.1) cleared
root@buildlet:~ # host -T swtch.com
16:38:21.333466 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:38:21.333894 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
16:38:22.330884 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:38:22.331118 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
16:38:23.490877 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:38:23.491202 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
16:38:25.650897 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:38:25.651221 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
16:38:29.810897 42:01:0a:80:00:0e > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.128.0.1 tell 10.128.0.14, length 28
16:38:29.811107 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype ARP (0x0806), length 42: Reply 10.128.0.1 is-at 42:01:0a:80:00:01, length 28
;; Connection to 169.254.169.254#53(169.254.169.254) for swtch.com failed: timed out.
16:38:31.349845 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 78: 10.128.0.14.4880 > 169.254.169.254.53: Flags [S], seq 1034021509, win 57344, options [mss 1460,nop,wscale 5,nop,nop,sackOK,nop,nop,TS val 24669 ecr 0], length 0
16:38:31.350504 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 62: 169.254.169.254.53 > 10.128.0.14.4880: Flags [S.], seq 2990675563, ack 1034021510, win 65535, options [mss 1420,eol], length 0
16:38:31.350584 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 54: 10.128.0.14.4880 > 169.254.169.254.53: Flags [.], ack 1, win 58220, length 0
16:38:31.350714 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 83: 10.128.0.14.4880 > 169.254.169.254.53: Flags [P.], seq 1:30, ack 1, win 58220, length 29 56470+ A? swtch.com. (27)
16:38:31.350793 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4880: Flags [.], ack 30, win 65506, length 0
16:38:31.350826 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 54: 169.254.169.254.53 > 10.128.0.14.4880: Flags [.], ack 30, win 65535, length 0
16:38:31.455801 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 147: 169.254.169.254.53 > 10.128.0.14.4880: Flags [P.], seq 1:94, ack 30, win 65535, length 93 56470 4/0/0 A 216.239.36.21, A 216.239.38.21, A 216.239.34.21, A 216.239.32.21 (91)
swtch.com has address 216.239.36.21
swtch.com has address 216.239.38.21
swtch.com has address 216.239.34.21
swtch.com has address 216.239.32.21
...
After that dance the ARP cache looks the same as ever:
root@buildlet:~ # arp -an
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
There was one time, which I didn't capture, where I saw host -T working but curl to port 53 not sending any packets. Both were trying to connect to the same IP address and port over TCP and only one program could do it! I rebooted the machine and never got that behavior again, though.
My understanding is that Dragonfly replicates the ARP tables across all CPUs, but they all have the same two entries. A couple are missing the 'permanent' bits:
[root@buildlet ~]# for i in $(seq 0 15); do echo cpu$i; arp -anc$i; done
cpu0
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu1
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu2
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu3
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu4
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 published [ethernet]
cpu5
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu6
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu7
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu8
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 published [ethernet]
cpu9
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu10
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu11
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 published [ethernet]
cpu12
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu13
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu14
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
cpu15
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
None of this makes any sense. Some more diagnostics:
[root@buildlet ~]# ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=2a<TXCSUM,VLAN_MTU,JUMBO_MTU>
ether 42:01:0a:80:00:0e
inet6 fe80::4001:aff:fe80:e%vtnet0 prefixlen 64 scopeid 0x1
inet 10.128.0.14 netmask 0xffffffff broadcast 10.128.0.14
media: Ethernet 1000baseT <full-duplex>
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=43<RXCSUM,TXCSUM,RSS>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
groups: lo
[root@buildlet ~]# netstat -rn
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 10.128.0.1 UGSc 1 0 vtnet0
10.128.0.1/32 vtnet0 ULSc 2 0 vtnet0
10.128.0.14/32 link#1 UC 0 0 vtnet0
127.0.0.1 127.0.0.1 UH 0 0 lo0
Internet6:
Destination Gateway Flags Netif Expire
::1 ::1 UH lo0
fe80::%vtnet0/64 link#1 UC vtnet0
fe80::4001:aff:fe80:e%vtnet0 42:01:0a:80:00:0e UHL lo0
fe80::%lo0/64 fe80::1%lo0 Uc lo0
fe80::1%lo0 link#2 UHL lo0
ff01::/32 ::1 U lo0
ff02::%vtnet0/32 link#1 UC vtnet0
ff02::%lo0/32 ::1 UC lo0
[root@buildlet ~]# route -nv show
Routing tables
Internet:
Destination Gateway Flags
default 10.128.0.1 UG
10.128.0.1 42:01:0a:80:00:01 UH
10.128.0.1 42:01:0a U
10.128.0.14 link#1 U
127.0.0.1 127.0.0.1 UH
169.254.169.254 10.128.0.1 UGH
Internet6:
Destination Gateway Flags
::1 ::1 UH
fe80::%vtnet0 link#1 U
fe80::4001:aff:fe80:e%vtnet0 42:01:0a:80:00:0e UH
fe80::%lo0 fe80::1%lo0 U
fe80::1%lo0 link#2 UH
ff01:: ::1 U
ff02::%vtnet0 link#1 U
ff02::%lo0 ::1 U
[root@buildlet ~]#
Not directly related, but I also discovered an easy way to panic the kernel:
root@buildlet:~ # ifconfig vtnet0 mtu 16384
panic: overflowed mbuf 0xfffff8037c5bec00
cpuid = 8
Trace beginning at frame 0xfffff8037cf9c6e8
m_free() at m_free+0x351 0xffffffff806be5c1
m_free() at m_free+0x351 0xffffffff806be5c1
m_freem() at m_freem+0x15 0xffffffff806be845
vtnet_newbuf() at vtnet_newbuf+0x4b 0xffffffff80a71e9b
vtnet_init() at vtnet_init+0x108 0xffffffff80a73848
vtnet_ioctl() at vtnet_ioctl+0x213 0xffffffff80a73d23
Debugger("panic")
CPU8 stopping CPUs: 0x0000feff
stopped
Stopped at Debugger+0x7c: movb $0,0xbcc819(%rip)
db>
db>
Did you try adding a static entry after purging the discovered ones?
Also, if there is any form of IPv6, how does that act? Have you tried pinging ff02::1? ;)
Does putting the iface in promisc mode help?
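As commands, those three suggestions would be roughly the following (interface and gateway MAC taken from the output above):

arp -s 10.128.0.1 42:01:0a:80:00:01    # pin a static entry for the gateway
ping6 ff02::1%vtnet0                   # ping the all-nodes link-local multicast group
ifconfig vtnet0 promisc                # put the interface in promiscuous mode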
Thanks for the suggestions. Static ARP didn't help before, I hadn't tried IPv6, and tcpdump reported at startup that it cannot put the interface in promiscuous mode at all.
Oddly, in the hour or so I have left the VM sitting here, it has fixed itself for UDP. This is unfortunate in the sense that I don't know what changed, which won't help the next time I create a VM, but it's working at the moment. I can't see anything different (except obviously the lack of ARP messages and the presence of UDP traffic):
[root@buildlet ~]# host swtch.com
18:00:55.963586 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 69: 10.128.0.14.2112 > 169.254.169.254.53: 38747+ A? swtch.com. (27)
18:00:56.090490 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 133: 169.254.169.254.53 > 10.128.0.14.2112: 38747 4/0/0 A 216.239.38.21, A 216.239.32.21, A 216.239.36.21, A 216.239.34.21 (91)
swtch.com has address 216.239.38.21
swtch.com has address 216.239.32.21
swtch.com has address 216.239.36.21
swtch.com has address 216.239.34.21
18:00:56.091320 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 69: 10.128.0.14.2720 > 169.254.169.254.53: 21334+ AAAA? swtch.com. (27)
18:00:56.212072 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 181: 169.254.169.254.53 > 10.128.0.14.2720: 21334 4/0/0 AAAA 2001:4860:4802:36::15, AAAA 2001:4860:4802:32::15, AAAA 2001:4860:4802:38::15, AAAA 2001:4860:4802:34::15 (139)
swtch.com has IPv6 address 2001:4860:4802:36::15
swtch.com has IPv6 address 2001:4860:4802:32::15
swtch.com has IPv6 address 2001:4860:4802:38::15
swtch.com has IPv6 address 2001:4860:4802:34::15
18:00:56.212775 42:01:0a:80:00:0e > 42:01:0a:80:00:01, ethertype IPv4 (0x0800), length 69: 10.128.0.14.1056 > 169.254.169.254.53: 28769+ MX? swtch.com. (27)
18:00:56.347300 42:01:0a:80:00:01 > 42:01:0a:80:00:0e, ethertype IPv4 (0x0800), length 184: 169.254.169.254.53 > 10.128.0.14.1056: 28769 5/0/0 MX ALT4.ASPMX.L.GOOGLE.com. 10, MX ALT1.ASPMX.L.GOOGLE.com. 5, MX ALT2.ASPMX.L.GOOGLE.com. 10, MX ASPMX.L.GOOGLE.com. 1, MX ALT3.ASPMX.L.GOOGLE.com. 10 (142)
swtch.com mail is handled by 10 ALT4.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 5 ALT1.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 10 ALT2.ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 1 ASPMX.L.GOOGLE.com.
swtch.com mail is handled by 10 ALT3.ASPMX.L.GOOGLE.com.
[root@buildlet ~]# arp -an
? (10.128.0.1) at 42:01:0a:80:00:01 on vtnet0 permanent [ethernet]
? (10.128.0.1) at (incomplete) on vtnet0 permanent published [ethernet]
[root@buildlet ~]# netstat -rn
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 10.128.0.1 UGSc 1 0 vtnet0
10.128.0.1/32 vtnet0 ULSc 2 0 vtnet0
10.128.0.14/32 link#1 UC 0 0 vtnet0
127.0.0.1 127.0.0.1 UH 0 0 lo0
Internet6:
Destination Gateway Flags Netif Expire
::1 ::1 UH lo0
fe80::%vtnet0/64 link#1 UC vtnet0
fe80::4001:aff:fe80:e%vtnet0 42:01:0a:80:00:0e UHL lo0
fe80::%lo0/64 fe80::1%lo0 Uc lo0
fe80::1%lo0 link#2 UHL lo0
ff01::/32 ::1 U lo0
ff02::%vtnet0/32 link#1 UC vtnet0
ff02::%lo0/32 ::1 UC lo0
[root@buildlet ~]# ifconfig -a
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=2a<TXCSUM,VLAN_MTU,JUMBO_MTU>
ether 42:01:0a:80:00:0e
inet6 fe80::4001:aff:fe80:e%vtnet0 prefixlen 64 scopeid 0x1
inet 10.128.0.14 netmask 0xffffffff broadcast 10.128.0.14
media: Ethernet 1000baseT <full-duplex>
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=43<RXCSUM,TXCSUM,RSS>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
groups: lo
[root@buildlet ~]# route -nv show
Routing tables
Internet:
Destination Gateway Flags
default 10.128.0.1 UG
10.128.0.1 42:01:0a:80:00:01 UH
10.128.0.1 42:01:0a U
10.128.0.14 link#1 U
127.0.0.1 127.0.0.1 UH
169.254.169.254 10.128.0.1 UGH
Internet6:
Destination Gateway Flags
::1 ::1 UH
fe80::%vtnet0 link#1 U
fe80::4001:aff:fe80:e%vtnet0 42:01:0a:80:00:0e UH
fe80::%lo0 fe80::1%lo0 U
fe80::1%lo0 link#2 UH
ff01:: ::1 U
ff02::%vtnet0 link#1 U
ff02::%lo0 ::1 U
[root@buildlet ~]#
Now that I notice it, the line in route -nv show that has a half-MAC address is a bit odd. But it was there when things were broken and remains there now that they are working.
There is no obvious explanation for what changed. The only traffic shown by the background tcpdump between an hour ago and when things were working just now is ARP requests from the router for the VM's IP address, and the VM replying, one round trip per minute like clockwork.
I started two more VMs. One was working at boot (first time!). The other came up in the "TCP is fine, UDP is broken" state.
The 'published' flag might indicate proxy ARP... that would be quite interesting, and might be the case in your environment: you could be in one VLAN while the gateway is actually in another VLAN.
I guess that your local box at least is not playing proxy_arp... but the remote one might...
@rsc, regarding <TXCSUM,VLAN_MTU,JUMBO_MTU>: TXCSUM alone, without RXCSUM, looks strange on the virtio NIC.
We've had both tx and rx hardware checksum offloading disabled on the FreeBSD builders for many years now:
https://github.com/golang/build/blob/d35cb804da1f71ec56603f818a96dd0b43e14da5/env/freebsd-amd64/loader.conf#L6
Disabling it is also recommended for pfSense (a FreeBSD-based firewall appliance).
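The linked line is presumably the standard vtnet loader tunable, i.e. something like:

hw.vtnet.csum_disable="1"    # disable vtnet hardware checksum offload at boot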
Thanks @paulzhol, I will see what effect that has. I've found that ifdown/ifup/dhclient vtnet0 seems to "correct" the problem, so another option I am trying is just doing that as needed (up to 10 times) before trying to download the buildlet.
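Roughly the shape of that workaround as a shell loop (a sketch; the UDP readiness probe via host mirrors the test used above):

for i in $(seq 1 10); do
    host -W 3 google.com >/dev/null 2>&1 && break    # UDP DNS working yet?
    ifconfig vtnet0 down
    ifconfig vtnet0 up
    dhclient vtnet0
done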
@rsc, the manpage mentions why RXCSUM is disabled. Maybe you can add an ifconfig -txcsum vtnet0 in your current flow instead of disabling it via the bootloader.
Disabling TXCSUM did not help, but thanks for the suggestion. I have left it disabled.
I just did 10 runs of all.bash. 3 came up OK the first time. 4 required one reset. 3 required two resets. So it looks like a reset has about a 50% chance of working. The buildlet script is willing to do up to 10 and then it powers down the machine. This should be good enough, if dissatisfying.
Another point to consider: cmd/buildlet lowers the MTU on FreeBSD/OpenBSD & Plan 9 (all GCE VMs?) to 1460:
https://github.com/golang/build/blob/4864e2e8a08906f74b4ee3a973596fd7a93e9273/cmd/buildlet/buildlet.go#L440-L448
Meanwhile your tcpdump shows mss 1420 in both the SYN and SYN+ACK of the first host -T flow (10.128.0.14.1216 > 169.254.169.254.53), but after you reset the ARP tables, the 10.128.0.14.4880 > 169.254.169.254.53 flow has mss 1460 in the SYN and mss 1420 in the returning SYN+ACK.
I don't really have a good story for how this could cause the small UDP packets to turn into those ARP requests, but it looks strange enough to mention.
Change https://go.dev/cl/419083 mentions this issue: dashboard: add new, unused dragonfly-amd64-622 builder
Change https://go.dev/cl/419081 mentions this issue: env/dragonfly-amd64: add scripts for building GCE VM image
Change https://go.dev/cl/419084 mentions this issue: dashboard: use dragonfly on GCE for dragonfly-amd64 builds
Glad to see progress on this issue!
I see there are problems with UDP and vtnet, but it's not clear to me how to reproduce them. Is there anything we, on the DragonFlyBSD side, should do or investigate?
I've also seen that you created a GCE image for 6.2.2; are you guys going to follow RELEASE only? And what do we do with the reverse builder?
@tuxillo, would you be willing to review https://go-review.googlesource.com/c/build/+/419081/ to see if it looks like it makes sense?
To answer your questions:
When the image boots on GCP - a completely standard build, a VM configured with just the Dragonfly install CD should be enough to reproduce - it just can't do any UDP traffic at all. UDP traffic triggers ARP requests for the gateway instead. So 'host -W 3 google.com' times out for example, but 'host -T -W 3 google.com' works fine. This is the state after bringing up vtnet0 at boot on something like half the times it boots. I don't understand what could possibly cause that failure mode, honestly. It could be Dragonfly or it could be something about the virtio network device on Google Cloud's side.
I used a standard release for reproducibility. Over at https://farmer.golang.org/builders we have a list of the builders for other systems and we typically have a few different release versions as needed for supportability. The idea is that we'd add a new builder for new releases and retire the old ones. Does that seem like a reasonable plan to you?
We haven't changed over from the reverse builder yet, but once we do I will post here. At that point you can retire the reverse builder, with our gratitude for keeping it running for so long.
Thanks!
@tuxillo, would you be willing to review https://go-review.googlesource.com/c/build/+/419081/ to see if it looks like it makes sense?
@rsc the patch looks good to me and it's far better than what I could provide, which was nothing :-) It also helps me understand the image creation process from your side.
To answer your questions:
When the image boots on GCP - a completely standard build, a VM configured with just the Dragonfly install CD should be enough to reproduce - it just can't do any UDP traffic at all. UDP traffic triggers ARP requests for the gateway instead. So 'host -W 3 google.com' times out for example, but 'host -T -W 3 google.com' works fine. This is the state after bringing up vtnet0 at boot on something like half the times it boots. I don't understand what could possibly cause that failure mode, honestly. It could be Dragonfly or it could be something about the virtio network device on Google Cloud's side.
I can see you're using "DHCP mtu 1460" when setting up the vtnet network interface, but I don't know why. We have two DHCP clients: dhclient, which comes from OpenBSD and is a bit outdated, and dhcpcd. We have known issues with dhclient in virtual environments (see https://bugs.dragonflybsd.org/issues/3317); not sure if this affects GCE VMs too.
Is there a way I can pick up the already-generated image and boot it myself in GCP so I can try? Or should I generate a new one myself? Also, I'd need to know the network configuration to use in GCP to get a setup as close as possible to the one you had.
I used a standard release for reproducibility. Over at https://farmer.golang.org/builders we have a list of the builders for other systems and we typically have a few different release versions as needed for supportability. The idea is that we'd add a new builder for new releases and retire the old ones. Does that seem like a reasonable plan to you?
Our release model is fairly typical: a point release is the stable version, e.g. RELEASE-6.2, which is then tagged for minors (.2, .3, whatever), and this is done twice a year.
Then we have our "master" branch, which is what you'd call "tip", I think; the difference is that most of the DFly developers run it, so it is normally pretty stable. Ideally, if you don't mind, under amd64 (we only support one arch atm) we'd have something like what the freebsd builder has, for example "6_2" and "BE" (bleeding edge) or tip, whatever you want to call it.
We haven't changed over from the reverse builder yet, but once we do I will post here. At that point you can retire the reverse builder, with our gratitude for keeping it running for so long.
Sure thing, thanks!
Thanks!
Not directly related, but I also discovered an easy way to panic the kernel:
root@buildlet:~ # ifconfig vtnet0 mtu 16384
panic: overflowed mbuf 0xfffff8037c5bec00
cpuid = 8
Trace beginning at frame 0xfffff8037cf9c6e8
m_free() at m_free+0x351 0xffffffff806be5c1
m_free() at m_free+0x351 0xffffffff806be5c1
m_freem() at m_freem+0x15 0xffffffff806be845
vtnet_newbuf() at vtnet_newbuf+0x4b 0xffffffff80a71e9b
vtnet_init() at vtnet_init+0x108 0xffffffff80a73848
vtnet_ioctl() at vtnet_ioctl+0x213 0xffffffff80a73d23
Debugger("panic")
CPU8 stopping CPUs: 0x0000feff
stopped
Stopped at Debugger+0x7c: movb $0,0xbcc819(%rip)
db>
db>
Thanks for reporting, created: https://bugs.dragonflybsd.org/issues/3320
I can see you're using "DHCP mtu 1460" when setting up the vtnet network interface, but I don't know why.
I tried that because FreeBSD was setting the smaller MTU as well. Not setting it didn't help.
We have two DHCP clients: dhclient, which comes from OpenBSD and is a bit outdated, and dhcpcd. We have known issues with dhclient in virtual environments (see https://bugs.dragonflybsd.org/issues/3317); not sure if this affects GCE VMs too.
Thanks for this tip. I will give dhcpcd a try.
Then we have our "master" branch, which is what you'd call "tip", I think; the difference is that most of the DFly developers run it, so it is normally pretty stable. Ideally, if you don't mind, under amd64 (we only support one arch atm) we'd have something like what the freebsd builder has, for example "6_2" and "BE" (bleeding edge) or tip, whatever you want to call it.
The only problem with bleeding-edge is that it means we have to keep rebuilding the image at regular intervals, which we could do, but it's a bit of a pain. It also means that results change when the builder changes, whereas we try to keep the builder constant and have only our Go tree changing. For comparison, as I understand it we do not have any FreeBSD builder tracking the dev branch, just numbered releases.
I will work on getting you precise directions for GCP.
This bug is going to auto-close in a little while but we still won't have moved off the reverse builder yet. I'll post here when we have.
The only problem with bleeding-edge is that it means we have to keep rebuilding the image at regular intervals, which we could do, but it's a bit of a pain. It also means that results change when the builder changes, whereas we try to keep the builder constant and have only our Go tree changing. For comparison, as I understand it we do not have any FreeBSD builder tracking the dev branch, just numbered releases.
A good compromise perhaps is to rebuild bleeding-edge only when we bump the __DragonFly_version macro (https://github.com/DragonFlyBSD/DragonFlyBSD/blob/master/sys/sys/param.h#L244), which we only do when there are significant changes; you can see the version history in that header file.
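Detecting that bump could be as simple as comparing the macro's current value (illustrative):

awk '/^#define[[:space:]]+__DragonFly_version/ {print $3}' /usr/src/sys/sys/param.h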
I will work on getting you precise directions for GCP.
Okay thanks.
This bug is going to auto-close in a little while but we still won't have moved off the reverse builder yet. I'll post here when we have.
Sure, let me know.
Looks like Dragonfly now supports virtio:
https://leaf.dragonflybsd.org/cgi/web-man?command=virtio&section=4
So it should run on GCE?
If somebody could prepare make.bash scripts that script the install and produce bootable images, we could run it on GCE.
See the netbsd, openbsd, and freebsd directories as examples: https://github.com/golang/build/tree/master/env
(The script must run on Linux and use qemu to do the image creation.)
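A sketch of the shape such a script might take (names and sizes illustrative, and the installer would have to be driven over the serial console with expect or similar):

qemu-img create -f qcow2 disk.qcow2 16G
qemu-system-x86_64 -m 1024 -nographic \
    -drive file=disk.qcow2,if=virtio \
    -cdrom dfly-installer.iso -boot d
# after the scripted install completes, convert and pack for GCE:
qemu-img convert -O raw disk.qcow2 disk.raw
tar -Szcf dragonfly-amd64-gce.tar.gz disk.raw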
/cc @tdfbsd