mtrofimm closed this issue 8 years ago
I'm testing with --vultr-os-id=160 (Ubuntu 14.04 x64) and --vultr-private-networking=true.
one more test:
docker-machine create --driver vultr --vultr-api-key=myAPIKey --vultr-region-id=7 --vultr-plan-id=29 --vultr-ipv6=1 --vultr-private-networking=1 --engine-env DEV=1 testowa5
Running pre-create checks...
(testowa5) Validating Vultr VPS parameters...
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
Creating machine...
(testowa5) getting client
(testowa5) Creating Vultr VPS...
(testowa5) Using PXE boot
(testowa5) Provisioning RancherOS (stable)
(testowa5) getting client
(testowa5) getting client
(testowa5) Waiting for IP address to become available...
(testowa5) Created Vultr VPS ID: 3323436, Public IP: 185.92.221.191, Private IP: 10.99.0.12
Waiting for machine to be running, this may take a few minutes...
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
(testowa5) getting client
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with rancheros...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
(testowa5) getting client
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env testowa5
[root@vultr ~]# docker-machine ssh testowa5
(testowa5) getting client
[rancher@testowa5 ~]$ ifconfig eth1
eth1 Link encap:Ethernet HWaddr 5A:00:00:1E:4E:C6
inet6 addr: fe80::5800:ff:fe1e:4ec6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:396 (396.0 B) TX bytes:1439 (1.4 KiB)
[rancher@testowa5 ~]$ hostname
testowa5
Auto-configuration of the private network interface eth1 is already implemented, but only for machines created with the default RancherOS. For Ubuntu machines you can pass a cloud-init user script with the --vultr-userdata parameter and have that script configure the private interface itself.
Something similar to this (not tested):
#!/bin/sh
# Append a static config for the private NIC (Vultr private networks
# use a /16 and an MTU of 1450). Appending (>>) rather than overwriting
# keeps the existing lo/eth0 entries intact.
cat >> /etc/network/interfaces <<EOF
auto eth1
iface eth1 inet static
    address 10.99.0.200
    netmask 255.255.0.0
    mtu 1450
EOF
# Bring the interface up with the new configuration
ifup eth1
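Since the address has to differ per machine, the sketch above can be parameterized. A minimal version (the function name and output path are mine for illustration, not part of the driver; the netmask and MTU mirror the script above):

```shell
# gen_eth1_config: print an interfaces(5) stanza for the private NIC.
# Takes the machine's private IP as its only argument.
gen_eth1_config() {
  cat <<EOF
auto eth1
iface eth1 inet static
    address $1
    netmask 255.255.0.0
    mtu 1450
EOF
}

# Example: write the stanza somewhere inspectable. On a real Ubuntu VPS
# you would append it to /etc/network/interfaces and then run `ifup eth1`.
gen_eth1_config 10.99.0.200 > /tmp/eth1.conf
```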
Cheers :smile:
There is still some issue with the auto-configuration on RancherOS. See my second test with the default OS.
Cheers.
This has been fixed in https://github.com/janeczku/docker-machine-vultr/releases/tag/v1.0.3. Auto-configuration of the private network interface eth1 is fully working again with RancherOS.
Would it be possible to add auto-configuration of the private interface on the VPS when --vultr-private-networking is enabled? I'm thinking about establishing a Docker Swarm using the Vultr private network only.
Thanks in advance.
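For reference, once eth1 is configured, a swarm restricted to the private network could look roughly like this. This is an untested sketch; the IPs are the private addresses from the log above, and <worker-token> stands for whatever `docker swarm init` prints:

```shell
# On the manager node, advertise and listen only on the private address:
docker swarm init --advertise-addr 10.99.0.12 --listen-addr 10.99.0.12:2377

# On each worker, join via the manager's private IP
# (2377 is the swarm management port):
docker swarm join --token <worker-token> 10.99.0.12:2377
```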