moul opened this issue 9 years ago
This doesn't really make sense. Maybe the network infrastructure has some restrictions, but the servers themselves? The hardware receives and sends Ethernet frames, a layer below IP.
Can you describe those limitations? If the C1 hardware actually cannot do native IPv6, that makes it somewhat useless. Come on, it's 2015 and the IPv4 depletion is actually a real problem now. Launching a product these days without native IPv6 support (either planned or in production) seems very short-sighted.
The hardware limitation is not on the C1 node itself but on the blade containing the C1 servers. Look at https://www.scaleway.com/features/: the first photo shows a blade with 18 C1 nodes. The blade handles the power, the serial console, and also the network between the nodes and the big backbones.
Today we have a limitation on the blade and are working to get it fixed. As soon as the blade and the rest of our network can forward, filter, and shape IPv6 traffic, you will be able to manually configure an IPv6 address on your node. After that, we will still need to support DHCP or a script to configure your nodes automatically, if possible.
Please stay tuned to this thread; we will update it progressively so you can try things out and give us feedback.
Thanks for the information, it makes more sense now. :)
I assume the deployment will be similar to online.net's servers where you can request a prefix delegation via DHCPv6-PD then. That is a reasonable solution. Can you share more details about the upcoming IPv6 deployment? E.g. what kind (length) of prefixes can we get delegated?
If you have some kind of beta of the IPv6 support, I'd be willing to participate in it as well. I have solid IPv6 knowledge, so I'd be able to hopefully give useful feedback.
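For reference, on Online.net the delegated prefix is requested over DHCPv6-PD with a standard client. If Scaleway's deployment ends up working the same way (an assumption, nothing is announced yet), it would look roughly like this with ISC dhclient; the interface name and the example prefix are assumptions:

```shell
# Hypothetical sketch, assuming a DHCPv6-PD deployment like Online.net's.
# Request a delegated prefix (-P); add -N to also request a plain address.
dhclient -6 -P -v eth0

# Once a prefix (say 2001:db8:1234::/56, made up here) has been delegated,
# carve a /64 out of it for additional addresses or an internal bridge:
ip -6 addr add 2001:db8:1234::1/64 dev eth0
```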
Update: I managed to get IPv6 working between two nodes:
root@distracted-wright:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 532
link/ether 00:07:cb:03:0a:dc brd ff:ff:ff:ff:ff:ff
inet 10.1.45.56/23 brd 10.1.45.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2::6/64 scope global
valid_lft forever preferred_lft forever
root@distracted-wright:~# ip -6 neigh show dev eth0
2::4 lladdr 00:07:cb:03:0a:e8 REACHABLE
root@distracted-wright:~# ping6 -I eth0 -c 2 2::4
PING 2::4(2::4) from 2::6 eth0: 56 data bytes
64 bytes from 2::4: icmp_seq=1 ttl=64 time=1.19 ms
64 bytes from 2::4: icmp_seq=2 ttl=64 time=1.20 ms
--- 2::4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.194/1.197/1.200/0.003 ms
root@distracted-wright:~#
However, I needed to disable some filtering options on the blade; the hardware team is working on a patch.
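For anyone wanting to reproduce this once the blade-side filtering is fixed, the setup above amounts to a manual static configuration. A rough sketch (the 2::/64 addresses are taken from the output above; everything else is assumed):

```shell
# Run as root. Mirrors the 2::/64 test prefix from the session above.
# On the first node:
ip -6 addr add 2::6/64 dev eth0
# On the second node, use the peer address instead:
#   ip -6 addr add 2::4/64 dev eth0

# Then verify neighbour discovery and reachability from the first node:
ip -6 neigh show dev eth0
ping6 -I eth0 -c 2 2::4
```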
Looks good. Like I stated in the community thread, even simple local IPv6 connectivity would be quite helpful for me already.
I would also be interested in testing the IPv6 beta.
The C1 supports IPv6-in-IPv4 tunneling, but the kernel configuration limits what ip6tables can do. The kernel configuration must be updated with: CONFIG_NF_CONNTRACK_IPV6=m
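A quick way to check whether a given kernel has this option (the /boot path is a Debian-style assumption; some kernels expose /proc/config.gz instead):

```shell
# Look for IPv6 connection-tracking support in the running kernel's config.
# Without it, stateful ip6tables rules (-m state / -m conntrack) cannot work.
cfg="/boot/config-$(uname -r)"
if grep -qE '^CONFIG_NF_CONNTRACK_IPV6=(y|m)' "$cfg" 2>/dev/null; then
    echo "IPv6 conntrack available"
else
    echo "IPv6 conntrack missing, or config file not found at $cfg"
fi
```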
@moul, is there any progress with IPv6 on the C1 hardware?
@grigorig we are waiting for the hardware team to release a new software version with IPv6 filtering support. In the meantime, we are preparing our infrastructure, so you should get full IPv6 support (not only local connectivity).
I will update the ticket as soon as we have this feature in beta :)
Why don't you run some 6in4 gateways/tunnel servers on hardware connected to the Scaleway network in the meantime, for people who want IPv6 NOW?
I mean, come on, you are a major datacentre; I'm sure finding a few servers for that wouldn't be a problem ;-)
If they're on your network, they could be reachable from the private IPv4 ranges as well, saving public IPs.
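The node side of a 6in4 tunnel is only a handful of commands. A minimal sketch, assuming Scaleway ran a tunnel endpoint at 10.0.0.1 (a made-up private address) and routed 2001:db8::/64 (documentation prefix) to the customer:

```shell
# Hypothetical 6in4 (IP protocol 41) tunnel; all addresses are assumptions.
ip tunnel add scw6 mode sit remote 10.0.0.1 local 10.1.45.56 ttl 64
ip link set scw6 up
ip -6 addr add 2001:db8::2/64 dev scw6
ip -6 route add ::/0 dev scw6
```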
@moul Hi. Is there any ETA yet for native IPv6 support on C1? Could we try the alpha or the beta?
What they need to do is 4in6, not 6in4, if they have a native IPv6 network (which they should), and that requires something called 4rd (RFC 7600).
Native IPv6 support is now available for C2 servers and VPS offers:
https://blog.scaleway.com/2016/03/31/introducing-native-ipv6-connectivity-on-scaleway/
Unfortunately, the way you implemented IPv6 on C2/VPS is the best example of how not to do it that I have seen yet. And the advertisement is off-topic here, too.
Scaleway really dropped the ball on IPv6 support on C1 and it's sad to see. I'll probably move my services elsewhere soon.
@grigorig Hi. Could you elaborate, so we can better understand why you think IPv6 is badly implemented on the C2 servers?
@HLFH sure.
Each server gets just a single IPv6 address in a /127 subnet. The recommendation is to allocate at least a /64 per node, so this is really bad and precludes all of the use cases that the bigger IPv6 address space makes possible. Furthermore, it looks like a /64 prefix is shared among multiple customers. Since a /64 is considered the smallest prefix given to an end node, IP-based blocking in IPv6 land is usually done with a granularity of at least /64. So if one Scaleway customer does something bad and gets blocked somewhere, many customers are affected.
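The collateral-damage point is easy to see on paper: two hypothetical customer addresses (made up for illustration) inside the same shared /64 have an identical 64-bit network prefix, so any /64-granularity blocklist entry hits both:

```shell
# Two hypothetical /127-assigned customers inside one shared /64
# (2001:db8::/32 is the IPv6 documentation prefix, not a real allocation).
a="2001:db8:4700:2300::1:104"
b="2001:db8:4700:2300::2:208"

# For these addresses the /64 prefix is simply the first four 16-bit groups.
prefix_a=$(echo "$a" | cut -d: -f1-4)
prefix_b=$(echo "$b" | cut -d: -f1-4)

if [ "$prefix_a" = "$prefix_b" ]; then
    echo "same /64: blocking ${prefix_a}::/64 affects both customers"
fi
```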
I agree with @grigorig
@moul Hi. Is IPv6 reverse DNS delegation supported? It works on Online.net; does it work on Scaleway too?
@grigorig, @meyskens,
You missed that this is only the first step of our IPv6 deployment. We're announcing IPv6 connectivity and, as stated, not a fully flexible implementation. To route a /64 to a node, you first need IPv6 connectivity. ;)
These IPv6 addresses are assigned to nodes, not to customers; there is no reverse DNS delegation on the nodes' IPs.
Stay tuned.
OK then. But this is barely usable; there isn't much point in releasing IPv6 support in such an unfinished state while still claiming to be "fully IPv6 ready". It's a lie. The IPv6 support available today on Scaleway is as basic as it can possibly get. It certainly would have helped if the blog post had clarified that this is only the very first step.
Also, can you give any statement about IPv6 support on C1, to get back on topic?
Hey, this issue is marked as "help wanted". What help do you need to get this working on C1?
I am very interested in using your C1 servers, but without IPv6 connectivity with at least /64 allocations, it's not usable to me. Has there been any update @moul?
I also have a C1, and was wondering in what state it is now
FWIW, I deleted my scaleway account yesterday, as this doesn't seem to be happening.
It's a pity; this issue should either be closed if it is not feasible, or updated if there is a way to do it.
@moul Still no updates to provide for the C1?
I'm migrating to VC1, which has IPv6 at the same price.
@moul Still no updates for the C1?
Please make it happen. I mean, it's almost 2017. Is any timeline available?
I think we can close this issue with a won't-fix tag. Because it will never happen on C1.
@QuentinPerez
Any updates on this?
I think C1s are obsolete and not available anymore (except for the already existing instances, of course). As far as I understood, they were replaced with the newer ARMv8 offering - which has better performance and does support IPv6 (poorly implemented, with /127 subnets - but that's another story).
@drdaeman the C1 seems to still be available; also, the ARM64 offering is virtualized, so not bare-metal (yet, I hope)
@meyskens Ah, you're right. I was looking at AMS region and thought C1s were gone upon not seeing them listed anywhere. My bad, thanks for pointing this out.
@drdaeman I can see them being replaced by the ARM64 offering, but definitely not this year. aarch64 is even less well supported than armhf (which has really taken off over the past two years). See Docker, for example: they now support armhf but not aarch64.
We are working on workarounds to get this fixed on C1
@moul has this been abandoned?
I know it's not on topic but I would start using a C2 or Virtual instance right now if it had proper IPv6 support (i.e. an IPv6 address that does not change after reboot).
I keep trying and failing to deploy a C1 because you're out of IPv4 addresses, apparently… It's a bit of a pain having to SSH through an existing one.
So, is it possible to have proper IPv6 support on VC1S? It seems that my v6 gateway is reachable, but not my server.
root@me:/home# ip -6 neigh show dev eth0
2001:bc8:4700:2300::1:104 FAILED
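For what it's worth, a FAILED neighbour entry means neighbour discovery toward the gateway got no answer. A few generic things to check (the gateway address is taken from the output above; everything else is a standard troubleshooting sketch, not Scaleway-specific advice):

```shell
# Is there an on-link route covering the gateway?
ip -6 route show dev eth0
# Is IPv6 enabled on the interface at all?
sysctl net.ipv6.conf.eth0.disable_ipv6
# Flush the stale entry and retrigger neighbour discovery:
ip -6 neigh flush dev eth0
ping6 -c 3 2001:bc8:4700:2300::1:104
```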
It was always possible to have IPv6 support on the VC1S (the VPS). This issue is about the C1 (the ARM dedicated server). To solve the issue with your server, create a support ticket instead of spamming unrelated GitHub issues.
Still no news about IPv6 for C1?
I would also like that on my C1 instance. There has been no update from @moul in the last 3 years, as if they dropped the ball on this. At the very least, give us an update, so we can know what to expect and react accordingly.
Actually, the C1 servers lack native IPv6 support due to hardware limitations. We are working on workarounds to get this fixed on the C1, and we should support it natively in newer generations of the hardware.
This issue will stay open; if anybody has suggestions, use cases, or feedback, share them here so we can deliver this feature more quickly.