sharon-vendrov opened 4 years ago
I was looking for this information, and the closest TL;DR I found was the "Simpler Solutions" section of the cni-ipvlan-vpc-k8s announcement blog post:

> Lincoln Stoll’s k8s-vpcnet, and more recently, Amazon’s amazon-vpc-cni-k8s CNI stacks use Elastic Network Interfaces (ENIs) and secondary private IPs to achieve an overlay-free, AWS VPC-native solution for Kubernetes networking. While both of these solutions achieve the same base goal of drastically simplifying the network complexity of deploying Kubernetes at scale on AWS, they do not focus on minimizing network latency and kernel overhead as part of implementing a compliant networking stack.
Feature-wise, searching for ipvlan in the amazon-vpc-cni-k8s repo turns up some differences:
- https://github.com/aws/amazon-vpc-cni-k8s/issues/353: this plugin is chainable; AWS's isn't.
- https://github.com/aws/amazon-vpc-cni-k8s/issues/790: AWS's CNI doesn't work with IPVS; this one should.
- https://github.com/aws/amazon-vpc-cni-k8s/issues/53: this CNI supports using different subnets for different ENIs; AWS's doesn't (or didn't, back in 2018).
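
For context on the chainability point: a chainable CNI plugin can be listed as one element of a `.conflist`, with other plugins (e.g. the standard `portmap` plugin) running after it in sequence. A minimal sketch of what that looks like; the `type` value below is a hypothetical placeholder, not the actual binary name from either project:

```json
{
  "cniVersion": "0.3.1",
  "name": "vpc-chain",
  "plugins": [
    {
      "comment": "hypothetical name standing in for the cni-ipvlan-vpc-k8s plugin",
      "type": "cni-ipvlan-vpc-k8s-main"
    },
    {
      "comment": "a standard chained plugin from the CNI plugins project",
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

A non-chainable CNI, by contrast, must own the whole network setup itself, so features like host-port mapping can't be composed in this way.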
How do these two plugins compare in terms of network performance and latency? I wonder whether any practical comparisons or benchmarks exist.
We should add a comparison between amazon-vpc-cni-k8s and the Lyft CNI, so that users become familiar with the Lyft plugin's advantages over the AWS CNI.