networkservicemesh / deployments-k8s


Feature: VCL #10023


barby1138 commented 11 months ago

Hi guys, I decided to open VCL as a new feature request so as not to mix things up. I've pasted all the needed info from the previous issue below.


Hi Ed,

I took a few days of holiday.

So, VCL:

I will try to describe how I see the VCL feature being represented in NSM. I will describe the simplest setup, but with an eye toward scalability.

  1. I think it should be one more connection mechanism, like kernel and memif, supported by forwarder-vpp.
  2. To have it enabled in forwarder-vpp, the forwarder should be configured accordingly - with sessions enabled, etc.
  3. So on the client we request vcl://service-name. There is no specific interface in the client/NSE for VCL; it is bound to an interface inside forwarder-vpp (a VXLAN tunnel). IPAM assigns IP addresses to the corresponding interfaces used for tunneling inside forwarder-vpp, and I propose to inject this info into the client/NSE just like we inject the tap in the kernel case. It could be a config file in a mounted directory.

Example: I'll add a reference bash snippet showing how I configure VCL manually for now.

# Locate the forwarder-vpp pod in each cluster
VPP_POD1=$(kubectl --kubeconfig=$KUBECONFIG1 get pod -l app=forwarder-vpp -n nsm-system -o jsonpath="{.items[0].metadata.name}")
echo $VPP_POD1

VPP_POD2=$(kubectl --kubeconfig=$KUBECONFIG2 get pod -l app=forwarder-vpp -n nsm-system -o jsonpath="{.items[0].metadata.name}")
echo $VPP_POD2

# VXLAN tunnel interface inside each forwarder, and the VCL address to assign to it
PCIDEV1=vxlan_tunnel1
VCL_IP_ADDR1=172.17.1.8/16

PCIDEV2=vxlan_tunnel1
VCL_IP_ADDR2=172.17.1.9/16

# Assign the addresses to the tunnel interfaces and verify
kubectl --kubeconfig=$KUBECONFIG1 exec -it $VPP_POD1 -n nsm-system -- vppctl set int ip addr $PCIDEV1 $VCL_IP_ADDR1
kubectl --kubeconfig=$KUBECONFIG1 exec -it $VPP_POD1 -n nsm-system -- vppctl sh int addr

kubectl --kubeconfig=$KUBECONFIG2 exec -it $VPP_POD2 -n nsm-system -- vppctl set int ip addr $PCIDEV2 $VCL_IP_ADDR2
kubectl --kubeconfig=$KUBECONFIG2 exec -it $VPP_POD2 -n nsm-system -- vppctl sh int addr

CLUSTER2_IP=<my.cluster2.IP>
SERVICE_NAME="vcl_data"

# Inject the NSC-side address info as a per-pod config file under /etc/vpp
APP_POD1=$(kubectl --kubeconfig=$KUBECONFIG1 get pod -l app=app-1 -n ns-floating-kernel2ethernet2kernel -o jsonpath="{.items[0].metadata.name}")
CONF_NAME1=$APP_POD1$SERVICE_NAME
echo $VCL_IP_ADDR1 > $CONF_NAME1
cp $CONF_NAME1 /etc/vpp
rm -f $CONF_NAME1

# Same for the NSE side, which lives in cluster 2
NSE_POD1=$(kubectl --kubeconfig=$KUBECONFIG2 get pod -l app=nse-kernel-1 -n ns-floating-kernel2ethernet2kernel -o jsonpath="{.items[0].metadata.name}")
CONF_NAME2=$NSE_POD1$SERVICE_NAME
echo $VCL_IP_ADDR2 > $CONF_NAME2
scp $CONF_NAME2 root@$CLUSTER2_IP:/etc/vpp
rm -f $CONF_NAME2

Summary: the main idea is that a VCL connection is built just like a kernel one, but there are no taps. Instead, the forwarder-vpp interfaces are configured, and this info is injected into the client/NSE.

Maybe you have better ideas for how to enable VCL with even less effort.

Have a nice day!!!


@barby1138 Bear with me while I try to swap my VCL knowledge back into my brain :) If memory serves, VCL is 'set up' by a user by sending messages over a unix socket, correct?

In which case it would work very much like memif. I like your idea of a vcl mechanism type. So maybe something like:

vcl://${service-name}/${optional requested filename of unix file socket}
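
If I'm reading the existing examples right, mechanisms are requested through the NSC's NSM_NETWORK_SERVICES environment variable, so the proposed scheme would slot in alongside the current ones (the vcl:// line below is hypothetical, per this proposal):

# Existing mechanism URLs used by cmd-nsc, for comparison
NSM_NETWORK_SERVICES=kernel://my-service/nsm-1
NSM_NETWORK_SERVICES=memif://my-service
# Hypothetical VCL mechanism as proposed here (not implemented)
NSM_NETWORK_SERVICES=vcl://my-service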

Thoughts?


Hi Ed, glad to hear from you :) No, ${optional requested filename of unix file socket} is not needed. The VCL client (the app that attaches to VPP) needs a VCL configuration with socket, queues, secrets, etc., but that is client logic, not related to NSM, per my vision. Also, the control socket is shared with the VCL client via a folder mounted from the forwarder. So the only things needed from the VPP forwarder are to share the control socket and to enable sessions in startup.conf.

This is described well here: https://www.envoyproxy.io/docs/envoy/latest/configuration/other_features/vcl (refer to "Installing and running VPP/VCL").
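
For concreteness, a minimal sketch of those two pieces, following the doc linked above (paths and fifo sizes are illustrative, not necessarily what NSM would ship):

# forwarder-vpp startup.conf: enable the session layer and its app socket API
session { enable use-app-socket-api }

# client-side vcl.conf: point VCL at the control socket shared from the forwarder
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-socket-api /var/run/vpp/app_ns_sockets/default
}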

Yes, technically it's like memif, but with VCL we work with sockets, not packets, and there is no need to bring VPP to the clients, just some libs - but that is the client's responsibility. For the test nsc/nse we'll need it - I can help with that. I already have a working, manually configured setup.

glazychev-art commented 8 months ago

@edwarnicke I looked at the VCL feature and tried to run it for NSM. There are a few questions:

1. How do we plan to share the socket with the client? I think there are 2 ways:

  • Use a shared volume between the Forwarder and NSC
  • Transfer the socket using grpcfd (I think this option is preferable)

2. The main question is the assignment of IP addresses/routes on the forwarder side. Currently we use the forwarder in this way:

[diagram: VCL_problem-Page-1, where X is a cross-connect] We don't assign addresses to forwarder interfaces. Therefore, the forwarder doesn't care what addresses/routes will be on the NSC.

When we add VCL: [diagram: VCL_problem-Page-2] Now the addresses matter. What if they overlap for different clients? Not using a cross-connect requires additional logic to manage addresses and routes.
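
To illustrate the difference in vppctl terms (interface names are hypothetical):

# Today: the forwarder simply cross-connects two interfaces; neither owns an address
vppctl set interface l2 xconnect memif1/0 vxlan_tunnel0
# With VCL: the forwarder interface itself must own an address,
# so overlapping client subnets would collide inside a single VPP instance
vppctl set interface ip address vxlan_tunnel0 172.17.1.8/16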

edwarnicke commented 7 months ago

1. How do we plan to share the socket with the client? I think there are 2 ways:

  • Use a shared volume between the Forwarder and NSC
  • Transfer the socket using grpcfd (I think this option is preferable)

I would tend to support your conclusion of using grpcfd.

edwarnicke commented 7 months ago

2. The main question is the assignment of IP addresses/routes on the forwarder side. Currently we use the forwarder in this way:

[diagram: VCL_problem-Page-1, where X is a cross-connect] We don't assign addresses to forwarder interfaces. Therefore, the forwarder doesn't care what addresses/routes will be on the NSC.

When we add VCL: [diagram: VCL_problem-Page-2] Now the addresses matter. What if they overlap for different clients? Not using a cross-connect requires additional logic to manage addresses and routes.

This is an excellent question, and gets to the philosophical heart of why we do routes at all in the L3/L2 interface-in-the-pod case.

When we are dropping an interface into the Pod, we provide routes to help the Pod's network namespace understand which traffic should be sent over the vWire (aka the kernel interface). It's a traffic selection mechanism.
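
As a sketch of what that selection looks like in the kernel case (interface name and prefix are hypothetical):

# A route in the pod's network namespace steers matching traffic onto the
# NSM-provided interface (the vWire)
ip route add 172.16.1.0/24 dev nsm-1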

In the case of VCL, we don't need to select traffic. If the VCL handle is used to open a session, then clearly we should be handling that session. So we don't need the SrcRoutes.

Does that make sense to you?

glazychev-art commented 7 months ago

@edwarnicke Not really. I agree that in the case of VCL we don't need traffic selection on the NSC side. But what will we do on the forwarder side, to which the NSC is connected via VCL? Thanks!

edwarnicke commented 7 months ago

@glazychev-art On the forwarder side, the question is: can we simply connect the session, as one end of a 'cross connect', to the outgoing traffic?

glazychev-art commented 7 months ago

@edwarnicke Yes. And as far as I understand, we can't. We must assign an IP address and a route to this VPP interface (one end of the 'cross connect', a vxlan tunnel for example). Am I thinking correctly?
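
In vppctl terms, that would mean something like the following (interface and prefixes are hypothetical):

# Give the cross-connect endpoint an address of its own...
vppctl set interface ip address vxlan_tunnel0 172.17.1.8/16
# ...and route the peer's prefix out through it
vppctl ip route add 10.10.0.0/24 via vxlan_tunnel0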