srl-labs / containerlab

container-based networking labs
https://containerlab.dev
BSD 3-Clause "New" or "Revised" License

Junos gRPC: "context deadline exceeded" #2101

Closed · HeikkiLavaste closed this 1 day ago

HeikkiLavaste commented 2 weeks ago

Hi,

Is there some special knob that needs pushing to get gRPC working? The same vJunos image with the same config works fine in EVE-NG. In clab, I get:

target "172.20.20.3:50051", capabilities request failed: failed to create a gRPC client for target "172.20.20.3:50051" : 172.20.20.3:50051: context deadline exceeded

Tcpdump shows a reply, but with no real payload.

Any suggestions where to look and what to try?

Thanks

hellt commented 1 week ago

Is it vJunos switch, Evo, or router?

HeikkiLavaste commented 1 week ago

Hi, I'm using vjunos switch.

hellt commented 1 week ago

The reason it doesn't work is that we don't forward port 50051 to the management interface. Is 50051 the default gNMI port on vJunos switch? If yes, I can add this forwarding for you to try.

hellt commented 1 week ago

If you want to try adding it yourself, here is how it is done for veos, for example:

https://github.com/hellt/vrnetlab/blob/1e1ec73a95c761c1c633ec2e3f546b5921a03cb5/veos/docker/launch.py#L158
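For readers without the link handy: the pattern generally boils down to appending one more hostfwd entry to the management "-netdev user,..." string. A minimal sketch, assuming the parent class's gen_mgmt() returns the qemu arguments with that netdev string as the last element (the port numbers here are illustrative, not vJunos defaults):

    def gen_mgmt(self):
        # reuse the parent's mgmt NIC definition instead of rebuilding it
        res = super().gen_mgmt()
        # append one more forwarding rule to the user-mode netdev string:
        # host TCP port 50051 -> VM management address, port 50051
        res[-1] = res[-1] + ",hostfwd=tcp::50051-10.0.0.15:50051"
        return res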

HeikkiLavaste commented 1 week ago

Looks like port 32767 is the default for Juniper. I'll let you add it, and then I'll know how to do it next time.

HeikkiLavaste commented 4 days ago

So something like:

    def gen_mgmt(self):
        res = []
        res.append("-device")
        res.append(
            self.nic_type + f",netdev=p00,mac={vrnetlab.gen_mac(0)},bus=pci.1,addr=0x2"
        )
        res.append("-netdev")
        res.append(
            "user,id=p00,net=10.0.0.0/24,hostfwd=tcp::32767-10.0.0.15:32767"
        )

added to the vjunosswitch launch.py should do the trick?

hellt commented 3 days ago

yeah, looks like it

HeikkiLavaste commented 2 days ago

I finally got there in the end. I had a look at the vmx launch.py, which pointed me in the right direction. I'm not sure what the difference is in the end, but adding the socat rule got it working.

    def gen_mgmt(self):
        res = []
        # management NIC, pinned to bus pci.1, slot 0x2
        res.append("-device")
        res.append(
            self.nic_type + f",netdev=p00,mac={vrnetlab.gen_mac(0)},bus=pci.1,addr=0x2"
        )
        # qemu user-mode NAT with port forwards: SSH, gNMI (52767 -> 32767),
        # NETCONF, SNMP, HTTP and HTTPS
        res.append("-netdev")
        res.append(
            "user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=tcp::52767-10.0.0.15:32767,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443"
        )
        # relay container port 32767 to qemu's hostfwd listener on 52767
        vrnetlab.run_command(
            ["socat", "TCP-LISTEN:32767,fork", "TCP:127.0.0.1:52767"],
            background=True,
        )

        return res

gnmic -a 172.20.20.3:32767 -u admin -p admin@123 --insecure capabilities
gNMI version: 0.7.0
supported models:
  - ietf-yang-metadata, IETF NETMOD (NETCONF Data Modeling Language) Working Group, 2016-08-05

hellt commented 2 days ago

Brilliant! Maybe you fancy opening a PR so others can benefit from it? Also, if you know which commands enable gNMI on vJunos, we can add them to the docs.
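For the record, gNMI on Junos rides on the gRPC extension service; the commonly documented knob looks like the line below, though the exact syntax and default port may vary by release, so treat it as a starting point rather than verified vJunos-switch config:

    set system services extension-service request-response grpc clear-text port 32767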

HeikkiLavaste commented 2 days ago

Looks like I broke something else. Initially, for testing, I did not have any "front ports" connected, only fxp0. When I connected two switches together, the VM wouldn't boot.

2024-07-05 12:47:37,923: vrnetlab   DEBUG    Starting vrnetlab VJUNOSSWITCH
2024-07-05 12:47:37,924: vrnetlab   DEBUG    VMs: [<__main__.VJUNOSSWITCH_vm object at 0x7fb7afb3ce10>]
2024-07-05 12:47:37,926: vrnetlab   DEBUG    VM not started; starting!
2024-07-05 12:47:37,926: vrnetlab   INFO     Starting VJUNOSSWITCH_vm
2024-07-05 12:47:37,927: vrnetlab   DEBUG    number of provisioned data plane interfaces is 2
2024-07-05 12:47:37,927: vrnetlab   DEBUG    waiting for provisioned interfaces to appear...
2024-07-05 12:47:42,927: vrnetlab   DEBUG    highest allocated interface id determined to be: 2...
2024-07-05 12:47:42,928: vrnetlab   DEBUG    interfaces provisioned, continuing...
2024-07-05 12:47:42,928: vrnetlab   DEBUG    qemu cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc -monitor tcp:0.0.0.0:4000,server,nowait -m 5120 -serial telnet:0.0.0.0:5000,server,nowait -drive if=ide,file=/vjunos-switch-23.1R1.8-overlay.qcow2 -smp 4,sockets=1,cores=4,threads=1 -cpu IvyBridge,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,arch-capabilities=on,pdpe1gb=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,bmi1=off,avx2=off,bmi2=off,erms=off,invpcid=off,rdseed=off,adx=off,smap=off,xsaveopt=off,abm=off,svm=on -drive if=none,id=config_disk,file=/config.img,format=raw -device virtio-blk-pci,drive=config_disk -overcommit mem-lock=off -display none -no-user-config -nodefaults -boot strict=on -machine pc,usb=off,dump-guest-core=off,accel=kvm -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -smbios "type=1,product=VM-VEX" -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=0C:00:3e:ff:12:00,bus=pci.1,addr=0x2 -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::2022-10.0.0.15:22,hostfwd=tcp::52767-10.0.0.15:32767,hostfwd=tcp::2830-10.0.0.15:830,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2080-10.0.0.15:80,hostfwd=tcp::2443-10.0.0.15:443 -device virtio-net-pci,netdev=p01,mac=0C:00:27:d0:bd:01,bus=pci.1,addr=0x2 -netdev tap,id=p01,ifname=tap1,script=/etc/tc-tap-ifup,downscript=no -device virtio-net-pci,netdev=p02,mac=0C:00:8c:da:f9:02,bus=pci.1,addr=0x3 -netdev tap,id=p02,ifname=tap2,script=/etc/tc-tap-ifup,downscript=no
2024-07-05 12:47:43,449: vrnetlab   INFO     STDOUT:
2024-07-05 12:47:43,449: vrnetlab   INFO     STDERR: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: -device virtio-net-pci,netdev=p01,mac=0C:00:27:d0:bd:01,bus=pci.1,addr=0x2: PCI: slot 2 function 0 not available for virtio-net-pci, in use by virtio-net-pci,id=(null)

2024-07-05 12:47:43,452: vrnetlab   INFO     Unable to connect to qemu monitor (port 4000), retrying in a second (attempt 1)
hellt commented 2 days ago

try this instead. Rebuilding the whole mgmt NIC by hand pins it to bus=pci.1,addr=0x2, which collides with the first data-plane NIC (the "PCI: slot 2 function 0 not available" error above); reusing the parent's gen_mgmt() and only appending the extra forwarding avoids that:

    def gen_mgmt(self):
        """Generate mgmt interface

        Add additional port forwarding.
        """
        # call parent function to generate first mgmt interface
        res = super().gen_mgmt()

        # append gNMI management port forwarding
        res[-1] = res[-1] + ",hostfwd=tcp::52767-10.0.0.15:32767"
        vrnetlab.run_command(
            ["socat", "TCP-LISTEN:32767,fork", "TCP:127.0.0.1:52767"],
            background=True,
        )

        return res

HeikkiLavaste commented 2 days ago

much better.

hellt commented 1 day ago

I have added port 32767 forwarding to vrnetlab by default; if you rebuild the image, it should work as expected now: https://github.com/hellt/vrnetlab/commit/23d4986f8099298e692da73df60d379ee83b1b4f
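With that in place, a freshly rebuilt vjunosswitch image should expose gNMI on port 32767 without any local launch.py edits, so the same check as earlier in this thread should work unchanged:

    gnmic -a 172.20.20.3:32767 -u admin -p admin@123 --insecure capabilities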