I see we can build both an NXOS docker image and an N9Kv docker image from vrnetlab,
but containerlab does not support a vr-nxos kind; it only supports kinds such as vr-n9kv...
So I tried using the vr-n9kv kind to run vr-nxos images, but I ran into a big problem: the interfaces are not mapped correctly.
In vr-n9kv, interfaces begin at E1/1, so eth1 in the yml file maps to E1/1, which works fine.
But in vr-nxos, interfaces begin at E2/1, while eth1 in the yml file still maps to E1/1, so I can't get the links established correctly.
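For context, here is a minimal containerlab topology sketch illustrating the mismatch (node names and the image tag are hypothetical, not taken from my actual lab):

```yaml
# Minimal sketch: running a vrnetlab nxos image under the vr-n9kv kind.
# Node names and image tag are placeholders. With vr-n9kv, eth1 is
# expected to map to Ethernet1/1, but on an nxos image whose first data
# port is E2/1 this mapping is off by one slot, so the link never comes up.
name: nxos-lab
topology:
  nodes:
    sw1:
      kind: vr-n9kv                 # used here to run an nxos image
      image: vrnetlab/vr-nxos:latest
    sw2:
      kind: vr-n9kv
      image: vrnetlab/vr-nxos:latest
  links:
    - endpoints: ["sw1:eth1", "sw2:eth1"]   # intended as E1/1 <-> E1/1
```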
Which file can I modify to work around this? Or would you happen to have any good suggestions?
Thanks a lot. Due to resource constraints, I have to use nxos, which is more resource-efficient.
@hellt Sorry to bother you, could you provide me with some help? Thank you very much.
Hi @zeliang3
yes, we do not support nxos images, as they have been EOL for a while.
We did support them in the past; you can switch back to containerlab 0.47.2, which was the last release supporting the nxos kind.
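If it helps, the containerlab install script accepts a version argument, so pinning to that release might look like the following (a sketch based on the documented installer; verify the flag against the install docs for your setup):

```shell
# Install a specific containerlab release (0.47.2, the last with the nxos kind)
bash -c "$(curl -sL https://get.containerlab.dev)" -- -v 0.47.2
containerlab version   # confirm the pinned version is active
```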