oakestra / oakestra-net

Networking component of Oakestra
Apache License 2.0

153 ipv6 proxy #164

Closed smnzlnsk closed 3 months ago

smnzlnsk commented 4 months ago

Implements #153. Builds on #163. Requires https://github.com/oakestra/oakestra/pull/280.

changelog:

smnzlnsk commented 4 months ago

Jesus Christ, 2900 lines changed, and a lot of them are formatting-related. Let me know if you want me to fix it up. It may be worth opening an issue to run black and isort over the codebase, or to implement a linter check just like in the main repo.

giobart commented 3 months ago

While testing this PR, I encountered the following issue in the node's NetManager while deploying an application:

2024/03/07 13:37:30 Received HTTP request - /container/deploy 
2024/03/07 13:37:30 ReqBody received : [123 34 112 105 100 34 58 50 51 49 52 50 54 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 49 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:37:30 Changed port 50100 status toward destination 10.18.1.4:50100
2024/03/07 13:37:30 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:37:30 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc0005cb9b0, 0x14}, 0x0?, {0xc000b935b0, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc000293700)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:37:30 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

ERROR-2024/03/07 13:37:30 deployment.go:67: [ERROR]:  running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"
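
As a side note, the "ReqBody received" line above is the raw request body logged as a Go byte slice; decoding it back to a string makes the failing deploy request readable. A minimal standalone sketch (not NetManager code), using only the bytes from the log line:

package main

import "fmt"

func main() {
	// The byte slice from the "ReqBody received" log line above.
	body := []byte{
		123, 34, 112, 105, 100, 34, 58, 50, 51, 49, 52, 50, 54, 44, 34, 115,
		101, 114, 118, 105, 99, 101, 78, 97, 109, 101, 34, 58, 34, 112, 105,
		112, 101, 46, 112, 105, 112, 101, 46, 80, 114, 101, 46, 100, 101, 112,
		108, 111, 121, 34, 44, 34, 105, 110, 115, 116, 97, 110, 99, 101, 78,
		117, 109, 98, 101, 114, 34, 58, 49, 44, 34, 112, 111, 114, 116, 77,
		97, 112, 112, 105, 110, 103, 115, 34, 58, 34, 53, 48, 49, 48, 48, 58,
		53, 48, 49, 48, 48, 47, 117, 100, 112, 34, 125,
	}
	// Prints: {"pid":231426,"serviceName":"pipe.pipe.Pre.deploy","instanceNumber":1,"portMappings":"50100:50100/udp"}
	fmt.Println(string(body))
}

So it is the "Pre" service with its 50100:50100/udp port mapping that triggers the failing ip6tables DNAT rule above.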

How to replicate

Current setup:

Root Orchestrator: machine1 -> startup command: docker compose -f docker-compose.yml -f override-local-service-manager.yml up --build
Cluster 1 Orchestrator: machine2 -> docker compose -f docker-compose.yml -f override-local-service-manager.yml up --build
Cluster 1 Node 1: machine3
Cluster 2 Orchestrator: machine1 -> docker compose -f docker-compose.yml -f override-local-service-manager.yml up --build
Cluster 2 Node 1: machine1

If I deploy the following application, everything works, and traffic from both curlv4 and curlv6 reaches the nginx server in every circumstance.

{
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "curlv6",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": ["sh", "-c", "curl [fdff:2000::55:55] ; sleep 5"],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/curlimages/curl:7.82.0",
          "state": "",
          "port": "",
          "added_files": [],
          "constraints":[
            {
              "type":"direct",
              "cluster":"cluster1"
            }
          ]
        },
        {
          "microserviceID": "",
          "microservice_name": "curlv4",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": ["sh", "-c", "curl 10.30.55.55 ; sleep 5"],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/curlimages/curl:7.82.0",
          "state": "",
          "port": "",
          "added_files": [],
          "constraints":[
            {
              "type":"direct",
              "cluster":"cluster2"
            }
          ]
        },
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": [],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/library/nginx:latest",
          "state": "",
          "port": "",
          "addresses": {
            "rr_ip": "10.30.55.55",
            "rr_ip_v6": "fdff:2000::55:55"
          },
          "added_files": [],
          "constraints": [
            {
              "type":"direct",
              "cluster":"cluster2"
            }
          ]
        }
      ]
}

But when I deploy the following application:

{
  "microservices": [
    {
      "microserviceID": "",
      "microservice_name": "Pre",
      "microservice_namespace": "deploy",
      "virtualization": "container",
      "cmd": [
        "sh",
        "-c",
        "./pre/main -port 4040 -x 500 -y 500 & /home/SidecarQueue -entry=true -exit=false -p=50100 -next=10.30.20.11:50101 -sidecar=0.0.0.0:4040 -analytics=1"
      ],
      "memory": 50,
      "vcpus": 1,
      "vgpus": 0,
      "vtpus": 0,
      "bandwidth_in": 0,
      "bandwidth_out": 0,
      "storage": 0,
      "code": "ghcr.io/giobart/preprocessing:v0.0.5",
      "state": "",
      "port": "50100:50100/udp",
      "connectivity": [],
      "constraints": [
        {
          "type": "direct",
          "cluster": "cluster2",
          "node": "machine1"
        }
      ],
      "added_files": []
    },
    {
      "microserviceID": "",
      "microservice_name": "Det",
      "microservice_namespace": "deploy",
      "virtualization": "container",
      "cmd": [
        "sh",
        "-c",
        "/home/SidecarQueue -exit=true -p=50101 -sidecar=0.0.0.0:4041 -next=10.30.20.20:50102 -analytics=1 & python3 detection.py --model yolox_nano"
      ],
      "memory": 50,
      "vcpus": 1,
      "vgpus": 0,
      "vtpus": 0,
      "bandwidth_in": 0,
      "bandwidth_out": 0,
      "storage": 0,
      "code": "ghcr.io/giobart/detection:v0.0.5",
      "state": "",
      "port": "",
      "addresses": {
        "rr_ip": "10.30.20.11"
      },
      "connectivity": [],
      "added_files": [],
      "constraints": [
        {
          "type": "direct",
          "cluster": "cluster2",
          "node": "machine1"
        }
      ]
    },
    {
      "microserviceID": "",
      "microservice_name": "Rec",
      "microservice_namespace": "deploy",
      "virtualization": "container",
      "cmd": [
        "sh",
        "-c",
        "./SidecarQueue -exit=true -p=50102 -sidecar=0.0.0.0:4042 -analytics=1 & python3 recognition.py --model buffalo_s"
      ],
      "memory": 50,
      "vcpus": 1,
      "vgpus": 1,
      "vtpus": 0,
      "bandwidth_in": 0,
      "bandwidth_out": 0,
      "storage": 0,
      "code": "ghcr.io/giobart/recognition:v0.0.5",
      "state": "",
      "port": "",
      "addresses": {
        "rr_ip": "10.30.20.20"
      },
      "connectivity": [],
      "added_files": [],
      "constraints": [
        {
          "type": "direct",
          "cluster": "cluster2",
          "node": "machine1"
        }
      ]
    }
  ]
}

The NetManager fails when deploying "Pre". This setup works on v0.4.300.

Full log from the undeployment of curlv6 to the deployment of the second application:

2024/03/07 13:31:11 De-registering from test.test.curlv6.test
2024/03/07 13:31:11 [MQTT TABLE QUERY] sip: fdff::4
2024/03/07 13:31:11 waiting for table query fdff::4
2024/03/07 13:31:11 MQTT - Received mqtt table query message: {"app_name": "test.test.curlv6.test", "instance_list": [{"cluster_id": "65e9abe15de6bac1c56cbb7f", "host_ip": "131.159.24.170", "host_port": 50103, "instance_ip": "10.30.0.8", "instance_ip_v6": "fdff::4", "instance_number": 0, "namespace_ip": "10.18.0.130", "namespace_ip_v6": "fc00::202", "service_ip": [{"Address": "10.30.0.6", "Address_v6": "fdff:2000::3", "IpType": "RR"}, {"IpType": "instance_ip", "Address": "10.30.0.8", "Address_v6": "fdff::4"}]}], "query_key": "fdff::4"}
2024/03/07 13:31:11 MQTT - Subscribed to jobs/test.test.curlv6.test/updates_available 
2024/03/07 13:31:11 self destruction timeout started for job test.test.curlv6.test
2024/03/07 13:31:11 MQTT - Subscribed to jobs/fdff::4/updates_available 
2024/03/07 13:31:11 self destruction timeout started for job fdff::4
2024/03/07 13:31:12 Received job update regarding jobs/test.test.nginx.test/updates_available
2024/03/07 13:31:12 [MQTT TABLE QUERY] sname: test.test.nginx.test
2024/03/07 13:31:12 waiting for table query test.test.nginx.test
2024/03/07 13:31:12 MQTT - Received mqtt table query message: {"app_name": "test.test.nginx.test", "instance_list": [], "query_key": "test.test.nginx.test"}
2024/03/07 13:31:12 Received job update regarding jobs/test.test.curlv4.test/updates_available
2024/03/07 13:31:12 [MQTT TABLE QUERY] sname: test.test.curlv4.test
2024/03/07 13:31:12 waiting for table query test.test.curlv4.test
2024/03/07 13:31:12 Received job update regarding jobs/test.test.curlv6.test/updates_available
2024/03/07 13:31:12 [MQTT TABLE QUERY] sname: test.test.curlv6.test
2024/03/07 13:31:12 waiting for table query test.test.curlv6.test
2024/03/07 13:31:12 MQTT - Received mqtt table query message: {"app_name": "test.test.curlv4.test", "instance_list": [], "query_key": "test.test.curlv4.test"}
2024/03/07 13:31:12 MQTT - Received mqtt table query message: {"app_name": "test.test.curlv6.test", "instance_list": [], "query_key": "test.test.curlv6.test"}
2024/03/07 13:31:13 Received HTTP request - /container/undeploy 
2024/03/07 13:31:13 {test.test.curlv4.test 0}
2024/03/07 13:31:14 Received HTTP request - /container/undeploy 
2024/03/07 13:31:14 {test.test.nginx.test 0}
2024/03/07 13:31:16 De-registering from test.test.nginx.test
2024/03/07 13:31:16 De-registering from test.test.curlv4.test
2024/03/07 13:31:21 De-registering from fdff::4
2024/03/07 13:31:21 De-registering from test.test.curlv6.test
2024/03/07 13:31:36 Received HTTP request - /container/deploy 
2024/03/07 13:31:36 ReqBody received : [123 34 112 105 100 34 58 50 48 51 51 55 55 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 68 101 116 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 34 125]
2024/03/07 13:31:37 [MQTT TABLE QUERY] sname: pipe.pipe.Det.deploy
INFO-2024/03/07 13:31:37 ContainerManager.go:101: Response to /container/deploy:  {pipe.pipe.Det.deploy 10.18.1.2 fc00::402}
2024/03/07 13:31:37 waiting for table query pipe.pipe.Det.deploy
2024/03/07 13:31:37 MQTT - Received mqtt table query message: {"app_name": "pipe.pipe.Det.deploy", "instance_list": [{"cluster_id": "65e9abd95de6bac1c56cbb7a", "instance_ip": "10.30.0.11", "instance_ip_v6": "fdff::6", "instance_number": 0, "worker_id": "65e9b2e0eef309e81e7963f3", "namespace_ip": "10.18.1.2", "namespace_ip_v6": "fc00::402", "host_ip": "131.159.24.51", "host_port": 50103, "service_ip": [{"Address": "10.30.20.11", "Address_v6": "fdff:2000::4", "IpType": "RR"}, {"IpType": "instance_ip", "Address": "10.30.0.11", "Address_v6": "fdff::6"}]}], "query_key": "pipe.pipe.Det.deploy"}
2024/03/07 13:31:37 MQTT - Subscribed to jobs/pipe.pipe.Det.deploy/updates_available 
2024/03/07 13:31:37 self destruction timeout started for job pipe.pipe.Det.deploy
2024/03/07 13:31:37 Received HTTP request - /container/deploy 
2024/03/07 13:31:37 ReqBody received : [123 34 112 105 100 34 58 50 48 51 52 49 55 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:31:37 Changed port 50100 status toward destination 10.18.1.3:50100
2024/03/07 13:31:37 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::403:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::403:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:31:37 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc000396288, 0x14}, 0x0?, {0xc00029fdb0, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc00094c800)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:31:37 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::403:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::403:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

ERROR-2024/03/07 13:31:37 deployment.go:67: [ERROR]:  running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::403:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::403:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

2024/03/07 13:31:37 [MQTT TABLE QUERY] sname: pipe.pipe.Pre.deploy
2024/03/07 13:31:37 waiting for table query pipe.pipe.Pre.deploy
2024/03/07 13:31:37 MQTT - Received mqtt table query message: {"app_name": "pipe.pipe.Pre.deploy", "instance_list": [{"cluster_id": "65e9abd95de6bac1c56cbb7a", "instance_ip": "10.30.0.12", "instance_ip_v6": "fdff::7", "instance_number": 0, "service_ip": [{"Address": "10.30.0.10", "Address_v6": "fdff:2000::5", "IpType": "RR"}, {"IpType": "instance_ip", "Address": "10.30.0.12", "Address_v6": "fdff::7"}]}], "query_key": "pipe.pipe.Pre.deploy"}
2024/03/07 13:31:37 TranslationTable: Invalid Entry, wrong nodeip
ERROR-2024/03/07 13:31:37 EnvironmentManager.go:529: InvalidEntry
2024/03/07 13:31:37 MQTT - Subscribed to jobs/pipe.pipe.Pre.deploy/updates_available 
2024/03/07 13:31:37 self destruction timeout started for job pipe.pipe.Pre.deploy
2024/03/07 13:31:38 Received HTTP request - /container/deploy 
2024/03/07 13:31:38 ReqBody received : [123 34 112 105 100 34 58 50 48 51 52 53 51 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 82 101 99 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 34 125]
2024/03/07 13:31:38 [MQTT TABLE QUERY] sname: pipe.pipe.Rec.deploy
INFO-2024/03/07 13:31:38 ContainerManager.go:101: Response to /container/deploy:  {pipe.pipe.Rec.deploy 10.18.1.3 fc00::403}
2024/03/07 13:31:38 waiting for table query pipe.pipe.Rec.deploy
2024/03/07 13:31:38 MQTT - Received mqtt table query message: {"app_name": "pipe.pipe.Rec.deploy", "instance_list": [{"cluster_id": "65e9abd95de6bac1c56cbb7a", "instance_ip": "10.30.0.13", "instance_ip_v6": "fdff::8", "instance_number": 0, "worker_id": "65e9b2e0eef309e81e7963f3", "namespace_ip": "10.18.1.3", "namespace_ip_v6": "fc00::403", "host_ip": "131.159.24.51", "host_port": 50103, "service_ip": [{"Address": "10.30.20.20", "Address_v6": "fdff:2000::6", "IpType": "RR"}, {"IpType": "instance_ip", "Address": "10.30.0.13", "Address_v6": "fdff::8"}]}], "query_key": "pipe.pipe.Rec.deploy"}
2024/03/07 13:31:38 MQTT - Subscribed to jobs/pipe.pipe.Rec.deploy/updates_available 
2024/03/07 13:31:38 self destruction timeout started for job pipe.pipe.Rec.deploy
INFO-2024/03/07 13:31:41 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:41 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:42 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:42 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:43 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:43 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:44 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:44 ProxyTunnel.go:337: Packet forwarded locally
2024/03/07 13:31:44 Received HTTP request - /container/deploy 
2024/03/07 13:31:44 ReqBody received : [123 34 112 105 100 34 58 50 48 52 50 57 52 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:31:45 Changed port 50100 status toward destination 10.18.1.4:50100
2024/03/07 13:31:45 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:31:45 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc000aba768, 0x14}, 0x0?, {0xc00027cca0, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc0001f3700)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:31:45 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

ERROR-2024/03/07 13:31:45 deployment.go:67: [ERROR]:  running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

INFO-2024/03/07 13:31:45 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:45 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:46 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:46 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:47 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:47 ProxyTunnel.go:337: Packet forwarded locally
2024/03/07 13:31:47 De-registering from pipe.pipe.Pre.deploy
INFO-2024/03/07 13:31:48 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:48 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:49 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:49 ProxyTunnel.go:337: Packet forwarded locally
2024/03/07 13:31:49 Received HTTP request - /container/deploy 
2024/03/07 13:31:49 ReqBody received : [123 34 112 105 100 34 58 50 48 52 54 57 53 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:31:50 Changed port 50100 status toward destination 10.18.1.4:50100
2024/03/07 13:31:50 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:31:50 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc00094a7e0, 0x14}, 0x0?, {0xc000462a30, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc000116480)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:31:50 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

ERROR-2024/03/07 13:31:50 deployment.go:67: [ERROR]:  running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

2024/03/07 13:31:50 [MQTT TABLE QUERY] sname: pipe.pipe.Pre.deploy
2024/03/07 13:31:50 waiting for table query pipe.pipe.Pre.deploy
2024/03/07 13:31:50 MQTT - Received mqtt table query message: {"app_name": "pipe.pipe.Pre.deploy", "instance_list": [{"cluster_id": "65e9abd95de6bac1c56cbb7a", "instance_ip": "10.30.0.12", "instance_ip_v6": "fdff::7", "instance_number": 0, "service_ip": [{"Address": "10.30.0.10", "Address_v6": "fdff:2000::5", "IpType": "RR"}, {"IpType": "instance_ip", "Address": "10.30.0.12", "Address_v6": "fdff::7"}]}], "query_key": "pipe.pipe.Pre.deploy"}
2024/03/07 13:31:50 TranslationTable: Invalid Entry, wrong nodeip
ERROR-2024/03/07 13:31:50 EnvironmentManager.go:529: InvalidEntry
2024/03/07 13:31:50 MQTT - Subscribed to jobs/pipe.pipe.Pre.deploy/updates_available 
2024/03/07 13:31:50 self destruction timeout started for job pipe.pipe.Pre.deploy
INFO-2024/03/07 13:31:50 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:50 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:51 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:51 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:52 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:52 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:53 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:53 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:54 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:54 ProxyTunnel.go:337: Packet forwarded locally
2024/03/07 13:31:55 Received HTTP request - /container/deploy 
2024/03/07 13:31:55 ReqBody received : [123 34 112 105 100 34 58 50 48 53 48 55 53 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:31:55 Changed port 50100 status toward destination 10.18.1.4:50100
2024/03/07 13:31:55 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:31:55 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc000abb4e8, 0x14}, 0x0?, {0xc00027def0, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc0001f3f80)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:31:55 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

ERROR-2024/03/07 13:31:55 deployment.go:67: [ERROR]:  running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.

INFO-2024/03/07 13:31:55 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:55 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:56 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:56 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:57 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:57 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:58 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:58 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:59 ProxyTunnel.go:337: Packet forwarded locally
INFO-2024/03/07 13:31:59 ProxyTunnel.go:337: Packet forwarded locally
2024/03/07 13:32:00 Received HTTP request - /container/deploy 
2024/03/07 13:32:00 ReqBody received : [123 34 112 105 100 34 58 50 48 53 53 49 57 44 34 115 101 114 118 105 99 101 78 97 109 101 34 58 34 112 105 112 101 46 112 105 112 101 46 80 114 101 46 100 101 112 108 111 121 34 44 34 105 110 115 116 97 110 99 101 78 117 109 98 101 114 34 58 48 44 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 34 53 48 49 48 48 58 53 48 49 48 48 47 117 100 112 34 125]
2024/03/07 13:32:00 Changed port 50100 status toward destination 10.18.1.4:50100
2024/03/07 13:32:00 ERROR: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
ERROR-2024/03/07 13:32:00 ContainerNetDeployment.go:140: Error in ManageContainerPorts v6
goroutine 206 [running]:
runtime/debug.Stack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:24 +0x5e
runtime/debug.PrintStack()
        /opt/go/1.21.3/src/runtime/debug/stack.go:16 +0x13
NetManager/env.(*ContainerDeyplomentHandler).DeployNetwork(0x142bb92?, 0x14?, {0xc00094b6c8, 0x14}, 0x0?, {0xc00052e530, 0xf})
        /home/cm/oakestra_net_ansible/node-net-manager/env/ContainerNetDeployment.go:141 +0xb6a
NetManager/handlers.deploymentHandler(0xc000116d80)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:90 +0x174
NetManager/handlers.(*deployTaskQueue).taskExecutor(0x1cfc388)
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:65 +0xdf
created by NetManager/handlers.NewDeployTaskQueue.func1 in goroutine 204
        /home/cm/oakestra_net_ansible/node-net-manager/handlers/deployment.go:51 +0x68
ERROR-2024/03/07 13:32:00 deployment.go:92: [ERROR]: running [/usr/sbin/ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination fc00::404:50100 --wait]: exit status 2: ip6tables v1.8.7 (nf_tables): Bad IP address "fc00::404:50100"

Try `ip6tables -h' or 'ip6tables --help' for more information.
smnzlnsk commented 3 months ago

Yeah, this is obviously an error in the address parsing for the DNAT ip6tables command. It is caused by the port entry being parsed with 50100 as the destination port, which, in combination with the IPv6 address, results in fc00::404:50100, where 50100 is the port and the address portion is fc00::404; since IPv6 addresses themselves use colons as separators, ip6tables cannot tell where the address ends and the port begins and rejects the whole string. The correct representation should be [fc00::404]:50100, which I will have to test with ip6tables for compatibility.

Small oversight on my part, which should be a quick fix.
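
For reference, a minimal sketch of the fix, assuming the DNAT target is built from separate host and port strings (the actual NetManager helpers and call sites may differ): Go's standard library net.JoinHostPort already produces the bracketed form for IPv6 literals.

package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort adds brackets around IPv6 literals, so the
	// address/port boundary in the DNAT target is unambiguous.
	destV4 := net.JoinHostPort("10.18.1.4", "50100")
	destV6 := net.JoinHostPort("fc00::404", "50100")
	fmt.Println(destV4) // 10.18.1.4:50100
	fmt.Println(destV6) // [fc00::404]:50100

	// The failing rule would then become, e.g.:
	// ip6tables -t nat -A OAKESTRA -p udp --dport 50100 -j DNAT --to-destination [fc00::404]:50100 --wait
	fmt.Printf("--to-destination %s\n", destV6)
}

As noted above, whether the installed ip6tables accepts this bracketed form still needs to be verified.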