duckblaster closed this issue 4 years ago
Over a year and I finally have my first issue! What version of k8s are you running?
I'm on latest pfsense using both those plugins so it could be something else entirely. The latest k8s is dropping some old resource types so I'll need to update some of the watches.
I'm running Rancher on vmware, k8s version:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.9", GitCommit:"3e4f6a92de5f259ef313ad876bb008897f6a98f0", GitTreeState:"clean", BuildDate:"2019-08-05T09:22:00Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.5", GitCommit:"0e9fcb426b100a2aea5ed5c25b3d8cfbb01a8acf", GitTreeState:"clean", BuildDate:"2019-08-05T09:13:08Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
I have set up a proxy between the controller and pfsense and intercepted the requests. It seems the error is actually coming from pfsense, in response to the request to restart haproxy:
POST https://pfsense.lab.lan//xmlrpc.php HTTP/1.1
Connection: close
Accept: text/xml
Accept-Encoding: gzip, deflate
Authorization: Basic YWRtaW46RHVja2JsYXN0ZXI3MDkwIQ==
Cookie: PHPSESSID=b7953609ec7eb87627c7e8777df03795; PHPSESSID=b7953609ec7eb87627c7e8777df03795
Host: pfsense.lab.lan
User-Agent: Zend_XmlRpc_Client
X-Forwarded-For: 10.1.1.207
X-Forwarded-Proto: https
X-Forwarded-Host: desktop.lan:44341
Forwarded: proto=https;host=desktop.lan:44341;by=192.168.1.141;for=10.1.1.207;
Request-Id: |6a1b1702-477b44da3ddb9b5d.1.
Content-Type: text/xml; charset=utf-8
Content-Length: 379
<?xml version="1.0" encoding="UTF-8"?>
<methodCall><methodName>pfsense.exec_php</methodName><params><param><value><string>require_once("/usr/local/pkg/haproxy/haproxy.inc");
$messages = null;
$reload = 1;
$ok = haproxy_check_and_run($messages, $reload);
$toreturn = [
'ok' => $ok,
'messages' => $messages,
];</string></value></param></params></methodCall>
Response
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 20 Oct 2019 00:05:00 GMT
Content-Type: text/xml; charset=utf-8
Connection: close
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
Content-Length: 972
<?xml version="1.0" encoding="utf-8"?>
<methodResponse><fault><value><struct><member><name>faultCode</name><value><int>1</int></value></member><member><name>faultString</name><value><string>Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main}</string></value></member></struct></value></fault></methodResponse>
And I am getting spammed with this message in the pfsense notifications:
pfSense is restoring the configuration /cf/conf/backup/config-1571384033.xml @ 2019-10-20 13:04:59
My best guess is that something changed in the haproxy config format. I am running haproxy 0.59_19 (which depends on haproxy17-1.7.11_1).
Can you disable the controller, get pfsense in a clean state (make sure the config is applied and nothing is pending etc) and then attempt to restart haproxy from the GUI? If all that succeeds then fire up the controller again and we'll see what happens.
As an FYI, I have a very similar setup: k8s installed on bare metal using rke, latest pfsense with the latest haproxy plugin.
Thanks for helping out and your patience!
I cleared my haproxy config, and restarted the controller:
2019-10-20T04:00:18+00:00 store successfully initialized
2019-10-20T04:00:18+00:00 waiting for ConfigMap kube-system/kubernetes-pfsense-controller-config to be present and valid
2019-10-20T04:00:23+00:00 controller config loaded/updated
2019-10-20T04:00:23+00:00 loading plugin metallb
2019-10-20T04:00:23+00:00 loading plugin haproxy-declarative
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 99
PHP Warning: Invalid argument supplied for foreach() in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 99
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 230
PHP Fatal error: Uncaught Error: Cannot use string offset as an array in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php:230
Stack trace:
#0 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php(114): KubernetesPfSenseController\Plugin\HAProxyConfig->putBackend(Array)
#1 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Plugin/AbstractPlugin.php(108): KubernetesPfSenseController\Plugin\HAProxyDeclarative->doAction()
#2 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Controller.php(525): KubernetesController\Plugin\AbstractPlugin->invokeAction()
#3 phar:///usr/local/bin/kubernetes-pfsense-controller/controller.php(68): KubernetesController\Controller->main()
#4 /usr/local/bin/kubernetes-pfsense-controller(2): include('phar:///usr/loc...')
#5 {main}
thrown in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 230
It is not updating my openbgp config either.
haproxy-declarative configmap:
apiVersion: v1
data:
data: |
resources:
- type: backend
definition:
name: metallb-nginx-ingress-https
ha_servers:
# declare dynamic nodes by using the backing service
- type: node-service
# serviceNamespace: optional, uses namespace of the ConfigMap by default
# service must be type NodePort or LoadBalancer
serviceNamespace: ingress-nginx
serviceName: metallb-nginx-ingress
servicePort: 443
definition:
name: metallb-nginx-ingress-https
status: active
- type: backend
definition:
name: metallb-nginx-ingress-http
ha_servers:
# declare dynamic nodes by using the backing service
- type: node-service
# serviceNamespace: optional, uses namespace of the ConfigMap by default
# service must be type NodePort or LoadBalancer
serviceNamespace: ingress-nginx
serviceName: metallb-nginx-ingress
servicePort: 80
definition:
name: metallb-nginx-ingress-http
status: active
- type: frontend
definition:
name: duckblaster.dev-https
type: tcp
a_extaddr:
- extaddr: 192.168.1.253
extaddr_port: 443
- type: frontend
definition:
name: duckblaster.dev-http
type: tcp
a_extaddr:
- extaddr: 192.168.1.253
extaddr_port: 80
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"data":"resources:\n - type: backend\n definition:\n name: metallb-nginx-ingress-https\n ha_servers:\n # declare dynamic nodes by using the backing service\n - type: node-service\n # serviceNamespace: optional, uses namespace of the ConfigMap by default\n # service must be type NodePort or LoadBalancer\n serviceNamespace: ingress-nginx\n serviceName: metallb-nginx-ingress\n servicePort: 443\n definition:\n name: metallb-nginx-ingress-https\n status: active\n - type: backend\n definition:\n name: metallb-nginx-ingress-http\n ha_servers:\n # declare dynamic nodes by using the backing service\n - type: node-service\n # serviceNamespace: optional, uses namespace of the ConfigMap by default\n # service must be type NodePort or LoadBalancer\n serviceNamespace: ingress-nginx\n serviceName: metallb-nginx-ingress\n servicePort: 80\n definition:\n name: metallb-nginx-ingress-http\n status: active\n - type: frontend\n definition:\n name: duckblaster.dev-https\n type: tcp\n a_extaddr:\n - extaddr: 192.168.1.253\n extaddr_port: 443\n - type: frontend\n definition:\n name: duckblaster.dev-http\n type: tcp\n a_extaddr:\n - extaddr: 192.168.1.253\n extaddr_port: 80\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"pfsense.org/type":"declarative"},"name":"declarative-example","namespace":"kube-system"}}
creationTimestamp: "2019-08-16T06:22:45Z"
labels:
pfsense.org/type: declarative
name: declarative-example
namespace: kube-system
resourceVersion: "9504"
selfLink: /api/v1/namespaces/kube-system/configmaps/declarative-example
uid: 42c3c902-bfee-11e9-b412-005056816657
pfsense-controller configmap:
apiVersion: v1
data:
config: |
controller-id: "my-cluster"
enabled: true
plugins:
metallb:
enabled: true
nodeLabelSelector: node-role.kubernetes.io/worker=true
nodeFieldSelector:
bgp-implementation: openbgp
options:
openbgp:
# pass through to config.xml
template:
md5sigkey:
md5sigpass:
groupname: metallb
row:
- parameters: announce all
parmvalue:
haproxy-declarative:
enabled: true
haproxy-ingress-proxy:
enabled: false
ingressLabelSelector:
ingressFieldSelector:
defaultFrontend: duckblaster.dev-https
defaultBackend: metallb-nginx-ingress-https
# by default anything is allowed
#allowedHostRegex: "/.*/"
pfsense-dns-services:
enabled: false
serviceLabelSelector:
serviceFieldSelector:
#allowedHostRegex: "/.*/"
dnsBackends:
dnsmasq:
enabled: true
unbound:
enabled: false
pfsense-dns-ingresses:
enabled: false
ingressLabelSelector:
ingressFieldSelector:
#allowedHostRegex: "/.*/"
dnsBackends:
dnsmasq:
enabled: true
unbound:
enabled: false
pfsense-dns-haproxy-ingress-proxy:
enabled: false
#allowedHostRegex: "/.*/"
dnsBackends:
dnsmasq:
enabled: true
unbound:
enabled: false
frontends:
duckblaster.dev-http:
hostname: duckblaster_dev_http.lab.lan
duckblaster.dev-https:
hostname: duckblaster_dev_https.lab.lan
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"config":"controller-id: \"my-cluster\"\nenabled: true\nplugins:\n metallb:\n enabled: true\n nodeLabelSelector: node-role.kubernetes.io/worker=true\n nodeFieldSelector:\n bgp-implementation: openbgp\n options:\n openbgp:\n # pass through to config.xml\n template:\n md5sigkey:\n md5sigpass:\n groupname: metallb\n row:\n - parameters: announce all\n parmvalue:\n haproxy-declarative:\n enabled: false\n haproxy-ingress-proxy:\n enabled: false\n ingressLabelSelector:\n ingressFieldSelector:\n defaultFrontend: duckblaster.dev-https\n defaultBackend: metallb-nginx-ingress-https\n # by default anything is allowed\n #allowedHostRegex: \"/.*/\"\n pfsense-dns-services:\n enabled: false\n serviceLabelSelector:\n serviceFieldSelector:\n #allowedHostRegex: \"/.*/\"\n dnsBackends:\n dnsmasq:\n enabled: true\n unbound:\n enabled: false\n pfsense-dns-ingresses:\n enabled: false\n ingressLabelSelector:\n ingressFieldSelector:\n #allowedHostRegex: \"/.*/\"\n dnsBackends:\n dnsmasq:\n enabled: true\n unbound:\n enabled: false\n pfsense-dns-haproxy-ingress-proxy:\n enabled: false\n #allowedHostRegex: \"/.*/\"\n dnsBackends:\n dnsmasq:\n enabled: true\n unbound:\n enabled: false\n frontends:\n duckblaster.dev-http:\n hostname: duckblaster_dev_http.lab.lan\n duckblaster.dev-https:\n hostname: duckblaster_dev_https.lab.lan\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"kubernetes-pfsense-controller-config","namespace":"kube-system"}}
creationTimestamp: "2019-08-16T05:27:48Z"
name: kubernetes-pfsense-controller-config
namespace: kube-system
resourceVersion: "12032018"
selfLink: /api/v1/namespaces/kube-system/configmaps/kubernetes-pfsense-controller-config
uid: 95553f6a-bfe6-11e9-b412-005056816657
Ok, that's something to work with. I'll try and get a dev env set up with an empty config and see what's going on there.
Can you disable the haproxy plugin(s) and send over the failure from openbgp?
There are no error messages; it fails silently. There aren't even any requests showing in the proxy. Looking at your code, there are a few catch(e) return false lines with no logging in metallb.php.
Well not logging anything isn't super helpful. I'll add some more defensive programming and better logging and get it committed somewhere.
Do you need images for testing or are you comfortable launching from source?
I'll give it a try from source. I just need to run docker build, push to my docker account, and change the image in the kubernetes deployment to the new image, right? I haven't got very far into kubernetes/docker just yet, and my php experience is minimal and old.
That's a pain, I'll make CI build images for a development branch. I've got a bunch of stuff cleaned up already, but I'm gonna add a flag to log all pfsense traffic before committing. I'll have some new code and images up sometime tomorrow.
Thanks, I'm just getting into docker/kubernetes, and I normally program in c# rather than php, so I'm a bit out of my depth with this.
You're doing great if you already have understanding enough to get this installed and conceptually understand what's going on!
OK, I've just built a new image tagged next that you can try out. It has better logging in failure scenarios and introduces a new env variable PFSENSE_DEBUG="true" that will log all xmlrpc traffic with pfsense to the console. I don't know that we'll have solved your issue, but at least we should have some better insight into what's failing.
Oh, and the empty haproxy config issue should be gone now as well but we'll see :)
Test it out and let me know. Thanks!
Well, I figured out why metallb wasn't updating: the controller was looking for a configmap named "config" in namespace "metallb-system" and mine was named "metallb" instead. Renaming it to "config" fixed it.
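For reference, this is the shape the controller is apparently watching for (pre-CRD metallb takes its settings from a ConfigMap literally named config in metallb-system); the addresses and ASNs below are illustrative only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config            # must be exactly this name for the watch to match
data:
  config: |
    peers:
    - peer-address: 192.168.1.1    # illustrative pfSense address
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.2.0/24
```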
Found some of the errors in the haproxy module:
If there are no frontends or backends in the pfsense config the controller crashes with
PHP Warning: key_exists() expects parameter 2 to be array, string given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 123
PHP Warning: key_exists() expects parameter 2 to be array, string given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 310
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 311
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 314
PHP Fatal error: Uncaught Error: Cannot use string offset as an array in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php:314
Stack trace:
#0 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php(114): KubernetesPfSenseController\Plugin\HAProxyConfig->putBackend(Array)
#1 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Plugin/AbstractPlugin.php(108): KubernetesPfSenseController\Plugin\HAProxyDeclarative->doAction()
#2 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Controller.php(525): KubernetesController\Plugin\AbstractPlugin->invokeAction()
#3 phar:///usr/local/bin/kubernetes-pfsense-controller/controller.php(68): KubernetesController\Controller->main()
#4 /usr/local/bin/kubernetes-pfsense-controller(2): include('phar:///usr/loc...')
#5 {main}
thrown in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 314
I fixed that error by adding a dummy frontend and backend.
I think I found the root cause of the XML_RPC2_InvalidTypeEncodeException coming from pfsense: the config you are sending doesn't match the format expected by haproxy.
This is from the config.xml.bad file created when the config was automatically reverted:
<ha_backends>
<item>
<name>dummy</name>
<status>active</status>
<type>http</type>
<a_extaddr>
<item>
<extaddr>wan_ipv4</extaddr>
<extaddr_port>80</extaddr_port>
<_index></_index>
</item>
</a_extaddr>
<desc>test</desc>
</item>
<item>
<name>duckblaster.dev-https</name>
<type>tcp</type>
<a_extaddr>
<0>
<extaddr>192.168.1.253</extaddr>
<extaddr_port>443</extaddr_port>
</0>
</a_extaddr>
</item>
<item>
<name>duckblaster.dev-http</name>
<type>tcp</type>
<a_extaddr>
<0>
<extaddr>192.168.1.253</extaddr>
<extaddr_port>80</extaddr_port>
</0>
</a_extaddr>
</item>
</ha_backends>
Dummy config from web UI:
<a_extaddr>
<item>
<extaddr>wan_ipv4</extaddr>
<extaddr_port>80</extaddr_port>
<_index></_index>
</item>
</a_extaddr>
Generated config from controller:
<a_extaddr>
<0>
<extaddr>192.168.1.253</extaddr>
<extaddr_port>80</extaddr_port>
</0>
</a_extaddr>
Wow, sounds like your config is returning some funky stuff (a string instead of an array). I'll look into that a little deeper.
Where did you get that example for a frontend definition? I don't see one anywhere in the code-base, so I'm guessing the controller is doing what it's supposed to and the yaml just needs to be updated (the syntax is a little strange, to be clear). For starters, wipe your config again in pfsense and then leave only the backend definitions in there, and we'll see if that's working correctly first. While you're doing that, I'll look up the correct syntax to use for a frontend definition.
Interesting results: the backends are added correctly (at least from what I can see), but pfsense still returns an error when the controller tries to restart the haproxy service:
2019-10-21T23:08:23+00:00 plugin (haproxy-declarative): failed reload HAProxy service: Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main} (1)
Did you wipe the config completely (getting the <0> stuff out of there)?
I completely removed all frontends and backends; the controller crashes and doesn't add anything to pfsense:
2019-10-21T23:16:23+00:00 store successfully initialized
2019-10-21T23:16:23+00:00 waiting for ConfigMap kube-system/kubernetes-pfsense-controller-config to be present and valid
2019-10-21T23:16:29+00:00 controller config loaded/updated
2019-10-21T23:16:29+00:00 loading plugin metallb
2019-10-21T23:16:29+00:00 loading plugin haproxy-declarative
2019-10-21T23:16:29+00:00 plugin (metallb): /api/v1/namespaces/metallb-system/configmaps/config ADDED - 12357730
2019-10-21T23:16:29+00:00 plugin (metallb): successfully reloaded openbgp service
PHP Warning: key_exists() expects parameter 2 to be array, string given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 123
PHP Warning: key_exists() expects parameter 2 to be array, string given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 310
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 311
PHP Warning: Illegal string offset 'item' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 314
PHP Fatal error: Uncaught Error: Cannot use string offset as an array in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php:314
Stack trace:
#0 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php(114): KubernetesPfSenseController\Plugin\HAProxyConfig->putBackend(Array)
#1 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Plugin/AbstractPlugin.php(108): KubernetesPfSenseController\Plugin\HAProxyDeclarative->doAction()
#2 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Controller.php(525): KubernetesController\Plugin\AbstractPlugin->invokeAction()
#3 phar:///usr/local/bin/kubernetes-pfsense-controller/controller.php(68): KubernetesController\Controller->main()
#4 /usr/local/bin/kubernetes-pfsense-controller(2): include('phar:///usr/loc...')
#5 {main}
thrown in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyConfig.php on line 314
Adding a dummy frontend makes no difference, but a dummy backend allows it to add the backends to pfsense, then it fails to restart haproxy:
2019-10-21T23:18:26+00:00 store successfully initialized
2019-10-21T23:18:26+00:00 waiting for ConfigMap kube-system/kubernetes-pfsense-controller-config to be present and valid
2019-10-21T23:18:31+00:00 controller config loaded/updated
2019-10-21T23:18:31+00:00 loading plugin metallb
2019-10-21T23:18:31+00:00 loading plugin haproxy-declarative
2019-10-21T23:18:31+00:00 plugin (metallb): /api/v1/namespaces/metallb-system/configmaps/config ADDED - 12357730
2019-10-21T23:18:31+00:00 plugin (metallb): successfully reloaded openbgp service
2019-10-21T23:18:32+00:00 plugin (haproxy-declarative): failed exec_php call: Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main} (1)
2019-10-21T23:18:32+00:00 plugin (haproxy-declarative): failed reload HAProxy service: Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main} (1)
2019-10-21T23:18:32+00:00 plugin (haproxy-declarative): failed update/reload: Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main} (1)
Ugh, XmlRpc should handle all this automatically :( I'm guessing what's going on is that the ha_pools key is in the struct but it's empty/null instead of an empty list/array. If possible, can you turn on debugging for pfsense traffic and send over the response for getting the full haproxy config block (if any secrets are in there you can email them to me)? I'm interested to see what the response looks like.
In any case, thanks for the patience, I'm confident we can have this sorted out shortly.
I wonder if the null is coming from PfSenseAbstract.php#L139-L144, maybe setting $message to an empty string if it is null?
That xmlrpc null exception is definitely coming from the call to restart haproxy rather than from the save/load of the config.
True. I don't know if that error is coming from something else going on in the background related to a 'bad' config though. For instance, it could be coming from pfsense trying to do its own xmlrpc sync to a hot-backup node, so we'd be 1 step removed.
I'm committing some code shortly to be even more stringent in checking the values. We'll try it out.
OK, new images built. You'll need to make sure to set the image pull policy to force it to pull the latest image otherwise it will just stay on the current version. If you need help with that let me know.
https://kubernetes.io/docs/concepts/containers/images/#updating-images
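A sketch of the relevant deployment fields for that, assuming the image lives at travisghansen/kubernetes-pfsense-controller (the repository and container name here are assumptions; adjust to whatever the install manifests actually use):

```yaml
spec:
  containers:
  - name: controller                                     # name is illustrative
    image: travisghansen/kubernetes-pfsense-controller:next
    imagePullPolicy: Always   # re-pull the moving 'next' tag on every pod start
```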
I tested a manual xmlrpc request to simulate the call to restart haproxy, and changing the php code sent to exec_php to this works:
require_once("/usr/local/pkg/haproxy/haproxy.inc");
$messages = null;
$reload = 1;
$ok = haproxy_check_and_run($messages, $reload);
if($messages == null) {
$messages = "null";
}
$toreturn = [
'ok' => $ok,
'messages' => $messages,
];
Ah ok, I'll add that shortly then, nice find!
It shouldn't be needed, as haproxy_check_and_run receives it by reference and sets it to "", although my knowledge of php references is from looking at the php manual on references for less than 5 minutes just now.
Well, the xmlrpc serializer may not like a null value and so wants a string "null" to represent it properly. I'll hard code it for my scenario and see how it's interpreted over the wire.
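For what it's worth, Python's stdlib XML-RPC marshaller makes the same complaint, which is a quick way to sanity-check the theory (a sketch, unrelated to the controller's actual code):

```python
import xmlrpc.client

# Plain XML-RPC has no null type, so marshalling a struct containing None
# fails -- the same class of failure as the PEAR
# XML_RPC2_InvalidTypeEncodeException in the pfsense response above.
try:
    xmlrpc.client.dumps(({"ok": True, "messages": None},))
except TypeError as err:
    print("marshal failed:", err)

# Substituting an empty string makes the struct encodable:
payload = xmlrpc.client.dumps(({"ok": True, "messages": ""},))
print("<string></string>" in payload)
```

(Python's marshaller does have an `allow_none=True` escape hatch that emits a non-standard `<nil/>`, but that only helps if both ends agree on the extension.)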
Here is the payload from the config update request trying to set both frontends and backends for haproxy: haproxy-full-config-send-xmlrpc.txt
I hate xmlrpc, xml already understands objects, arrays, and data types, must have been invented by someone who didn't really understand doctype definitions or who wanted to look good for management and got paid by the line.
I think I have found the problem with the frontend update: in HAProxyDeclarative.php line 176 you are returning the definition as-is, so the following yaml:
name: duckblaster.dev-http
type: tcp
a_extaddr:
- extaddr: 192.168.1.253
extaddr_port: 80
becomes
<item>
<name>duckblaster.dev-http</name>
<type>tcp</type>
<a_extaddr>
<0>
<extaddr>192.168.1.253</extaddr>
<extaddr_port>80</extaddr_port>
</0>
</a_extaddr>
</item>
I am not sure what php code is needed to fix that, although changing the yaml to this works:
name: duckblaster.dev-http
type: tcp
a_extaddr:
item:
- extaddr: 192.168.1.253
extaddr_port: 80
Note the added item: line; that's what fixes it.
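A hypothetical helper (not something in the controller today) illustrating the shape change the item: key produces: bare lists coming out of the YAML get wrapped so the repeated child serializes as <item> elements instead of invalid numeric tags like <0>.

```python
# Hypothetical helper, not the controller's code: recursively wrap bare
# lists under an 'item' key so that, when the structure is serialized
# into config.xml, repeated children come out as <item> elements rather
# than numeric tags like <0>.
def wrap_items(node):
    if isinstance(node, list):
        return {"item": [wrap_items(v) for v in node]}
    if isinstance(node, dict):
        return {k: wrap_items(v) for k, v in node.items()}
    return node

frontend = {
    "name": "duckblaster.dev-http",
    "type": "tcp",
    "a_extaddr": [{"extaddr": "192.168.1.253", "extaddr_port": 80}],
}
fixed = wrap_items(frontend)
print(fixed["a_extaddr"])
# {'item': [{'extaddr': '192.168.1.253', 'extaddr_port': 80}]}
```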
Ok yeah, that's what I was referring to earlier, the syntax is pretty bizarre.
Can you try this block for the reload stuff?
require_once("/usr/local/pkg/haproxy/haproxy.inc");
$messages = null;
$reload = 1;
$ok = haproxy_check_and_run($messages, $reload);
if($messages == null) {
$messages = "";
}
$toreturn = [
'ok' => $ok,
'messages' => $messages,
];
That php snippet is exactly what I tested, and it works.
A more complete yaml is as follows:
resources:
- type: backend
definition:
name: metallb-nginx-ingress-https
monitor_uri: /healthz
ha_servers:
# declare dynamic nodes by using the backing service
- type: node-service
# serviceNamespace: optional, uses namespace of the ConfigMap by default
# service must be type NodePort or LoadBalancer
serviceNamespace: ingress-nginx
serviceName: metallb-nginx-ingress
servicePort: 443
definition:
name: metallb-nginx-ingress-https
status: active
checkssl: yes
- type: backend
definition:
name: metallb-nginx-ingress-http
monitor_uri: /healthz
ha_servers:
# declare dynamic nodes by using the backing service
- type: node-service
# serviceNamespace: optional, uses namespace of the ConfigMap by default
# service must be type NodePort or LoadBalancer
serviceNamespace: ingress-nginx
serviceName: metallb-nginx-ingress
servicePort: 80
definition:
name: metallb-nginx-ingress-http
status: active
- type: frontend
definition:
name: duckblaster.dev-https
type: tcp
status: active
backend_serverpool: metallb-nginx-ingress-https
a_extaddr:
item:
- extaddr: 192.168.1.253_ipv4
extaddr_port: 443
- type: frontend
definition:
name: duckblaster.dev-http
type: tcp
status: active
backend_serverpool: metallb-nginx-ingress-http
a_extaddr:
item:
- extaddr: 192.168.1.253_ipv4
extaddr_port: 80
My snippet is setting it to "" instead of "null", so it's slightly different. Let me commit it real quick, and then we'll have you start from scratch and see if all the warnings etc are cleared up!
EDIT: ah, I see you tested what I sent and not your example from earlier. Looks great.
Ok, new image pushed with reload fix in place. If you can start from scratch again that would be great. Thanks!
I'll add your simple frontend example to the codebase as well to give folks a template to work from.
Then we'll move on to any of the other plugins and make sure they're all sane.
Everything is working fine now for the metallb and haproxy modules, thanks! Now I need to figure out why metallb doesn't want to work on my cluster at all right now. Neither layer2 nor bgp mode is working; it just times out.
Nice! Thanks for working through that with me! I'll add some detection to deal with newer kubernetes versions and snap a release.
Need any help with any of the other plugins?
Regarding metallb functioning I'm not an expert on that one :( did the plugin properly configure and reload openbgp and at least that side of the equation is ok?
Yes, the openbgp config loaded correctly, I think my problem is more likely my k8s nodes or something, no matter what options I set in metallb it never responds.
Unless you can recommend a config that will use pfsense/haproxy as the layer 4 load balancer instead of metallb?
I suppose a new plugin haproxy-service-proxy could be created...but I really think you'll be better served by the likes of metallb over the long haul (in particular bgp for prod workloads). The plugin would be difficult to 'scale' in the sense of supporting many services etc.
Are your loadbalancer services getting assigned ip addresses as expected?
Yeah, the ips are assigned as expected, just not listening, seems like a firewall issue on the k8s nodes. My best guess is it's because rancheros runs nested docker.
Hmmm, yeah, I use rke with centos hosts so I don't have much experience with rancheros. Seems strange they wouldn't support it though.
Perhaps this is legit/related? https://forums.rancher.com/t/baremetal-metallb-loadbalancer/11681
Thanks again for navigating this issue. We've certainly made the code better as a result.
I've released v0.1.8, which has all the fixes mentioned here and more.
metallb fails silently, and haproxy-declarative spams serialization errors.
I haven't tested the other plugins yet. This config was working on an older version, but I can't recall which. I also get notifications in the pfSense webadmin about restoring config from backups.
pfSense Version: 2.4.4-RELEASE-p3 (amd64) built on Wed May 15 18:53:44 EDT 2019 FreeBSD 11.2-RELEASE-p10