Closed kungfuchicken closed 3 years ago
Can you give a bit more detail on your setup?
This is coming from a cloud install in GKE connected with edge clusters running k3s. I've been able to continue sending messages back and forth even with the container down, but technically the deployment created by the skupper-site-controller (used in both clusters) does not have the minimum number of replicas in the ReplicaSet.
The pod goes into CrashLoopBackOff.
Which version of skupper is this with (skupper version should tell you)?
The Skupper client reports 0.3.2. I double-checked the image used in the pod, too; since the pod was deployed via the controller, I figured it would be whatever the controller was using. The controller is at 0.3.2:
- name: SKUPPER_SERVICE_CONTROLLER_IMAGE
  value: quay.io/skupper/service-controller:0.3.2
image: quay.io/skupper/site-controller:0.3.2
The pod container for the bridge-server reports image: quay.io/skupper/bridge-server:0.3
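For anyone else verifying which images are actually running, something like the following should work; note that the deployment name and pod label here are assumptions based on a typical site-controller install, so adjust them to match your cluster:

```shell
# List the images used by the site controller deployment.
# "skupper-site-controller" is an assumed deployment name.
kubectl get deploy skupper-site-controller \
  -o jsonpath='{.spec.template.spec.containers[*].image}'

# List the images of all containers in the router pod,
# which should include the bridge-server container.
# The "application=skupper-router" label is an assumption.
kubectl get pods -l application=skupper-router \
  -o jsonpath='{.items[*].spec.containers[*].image}'
```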
It seems like this issue becomes a non-issue once https://github.com/skupperproject/skupper/pull/284 is done, so please don't let this be a distraction :D Just trying to make sure I provide useful bug info.
Seeing this issue as well with v0.3.2: the bridge server on the "hub" side of things is stuck in CrashLoopBackOff. The edge bridge servers seem fine and are opening and closing TCP connections reliably. The router container is still in working order; however, HTTP connections are completely broken in this state.
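For reference, this is roughly how we pulled the crash details out of the hub cluster. The label and container names below are assumptions for a 0.3.x install (bridge-server running as a container in the router pod), so adjust them to your setup:

```shell
# Show pod status and restart counts for the router pod.
kubectl get pods -l application=skupper-router

# Logs from the previous (crashed) run of the bridge-server container.
kubectl logs deploy/skupper-router -c bridge-server --previous

# Events on the pod usually show the CrashLoopBackOff back-off timing.
kubectl describe pod -l application=skupper-router
```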
For more information, we're creating the skupper connection on both the "hub" and the "edge" side via the Skupper Site Controller and this YAML:
hub
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site
data:
  cluster-local: "false"
  console: "true"
  console-authentication: internal
  console-password: "barney"
  console-user: "rubble"
  edge: "false"
  name: test-cloud
  router-console: "true"
  service-controller: "true"
  service-sync: "true"
edge
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site
data:
  cluster-local: "false"
  console: "true"
  console-authentication: internal
  console-password: "barney"
  console-user: "rubble"
  edge: "true"
  name: test-edge
  router-console: "true"
  service-controller: "true"
  service-sync: "true"
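For completeness, each of these ConfigMaps is applied to its own cluster with plain kubectl; the file names and context names below are placeholders, not the actual values from our environment:

```shell
# Apply each site ConfigMap in the namespace watched by its site controller.
# "hub" / "edge" contexts and file names are placeholders.
kubectl --context hub apply -f skupper-site-hub.yaml
kubectl --context edge apply -f skupper-site-edge.yaml
```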
I believe this may warrant a higher priority if the removal of the bridge server doesn't resolve the issue.
Closing as bridge server has been deprecated.
Shortly after starting up, the bridge-server container crashes with this error:
I know the bridge-server is set to be removed soon, but I'm not sure whether this will remain an issue with the new strategy.