I think we have a race condition between starting up the node controller and having virtual kubelet call `provider.ConfigureNode`.

Problem: If kip starts up before virtual kubelet calls `provider.ConfigureNode()`, then calling `NodeStatusController.setNodeStatus` will set the `nodeReady` and `networkUnavailable` parameters in the controller, caching those values. It will then try to create the node in k8s, but that fails since the callback (from virtual-kubelet) doesn't exist yet. Subsequent calls to `setNodeStatus` will also fail since the controller thinks those values are already set in the node.
Changes:
- We have to keep setting the variables since `InstanceProvider.ConfigureNode` calls `GetNodeStatus()`, so keep `last(NodeReady|NetworkUnavailable)` around to track what we've actually set in k8s.
- Have `setNodeStatus` return a bool, returning false if it fails to set the node status. Use this to call `setNodeStatus` repeatedly at startup until it succeeds.