What version of OpenShift are you using? Or is it from 'origin' HEAD? Also try running with --log-level=4
@rajatchopra Using OpenShift 1.0.6.
--log-level is not a valid option for "openshift start node".
--loglevel
@liggitt Thanks, it worked. Seems like a hidden option, as --help doesn't display it.
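For anyone searching later, a node start with verbose logging looks something like this (the config path here is purely illustrative; substitute your own node-config.yaml):

openshift start node --config=/path/to/node-config.yaml --loglevel=4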
@rajatchopra
Seems like the plugin is loaded (but there is no call to Init()).
I1105 15:26:02.418600 11494 server.go:704] Watching apiserver
I1105 15:26:02.426003 11494 plugins.go:108] Loaded network plugin "cisco/n1k"
I1105 15:26:02.504122 11494 config.go:276] Setting pods for source api : {[] 0 api}
When I create a pod, the error is still the same, with no additional information.
I1105 15:26:51.677792 11494 manager.go:293] Container inspect result: {ID:46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a Created:2015-11-05 23:26:50.889091379 +0000 UTC Path:/pod Args:[] Config:0xc208cb4ea0 State:{Running:true Paused:false Restarting:false OOMKilled:false Pid:11580 ExitCode:0 Error: StartedAt:2015-11-05 23:26:51.320872358 +0000 UTC FinishedAt:0001-01-01 00:00:00 +0000 UTC} Image:0fabc4edc8b4ce0384b210e158f262a23e5a52e8cfb77af41ad0eefeb8145c90 Node:<nil> NetworkSettings:0xc20872eb00 SysInitPath: ResolvConfPath:/var/lib/docker/containers/46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a/resolv.conf HostnamePath:/var/lib/docker/containers/46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a/hostname HostsPath:/var/lib/docker/containers/46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a/hosts LogPath:/var/lib/docker/containers/46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a/46f53653c0f0a1414b2b61da82ee9f01229eccd16c5e1aa25e7b4cafbc63de9a-json.log Name:/k8s_POD.89e9f5dd_hello-openshift_default_b0f217d6-8414-11e5-81b8-0050569e75dc_0a0a2708 Driver:devicemapper Volumes:map[] VolumesRW:map[] HostConfig:0xc208644480 ExecIDs:[] RestartCount:0 AppArmorProfile:}
E1105 15:26:51.681387 11494 manager.go:313] NetworkPlugin cisco/n1k failed on the status hook for pod 'hello-openshift' - exit status 1
I1105 15:26:51.695612 11494 status_manager.go:169] Status for pod "hello-openshift_default" updated successfully
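For context, the failing status hook above is part of the exec plugin call convention documented in exec.go (linked in the original report below). A minimal sketch of such a script, with the hook arguments taken from that doc comment and the status JSON shape treated as an assumption to verify against your release:

#!/bin/bash
# Minimal exec network plugin sketch. Assumption: the kubelet invokes the
# executable as "<plugin> init|setup|teardown|status [<pod_namespace>
# <pod_name> <docker_id_of_infra_container>]", per the exec.go doc comment.
case "$1" in
  init)
    # Called once when the kubelet probes and loads the plugin.
    exit 0
    ;;
  setup|teardown)
    # $2=pod namespace, $3=pod name, $4=docker ID of the infra container.
    # Wire pod networking up or down here.
    exit 0
    ;;
  status)
    # The kubelet captures and parses this output, so nothing but the status
    # document may reach stdout, and the script must exit 0. The exact JSON
    # shape is an assumption; check exec.go for your release.
    echo '{"kind":"PodNetworkStatus","apiVersion":"v1beta1","ip":"10.1.2.3"}'
    exit 0
    ;;
  *)
    exit 1
    ;;
esac

The "exit status 1" in the log above is the script's own exit code bubbling up through the hook.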
Sorry, try with --loglevel=5. Also, do you see setup/teardown being called properly?
@rajatchopra Thanks, that helped.
Found the issue. The plugin is a shell script, and inside the script I was printing debug information and piping it into a file. But it seems that, because of the way the script is invoked, all of its output (not only stdout and stderr) is captured by the caller.
It would be nice for the plugin to be able to keep debug information in its own log file.
Is this behavior (all output from the plugin being hijacked by the kubelet, even when it is written to a file other than stdout or stderr) intentional? Or should I create an issue for it?
Also, --loglevel is not shown by the help. Is that intentional, or shall I create an issue for that as well?
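As a guess at what happened here (assuming the debug line was piped through something like tee): a pipe keeps the text on stdout, which the kubelet captures, while a plain append redirect goes only to the file.

echo "debug: setup called" | tee -a /tmp/n1k.log   # also writes to stdout, so the kubelet sees it
echo "debug: setup called" >> /tmp/n1k.log         # goes only to the file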
@joeswaminathan We are going to move away from the exec-based plugin soon. The network parameters will be captured natively and called out through CNI. It is unlikely that we want to fix the exec pipe to stop capturing stdout/stderr. Let's say anything on the exec hook is a 'won't fix' unless it is critical, because we plan to phase it out.
I have followed the instructions described in https://github.com/openshift/origin/issues/1919 (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/network/exec/exec.go#L17).
I see that node-config.yaml is created properly.
In addition to the config, I also start the node with --network-plugin=cisco/n1k.
My plugin is placed in a directory as per the instructions (the layout I am assuming is sketched at the end of this post).
That directory contains an executable named "n1k".
Docker is started on the node with the default network.
Despite all that, I don't see the executable being invoked when the node is started (Init() was not called). Later, when I create a pod, I see the error quoted above and no sign of the executable being called.
Any help is appreciated.
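For completeness, here is the layout and start command being assumed, following the naming convention in the exec.go doc comment, where '~' in the directory name stands in for '/' in the plugin name. The plugin directory below is the usual kubelet default and may differ on your install (it is overridable with --network-plugin-dir):

# Install the plugin under the kubelet's exec plugin directory; the directory
# name "cisco~n1k" maps to the plugin name "cisco/n1k", and the executable
# inside it carries the base name "n1k".
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/net/exec/cisco~n1k
install -m 0755 n1k /usr/libexec/kubernetes/kubelet-plugins/net/exec/cisco~n1k/n1k

# Then start the node, naming the plugin explicitly (config path illustrative):
openshift start node --config=/path/to/node-config.yaml --network-plugin=cisco/n1k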