jimangel closed this 5 years ago
That's a tricky one. We hadn't planned to since it hasn't impacted most people much. How severe of an issue is this for you?
It's crashing a fair number of our nodes (2-3 at any given time) in a 20-node cluster with:
Error: failed to start container "<name>": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:291: setting cgroup config for ready process caused \"failed to write 9223372036854775807 to memory.memsw.limit_in_bytes: open /sys/fs/cgroup/memory/kubepods/pod8b7f9982-54d8-11e9-a1ee-005056802f91/<name>/memory.memsw.limit_in_bytes: permission denied\""
We are running the patched docker 17.03 w/ k8s 1.10.11 and getting ready to upgrade to k8s 1.13.5, which would allow us to jump to docker 18.06. However, it won't be until k8s 1.14 that we can upgrade to the runc-patched version of docker (18.09.2).
It's also not clear to me whether this can be avoided via a k/k patch. I see the backported e2e test bumping it in testing, but I don't see any upstream changes to k/k for the issue.
Looking further, these might not even be related: permission denied vs. device or resource busy.
I'm not convinced there's a correlation... /close
See: https://github.com/opencontainers/runc/issues/1980 for context.