Closed: dokime7 closed this issue 6 years ago.
That's most likely because of the usual kernel pts weirdness. Try ssh-ing into your container or get a separate pts using "script /dev/null /bin/bash", that may help there.
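A quick way to see that `script` really does hand you a fresh pseudo-terminal (the `-q`/`-c` flags are standard util-linux options):

```shell
# "script" runs the command on a freshly allocated pseudo-terminal,
# so whatever runs inside gets its own /dev/pts/N device:
script -q -c 'tty' /dev/null
```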
I don't quite understand, but I'm already connected by SSH to the LXD container when I run docker-compose up.
From the LXD container docker1, there is no tty file in /var/lib/docker/btrfs/subvolumes/77cc0dd8362cc94816957585d98e7ca50dfa2ebcfeceea2f7cb47ea79a7ee7e7/dev/
root@docker1:~# ls /var/lib/docker/btrfs/subvolumes/77cc0dd8362cc94816957585d98e7ca50dfa2ebcfeceea2f7cb47ea79a7ee7e7/dev
console pts shm
But if I enter a running Docker container, the tty file is there...
root@docker1:~/mailcow-dockerized# docker-compose exec clamd-mailcow bash
bash-4.3# ls /dev
console fd mqueue ptmx random stderr stdout urandom
core full null pts shm stdin tty zero
bash-4.3#
Obviously mailcow-dockerized works well on Docker outside of an LXD container.
@brauner thoughts?
Will take a look in a few.
Thank you very much!
I'm trying to reproduce here.
I don't know what this docker-compose thing is doing, but it's been building stuff for ages now.
Yes it takes a while...
@dokime7 you said that it works on the host. Does your host run on btrfs as well?
It's on another server that is on ext4
I could maybe create a ZFS partition in order to test with...
@dokime7 would be fantastic if you could!
OK I will do it. Do you think there is a connection with btrfs?
Not completely sure but it could be.
The fact that you get ENODEV sounds very much like btrfs weirdness, because what runC itself is trying to do is simply create a dummy file which will serve as the target of a bind-mount of /dev/tty, but it somehow can't create that file. This makes me suspect that there's some weirdness going on with btrfs itself.
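A rough sketch of that device-setup path (paths and names are illustrative, not runC's actual code):

```shell
# What runC effectively does for /dev/tty in a user-namespaced container:
# it cannot mknod real device nodes there, so it creates an empty dummy
# file in the container rootfs and bind-mounts the host's /dev/tty over it.
rootfs=/tmp/demo-rootfs
mkdir -p "$rootfs/dev"
touch "$rootfs/dev/tty"        # dummy file: the bind-mount target
# The bind-mount itself needs privileges; shown commented for illustration:
# mount --bind /dev/tty "$rootfs/dev/tty"
ls "$rootfs/dev"
```

The error in this thread comes from the `touch`-equivalent step: the dummy file cannot be created (or opened) on the storage backend, so the bind-mount target never exists.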
It's curious, because other Docker containers (from mailcow) start fine and we find /dev/tty inside.
So with ZFS, same error:
root@docker1:~/mailcow-dockerized# docker-compose up -d
WARNING: The WATCHDOG_NOTIFY_EMAIL variable is not set. Defaulting to a blank string.
mailcowdockerized_mysql-mailcow_1 is up-to-date
mailcowdockerized_sogo-mailcow_1 is up-to-date
Starting mailcowdockerized_ipv6nat_1 ...
mailcowdockerized_unbound-mailcow_1 is up-to-date
Starting mailcowdockerized_dockerapi-mailcow_1 ...
mailcowdockerized_clamd-mailcow_1 is up-to-date
mailcowdockerized_redis-mailcow_1 is up-to-date
mailcowdockerized_watchdog-mailcow_1 is up-to-date
mailcowdockerized_dovecot-mailcow_1 is up-to-date
mailcowdockerized_memcached-mailcow_1 is up-to-date
mailcowdockerized_postfix-mailcow_1 is up-to-date
mailcowdockerized_php-fpm-mailcow_1 is up-to-date
mailcowdockerized_nginx-mailcow_1 is up-to-date
Starting mailcowdockerized_ipv6nat_1 ... error
mailcowdockerized_rspamd-mailcow_1 is up-to-date
Creating mailcowdockerized_netfilter-mailcow_1 ...
Starting mailcowdockerized_dockerapi-mailcow_1 ... error
Creating mailcowdockerized_netfilter-mailcow_1 ... error
ERROR: for mailcowdockerized_netfilter-mailcow_1 Cannot start service netfilter-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\\"open /var/lib/docker/vfs/dir/11c54a8152e8e632ad4c767768a4dbad3cc601cba3895a7b2f42eedd2c82e23b/dev/tty: no such device or address\\\"\"": unknown
ERROR: for ipv6nat Cannot start service ipv6nat: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\\"open /var/lib/docker/vfs/dir/e651ce79444c1ed6acd3148a76bc157187d69f432ae9517218382e1e6c5323de/dev/tty: no such device or address\\\"\"": unknown
ERROR: for dockerapi-mailcow Cannot start service dockerapi-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 10\"": unknown
ERROR: for netfilter-mailcow Cannot start service netfilter-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\\"open /var/lib/docker/vfs/dir/11c54a8152e8e632ad4c767768a4dbad3cc601cba3895a7b2f42eedd2c82e23b/dev/tty: no such device or address\\\"\"": unknown
ERROR: Encountered errors while bringing up the project.
Hm, but it works fine in a standard Docker container, so this is mighty weird. What's confusing me is that the ENODEV would also only happen if there were no controlling terminal.
Please attach to the container and show me the output of:
ls -al /proc/self/fd/
From the LXD container:
root@docker1:~# ls -al /proc/self/fd/
total 0
dr-x------ 2 root root 0 May 4 15:04 .
dr-xr-xr-x 9 root root 0 May 4 15:04 ..
lrwx------ 1 root root 64 May 4 15:04 0 -> /dev/pts/0
lrwx------ 1 root root 64 May 4 15:04 1 -> /dev/pts/0
lrwx------ 1 root root 64 May 4 15:04 2 -> /dev/pts/0
lr-x------ 1 root root 64 May 4 15:04 3 -> /proc/12289/fd
I don't know why a mailcow Docker container like clamd starts normally and has /dev/tty
root@docker1:~/mailcow-dockerized# docker-compose exec clamd-mailcow ls -l /dev
WARNING: The WATCHDOG_NOTIFY_EMAIL variable is not set. Defaulting to a blank string.
total 0
crw-rw---- 1 root tty 136, 0 May 4 13:01 console
lrwxrwxrwx 1 root root 11 May 3 22:53 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 May 3 22:53 fd -> /proc/self/fd
crw-rw-rw- 1 nobody nobody 1, 7 May 3 22:51 full
drwxrwxrwt 2 root root 40 May 3 22:53 mqueue
crw-rw-rw- 1 nobody nobody 1, 3 May 3 22:51 null
lrwxrwxrwx 1 root root 8 May 3 22:53 ptmx -> pts/ptmx
drwxr-xr-x 2 root root 0 May 3 22:53 pts
crw-rw-rw- 1 nobody nobody 1, 8 May 3 22:51 random
drwxrwxrwt 2 root root 40 May 3 22:53 shm
lrwxrwxrwx 1 root root 15 May 3 22:53 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 May 3 22:53 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 May 3 22:53 stdout -> /proc/self/fd/1
crw-rw-rw- 1 nobody nobody 5, 0 May 4 13:04 tty
crw-rw-rw- 1 nobody nobody 1, 9 May 3 22:51 urandom
crw-rw-rw- 1 nobody nobody 1, 5 May 3 22:51 zero
but the netfilter container does not...
Two of the three containers that won't start have "privileged: true" in the docker-compose.yml file.
OK, so when I comment out the "privileged: true" from the netfilter service in the docker-compose.yml file, it starts!
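For reference, the change amounts to disabling that one flag in the service definition (the stanza below is illustrative, not the exact mailcow file):

```yaml
services:
  netfilter-mailcow:
    # privileged: true   # commented out: fails inside an unprivileged LXD container
```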
> Two of the three containers that won't start have "privileged: true" in the docker-compose.yml file.
Yeah, that explains it! I suspect that in this case they might try to create actual device nodes instead of bind-mounting them. It's a very misleading error though.
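The difference can be demonstrated with a plain mknod(2) attempt; this is a sketch, assuming you run it as an unprivileged user or inside a user namespace (the path is illustrative):

```shell
# Privileged containers make runC create real device nodes with mknod(2);
# in a user namespace (or as any unprivileged user) that call is denied,
# while the unprivileged path (dummy file + bind-mount) still works.
if mknod /tmp/demo-tty c 5 0 2>/dev/null; then
    echo "mknod succeeded (we have CAP_MKNOD in the initial user namespace)"
    rm -f /tmp/demo-tty
else
    echo "mknod denied (what runC runs into for privileged containers here)"
fi
```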
Yes, you can reproduce it easily by running:
docker run --privileged hello-world
Yeah, they should either just refuse to run privileged when in a user namespace or adhere to the user namespace restrictions. In any case, it's nothing that LXD itself can do, so I'm closing if you don't mind. :)
Hmm, so it's not possible to run a privileged Docker container in an unprivileged LXD container. It seems obvious now!
And for the last container that fails to start with this error:
ERROR: for dockerapi-mailcow Cannot start service dockerapi-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 10\"": unknown
Do you think that is the same problem?
On Fri, May 04, 2018 at 06:55:08AM -0700, Jeremie Dokime wrote:
> And for the last container that fails to start with this error:
> ERROR: for dockerapi-mailcow Cannot start service dockerapi-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 10\"": unknown
> Do you think that is the same problem?
Is it a privileged container too?
No, it's not privileged.
Can you show me the configuration for that specific container?
Issue description
I'm trying to run the https://mailcow.email/ Docker stack in an LXD container and I get this error:
ERROR: for netfilter-mailcow Cannot start service netfilter-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\\"open /var/lib/docker/btrfs/subvolumes/77cc0dd8362cc94816957585d98e7ca50dfa2ebcfeceea2f7cb47ea79a7ee7e7/dev/tty: no such device or address\\\"\"": unknown
Information to attach
Starting 86142ef4fcb0_mailcowdockerized_netfilter-mailcow_1 ... error
Recreating a80fe0eeb0b4_mailcowdockerized_dockerapi-mailcow_1 ... error
ERROR: for a80fe0eeb0b4_mailcowdockerized_dockerapi-mailcow_1 Cannot start service dockerapi-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 10\"": unknown
ERROR: for ipv6nat Cannot start service ipv6nat: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\"open /var/lib/docker/btrfs/subvolumes/7d47eb7631eb7d4fa8a9e675107d6c54ed4a03852c4ce988c354e73d48023afd/dev/tty: no such device or address\\"\"": unknown
ERROR: for netfilter-mailcow Cannot start service netfilter-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:70: creating device nodes caused \\"open /var/lib/docker/btrfs/subvolumes/77cc0dd8362cc94816957585d98e7ca50dfa2ebcfeceea2f7cb47ea79a7ee7e7/dev/tty: no such device or address\\"\"": unknown
ERROR: for dockerapi-mailcow Cannot start service dockerapi-mailcow: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 10\"": unknown
ERROR: Encountered errors while bringing up the project.