Closed: jmou closed this issue 4 years ago.
I've had this issue too, only in rootless. Restarting the container usually helps, but not always. It also applies to pods themselves. To fix it in a pod, I have to restart the pod.
Edit:
```yaml
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.2
host:
  BuildahVersion: 1.14.3
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.14-1.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.14, commit: 083a0be12178013d44ff51ceda3090ea741b6516'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  MemFree: 182669516800
  MemTotal: 270157025280
  OCIRuntime:
    name: crun
    package: crun-0.13-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 34324082688
  SwapTotal: 34324082688
  arch: amd64
  cpus: 12
  eventlogger: journald
  hostname: xena
  kernel: 5.5.11-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 120h 1m 19.5s (Approximately 5.00 days)
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/jens/.config/containers/storage.conf
  ContainerStore:
    number: 56
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.8-1.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.8
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/jens/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 61
  RunRoot: /run/user/1001
  VolumePath: /home/jens/.local/share/containers/storage/volumes
```
@giuseppe @AkihiroSuda I feel like we may have fixed this one already?
Having the same issue. I'm starting a shell in an interactive container (`podman run -it ...`) and from there starting the development web server of my web framework. If I have to kill the development server (Ctrl+C) and restart it, it's impossible to connect to the server again. The only thing that helps is restarting the container.
A friendly reminder that this issue had no activity for 30 days.
@giuseppe @AkihiroSuda Any comment on this one?
Maybe fixed in https://github.com/containers/libpod/pull/5183 ?
Yes, I think it is fixed upstream; there were a few fixes in rootlessport related to it.
@ingobecker @jmou could you verify if the issue still persists in 1.9?
It's fixed. Thanks for the reply.
It's fixed for me, thanks!
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I have been having this issue for a while and finally found an isolated reproduction. In certain situations, a published port may refuse connections until the container is restarted, even if there is a container process listening. This appears to happen if connections are attempted to the port before there is a listener. In my testing, at least two connections must be attempted to the port before the listener connects.
Steps to reproduce the issue:
In one terminal:
In a second terminal:
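The original command blocks appear to have been lost in extraction. Based on the description below, a reproduction would look roughly like this; the image, port number, and exact `nc` flags are illustrative assumptions, not the reporter's original commands:

```shell
# Terminal 1 (hypothetical reconstruction): publish a port, but sleep
# before anything listens on it inside the container.
podman run --rm -p 8080:8080 alpine sh -c 'sleep 10; echo hi | nc -l -p 8080'

# Terminal 2: attempt connections while the container is still sleeping,
# then once more after the listener should be up.
nc 127.0.0.1 8080   # during sleep: fails (connection reset)
nc 127.0.0.1 8080   # during sleep: fails (connection refused)
sleep 10
nc 127.0.0.1 8080   # should print "hi", but with the bug it fails too
```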
Describe the results you received:
All netcats fail, there is no output in the first terminal, and the first terminal does not quit on Ctrl+C.
Describe the results you expected:
The first two netcats are expected to fail because the first terminal's netcat is not yet listening. However, the third netcat should succeed, and `hi` should be output.
I have tested this with Docker, and the results are what I expect: the first two netcats fail, and I only see output on the third netcat:
Additional information you deem important (e.g. issue happens only occasionally):
I am only able to reproduce this if at least two of the netcats are run while the first terminal is sleeping. If only one (or zero) netcats are run while sleeping, everything behaves as expected. Also perhaps of note is the first error is connection reset and the second and subsequent errors are connection refused.
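The "reset" vs. "refused" distinction above is diagnostic: "connection refused" means the SYN was answered with RST (nothing accepting on that port), while "connection reset" means something accepted the connection and then dropped it, pointing at the port forwarder rather than the container process. A small Python probe can classify the two failure modes; the host and port here are arbitrary examples, not taken from the report:

```python
import socket

def probe(host: str, port: int) -> str:
    """Attempt a TCP connect and classify the failure mode."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((host, port))
        return "connected"
    except ConnectionRefusedError:
        # RST in response to SYN: nothing is accepting on this port.
        return "refused"
    except ConnectionResetError:
        # The peer accepted the connection, then sent RST: a forwarder
        # accepted and dropped us.
        return "reset"
    finally:
        s.close()

# Port 1 on localhost is almost never open, so this normally reports "refused".
print(probe("127.0.0.1", 1))
```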
I know slirp4netns is related to podman networking, but I don't know enough about the internals to be sure where the bug is, so I reported it here. I am using rootless podman.
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
physical