fedinskiy closed this issue 1 year ago.
Trying ::1...
There's your problem. Localhost forwarding only works with v4, not v6 (the kernel doesn't have support for it; not sure why, but it doesn't seem likely to be added). We're discussing potentially adding a process to perform such forwarding manually in the future, but for now, using 127.0.0.1 instead of localhost should work.
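To make the v4/v6 point concrete, here is a minimal, podman-independent sketch (my own illustration, not from the thread): a service bound only to the IPv4 loopback accepts connections on 127.0.0.1 but refuses them on ::1, so a client whose resolver tries ::1 for "localhost" first sees "Connection refused" even though the service is up. The port is ephemeral and arbitrary.

```python
# Sketch: an IPv4-only loopback listener refuses IPv6 (::1) connections.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # IPv4 loopback only, ephemeral port
server.listen(1)
port = server.getsockname()[1]

# IPv4 connect succeeds (connect_ex returns 0 on success):
c4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("v4:", c4.connect_ex(("127.0.0.1", port)))   # 0
c4.close()

# IPv6 connect to the same port fails (non-zero errno, ECONNREFUSED on Linux):
c6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
print("v6:", c6.connect_ex(("::1", port)))
c6.close()
server.close()
```

The same asymmetry applies to a port forwarder that only handles IPv4.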
@mheon the very same thing happens with 127.0.0.1:
$ telnet 127.0.0.1 50000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Can you access the port from other systems?
And does accessing from the same system, but via one of the public IPs (instead of localhost), work?
Public IP: no (but it worked for docker). Other system: I could try it in ~15 hours, if you think that could lead to a different result.
Any chance you can try with the Netavark network backend instead of CNI?
[fedinskiy@localhost tests]$ telnet 10.XXX.YY.ZZ 50000
Trying 10.XXX.YY.ZZ...
Connected to 10.XXX.YY.ZZ.
Escape character is '^]'.
Connection closed by foreign host.
[fedinskiy@localhost tests]$ telnet 10.XXX.YY.ZZ 50001 # I started docker with -p 50001:50000
Trying 10.XXX.YY.ZZ...
Connected to 10.XXX.YY.ZZ.
Escape character is '^]'.
^]
telnet> Connection closed.
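For the record, the failing transcript above ("Connected ... Connection closed by foreign host.") means the TCP handshake succeeds but the peer then closes immediately. A small Python sketch (my own illustration, not from the thread) shows how to detect that pattern without telnet: after a successful connect, recv() immediately returns b"".

```python
# Sketch: a server that accepts and instantly closes (like the failing
# DB2 case here) yields connect success followed by immediate EOF.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def accept_and_close():
    conn, _ = server.accept()
    conn.close()               # immediate close, like the failing case

threading.Thread(target=accept_and_close).start()

client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1)          # blocks until the peer's FIN arrives
print("connected, then got:", data)   # b'' = peer closed without sending
client.close()
server.close()
```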
I can probably do it with any backend. Is there a manual for Netavark?
UPD: by "manual" I mean "how to switch the backend from CNI to Netavark".
Do you run rootless? If so, cni/netavark doesn't matter since it is not used by default.
You can try a different port forwarder with --network slirp4netns:port_handler=slirp4netns. Please also try 127.0.0.1 instead of localhost; port forwarding with ::1 will not work when run rootless.
To switch from cni to netavark you just have to run podman system reset (this deletes all containers/images). Assuming netavark is installed, the next podman command should switch automatically to netavark, i.e. just run podman info again.
$ sudo dnf in netavark
Last metadata expiration check: 0:00:17 ago on Tue 08 Nov 2022 09:03:51 CET.
Package netavark-1.1.0-1.fc36.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
$ podman system reset
WARNING! This will remove:
- all containers
- all pods
- all images
- all networks
- all build cache
- all machines
- all volumes
Are you sure you want to continue? [y/N] y
$ podman info
host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.4-3.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: '
  cpuUtilization:
    idlePercent: 91.88
    systemPercent: 1.27
    userPercent: 6.86
  cpus: 8
  distribution:
    distribution: fedora
    variant: workstation
    version: "36"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.5-200.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 23608066048
  memTotal: 33395286016
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.6-2.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.6
      commit: 18cf2efbb8feb2b2f20e316520e0fd0b6c41ef4d
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 0h 25m 23.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  registry.access.redhat.com:
    Blocked: false
    Insecure: true
    Location: registry.access.redhat.com
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: registry.access.redhat.com
    PullFromMirror: ""
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - quay.io
store:
  configFile: /home/fedinskiy/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/fedinskiy/.local/share/containers/storage
  graphRootAllocated: 510965841920
  graphRootUsed: 143881043968
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/fedinskiy/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.0
  Built: 1666340195
  BuiltTime: Fri Oct 21 10:16:35 2022
  GitCommit: ""
  GoVersion: go1.18.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.0
Result is the same
podman run -e DB2INSTANCE=test \
-e DB2INST1_PASSWORD=test \
-e DBNAME=amadeus \
-e AUTOCONFIG=false \
-e ARCHIVE_LOGS=false \
-e LICENSE=accept \
--network slirp4netns:port_handler=slirp4netns \
--privileged=true \
-p 50000:50000 docker.io/ibmcom/db2:11.5.7.0a
It did not work either.
Anything else I can help with?
Looks like you're running rootless - so this is a slirp/rootlessport issue. Never seen both of them fail before.
@giuseppe @AkihiroSuda Any thoughts?
@fedinskiy Can you try running the same as root? If it fails there too, it is likely something with your application. Did you try other container images, e.g. nginx?
@Luap99
Running as root succeeded:
sudo podman run -e DB2INSTANCE=test -e DB2INST1_PASSWORD=test -e DBNAME=amadeus -e AUTOCONFIG=false -e ARCHIVE_LOGS=false -e LICENSE=accept --privileged=true -p 50000:50000 docker.io/ibmcom/db2:11.5.7.0a
Other images (including postgres and mysql) work fine without sudo. On the other hand, all of them (including db2) also work fine on docker.
Which one do you mean by "my application"? DB2 or telnet? :)
DB2; it must expect something special about the socket then. The default port forwarder causes a different source IP, but with slirp4netns:port_handler=slirp4netns it should work correctly.
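A hedged illustration of the source-IP point (my own sketch, not from the thread): the server observes the client's source address at accept() time, and an application that validates that address, as DB2 may do, can drop connections arriving from an unexpected one. This local demo only shows where the address is observed; the forwarder-specific rewriting is what the comment above describes.

```python
# Sketch: the server records the source address of an incoming connection.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
seen = []

def accept_once():
    conn, peer = server.accept()   # peer = (source IP, source port)
    seen.append(peer[0])
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
t.join()
client.close()
server.close()

print("server saw source IP:", seen[0])   # 127.0.0.1 in this local demo
```

With the default rootless port handler, a containerized server sees a rewritten source address here instead of the real client IP.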
Does it work if you run with --network=host?
@rhatdan it doesn't:
podman run -e DB2INSTANCE=test -e DB2INST1_PASSWORD=test -e DBNAME=amadeus -e AUTOCONFIG=false -e ARCHIVE_LOGS=false -e LICENSE=accept --network=host --privileged=true -p 50000:50000 docker.io/ibmcom/db2:11.5.7.0a
$ telnet 127.0.0.1 50000
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Please run podman unshare nsenter --net=$(podman container inspect --format {{.NetworkSettings.SandboxKey}} <container NAME/ID>) ss -tulpn.
I tried running this image and it is just not listening on port 50000 at all.
Check with podman logs <container NAME/ID>; I see a ton of db-setup-related errors. I assume the image cannot be run rootless, or at least they have to fix the image. I doubt that podman is doing anything wrong here.
@Luap99
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e001a8b710e docker.io/ibmcom/db2:11.5.7.0a 3 minutes ago Up 3 minutes ago zen_keller
$ podman container inspect --format {{.NetworkSettings.SandboxKey}} zen_keller
$
You only have the SandboxKey when you run with a private network namespace, not with --network=host. But as I said before, this is most likely the application misbehaving, not podman. Check your logs.
A friendly reminder that this issue had no activity for 30 days.
Closing since I don't think it is a podman issue but rather how the container image works.
/kind bug
Description
If I start the official DB2 container via docker, I can connect to it via a JDBC connector or telnet. If I start it via podman, I receive a "Connection refused" error or its analogues.
Steps to reproduce the issue:
telnet localhost 50000
Describe the results you received:
or, if connecting via jdbc:
Describe the results you expected:
If the application is started via docker (replace podman with docker in the script above), everything works fine.
Telnet example
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info:
Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.): This issue can be reproduced both in Fedora 36 on a laptop and in RHEL 8 on a VM