Closed by rm3l, 1 year ago
🤔 Same behavior with the java-springboot and java-quarkus Devfiles.
FWIW, I just discovered the `podman port` command and noticed we had several port mappings for each port. Not sure whether this explains the issue, that said:
```console
☸ kind-local-k8s-cluster in ~/w/t/6510-unable-to-access-the-forwarded-debug-port-of-the-node.js-starter-project-on-podman on main [!] via ☕ v17.0.5 on ☁️
$ podman port -a
aede5e4daea4 8080/tcp -> 0.0.0.0:40001
aede5e4daea4 5858/tcp -> 0.0.0.0:40002
bcafb172ed0b 8080/tcp -> 0.0.0.0:40001
bcafb172ed0b 5858/tcp -> 0.0.0.0:40002
```
As noticed by @feloy, it works if the application listens on all interfaces (`0.0.0.0`), which is not the case for most apps running in debug mode (by default, `node --inspect=$DEBUG_PORT` will listen on the loopback interface at `127.0.0.1:$DEBUG_PORT`). And Podman does not seem to forward host ports to such container ports.
This behavior might also happen if the main application listens on localhost.
TODO (Scope of this issue after team discussion on 2023-01-23):
Summary of investigations
Per https://github.com/containers/podman/issues/17353#issuecomment-1416319137, this behavior is intentional on Podman: applications that are bound to the loopback interface cannot be reached using `hostPort`.
I noticed we have the same issue when publishing ports with either Docker or Podman, e.g.:

```console
# On the host, localhost:20001 is reachable
$ docker container run --rm -p 20001:9000 -t quay.io/redhatworkshops/simple-python-web \
    /usr/bin/python3 -m http.server 9000 --bind 0.0.0.0

# On the host, localhost:20001 is not reachable
$ docker container run --rm -p 20001:9000 -t quay.io/redhatworkshops/simple-python-web \
    /usr/bin/python3 -m http.server 9000 --bind 127.0.0.1
```
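The same distinction can be reproduced without any containers. A minimal sketch in Python (my own illustration, assuming a Linux host, where every `127.0.0.0/8` address is loopback, so `127.0.0.2` can stand in for "another interface"; ports are arbitrary):

```python
import socket

def serve(bind_addr, port):
    """Open a listening TCP socket bound to bind_addr (connections queue in the backlog)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(5)
    return srv

def reachable(host, port):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        socket.create_connection((host, port), timeout=1).close()
        return True
    except OSError:
        return False

# Bound to all interfaces: reachable through any local address.
all_ifaces = serve("0.0.0.0", 9100)
print(reachable("127.0.0.2", 9100))  # True on Linux
all_ifaces.close()

# Bound to loopback only: the very same connection attempt is refused.
# This is what Podman's hostPort mapping runs into.
loop_only = serve("127.0.0.1", 9101)
print(reachable("127.0.0.2", 9101))  # False
loop_only.close()
```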
Kubernetes port forwarding seems to work differently: according to the corresponding design doc, it is currently specified in the Container Runtime Interface (CRI). Before that, it used to be implemented by the kubelet on the node, using `nsenter` (to enter the network namespace) and `socat` (for the actual port-forwarding).
This is also documented in the OpenShift docs: https://docs.openshift.com/container-platform/4.12/nodes/containers/nodes-containers-port-forwarding.html#nodes-containers-port-forwarding-about_nodes-containers-port-forwarding
Now, most CRI implementations I have seen (like containerd or cri-o) forward a stream inside the network namespace to a specific port.
For now, as agreed, I'm documenting this as a difference between K8s and Podman: users would need to explicitly bind their applications to `0.0.0.0` (or any public interface of the container) in such cases.
Meanwhile, I have explored alternatives that would not require users to change how they bind their applications to network interfaces:
- pasta (introduced in Podman 4.4), but this is not supported for `kube play` commands.
- `podman kube play --net=ns:/proc/.../`, but again, this is not supported for `kube play` commands.
- Trying something similar to the way Kubernetes does it, using `socat`. For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    #ports:
    #- containerPort: 80
    #  hostPort: 8888
  - name: python-web-container
    image: quay.io/redhatworkshops/simple-python-web:latest
    command: [ '/usr/bin/python3', '-m', 'http.server', '9000', '--bind', '127.0.0.1' ]
    ports:
    - containerPort: 9000
      # This won't work
      hostPort: 19000
  - name: python-web-container-socat
    image: quay.io/devfile/base-developer-image:ubi8-latest
    command: [ '/bin/sh', '-c', 'socat -d -d tcp-listen:20002,reuseaddr,fork tcp:localhost:9000' ]
    ports:
    - containerPort: 20002
      # This works!
      hostPort: 20002
```
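Outside of a pod, the behavior of that `socat` side container can be sketched in a few lines of Python. This is my own stand-in, not odo code: it accepts on all interfaces and forwards each connection to a loopback-only target, like `socat tcp-listen:...,reuseaddr,fork tcp:localhost:...` (ports and helper names are illustrative):

```python
import socket
import threading

def _pump(src, dst):
    """Copy bytes one way until EOF, then close the write side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_relay(listen_port, target_port, target_host="127.0.0.1"):
    """Accept on 0.0.0.0:listen_port and relay each connection to the loopback-only target."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)

    def accept_loop():
        while True:
            try:
                client, _ = srv.accept()
            except OSError:  # relay socket was closed
                return
            upstream = socket.create_connection((target_host, target_port))
            # One thread per direction, like socat's bidirectional stream.
            threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pump, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv  # call close() to stop accepting
```

With a server bound to `127.0.0.1:9000` and `start_relay(20002, 9000)`, connections to `:20002` on any interface reach the loopback-only server, which is exactly what the side container does for the pod.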
~~This is a supposed blocker for moving podman out of experimental mode.~~ No longer looking at it as a blocker. It is a generic problem with Devfiles that needs more testing.
Cabal call [7th Feb '23]
- Stage 1 / quick solution: use `socat` inside the container; check whether the images in the Devfile registry have the binary; this should work for most users.
- Stage 2: see how K8s does port forwarding and try to implement the same for Podman.
- Optional: ask Mohit to talk with the Podman team about implementing port forwarding if it doesn't exist already.
> Stage 1/Quick sol: Use socat inside the container; check if the images in the Devfile registry has the binary; should work for most of the user

As we can see below, the `socat` binary is not available in any of the images in the Devfile registry stacks.
Devfile Stack | Container Image | Has `socat`?
---|---|---
dotnet50 | registry.access.redhat.com/ubi8/dotnet-50:5.0 | No |
dotnet60 | registry.access.redhat.com/ubi8/dotnet-60:6.0 | No |
dotnetcore31 | registry.access.redhat.com/ubi8/dotnet-31:3.1 | No |
nodejs-angular | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
python-django | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
go | registry.access.redhat.com/ubi9/go-toolset:latest | No |
php-laravel | quay.io/devfile/composer:2.4 | No |
java-maven | registry.access.redhat.com/ubi8/openjdk-11:latest | No |
nodejs-nextjs | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
nodejs | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
nodejs-nuxtjs | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
java-openliberty-gradle | icr.io/appcafe/open-liberty-devfile-stack:22.0.0.1-gradle | No |
java-openliberty | icr.io/appcafe/open-liberty-devfile-stack:22.0.0.1 | No |
python | registry.access.redhat.com/ubi9/python-39:latest | No |
java-quarkus | registry.access.redhat.com/ubi8/openjdk-17 | No |
nodejs-react | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
java-springboot | registry.access.redhat.com/ubi8/openjdk-11:latest | No |
nodejs-svelte | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
java-vertx | quay.io/eclipse/che-java11-maven:next | No |
nodejs-vue | registry.access.redhat.com/ubi8/nodejs-16:latest | No |
java-websphereliberty-gradle | icr.io/appcafe/websphere-liberty-devfile-stack:22.0.0.1-gradle | No |
java-websphereliberty | icr.io/appcafe/websphere-liberty-devfile-stack:22.0.0.1 | No |
java-wildfly-bootable-jar | registry.access.redhat.com/ubi8/openjdk-11 | No |
java-wildfly | quay.io/wildfly/wildfly-centos7:26.1 | No |
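A check of this kind can also be scripted. The sketch below is my own (the `socat_check_cmd` and `has_socat` helpers are hypothetical names, and actually calling `has_socat` requires podman on the local machine):

```python
import subprocess

def socat_check_cmd(image):
    """Build the podman command that tests whether `socat` is on PATH inside an image."""
    return ["podman", "run", "--rm", "--entrypoint", "sh", image,
            "-c", "command -v socat"]

def has_socat(image):
    """Run the probe; exit code 0 means `socat` was found (requires podman locally)."""
    return subprocess.run(socat_check_cmd(image), capture_output=True).returncode == 0
```

Per the table above, `has_socat("registry.access.redhat.com/ubi8/nodejs-16:latest")` would be expected to return `False`.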
/retitle On Podman, unable to access local ports forwarded to container applications listening on the loopback interface
Retitling, now that we understand what the actual issue is.
> This is a supposed blocker for moving podman out of experimental mode. Cabal call [7th Feb '23]
What are the arguments in favor of making this issue a blocker?
Arguments against are:
(1) As I understand, we want to introduce the Podman backend for odo to lower the barrier to entry for developing with odo. If we increase the complexity of the containers deployed on Podman (by adding sidecar containers, etc., with the associated security risks), this could be counter-productive.
> This is a supposed blocker for moving podman out of experimental mode. Cabal call [7th Feb '23]
>
> What are the arguments in favor of making this issue a blocker?
I understand that there are security concerns when doing this on the developer machine, as explained for example here: https://nodejs.org/fr/docs/guides/debugging-getting-started/#security-implications
I don't know much about networking inside containers/Podman, so I am not sure whether those concerns also apply inside a Podman container.
> This is a supposed blocker for moving podman out of experimental mode. Cabal call [7th Feb '23]
>
> What are the arguments in favor of making this issue a blocker?
One of the arguments for me was, from the Adapters perspective, making sure they could use the same current Devfiles to debug on either Podman or Kubernetes, with no changes to the Devfile. But I understand your points. As we discussed earlier today, it makes sense to me that the application listens on all interfaces; but, for security purposes, a debugger should listen on localhost by default, even more importantly when running in a shared cluster, since any Pod can communicate with any other Pod by design (knowing the Pod IP address). That could be an argument for running different commands on Podman vs. a cluster, but I am not sure about the increased complexity of maintaining different commands.
Arguments against are:
- Applications will need to run in production exposing the port on all interfaces, not only on localhost, to make the application accessible through a Kubernetes Service. If the app listens only on localhost, it won't be accessible through a Service (and so, neither through an Ingress/Route).

The remaining technical issue is that the debugger would have to listen on all interfaces. Are we sure this is a user issue (1)?
- For developers already developing on Podman (without odo), as the technical issue already exists for them, do they expose the debugger on all interfaces, or only on localhost with some workaround?
- For developers on Kubernetes, as it is possible to debug without exposing to all interfaces (thanks to port-forwarding), is it an issue to expose the debugger on all interfaces?
- As the configuration of the debugger resides in the Devfile, could it be acceptable to differentiate the debug command on Podman/Kubernetes, so that we do not increase the security risks on Kubernetes and keep listening on all interfaces only on Podman?

The configuration of the debugger could also be in the project code itself, as we saw with the Node.js stack; the debug command to run is still in the Devfile, that said.
> (1) As I understand, we want to introduce the Podman backend for odo to lower the barrier to entry for developing with odo. If we increase the complexity of the containers deployed on Podman (by adding sidecar containers, etc., with the associated security risks), this could be counter-productive.
I see all this as an internal implementation detail of how odo does port-forwarding (the same way Kubernetes does port-forwarding via the CRI implementation); IMO, most users do not need to know or care how it works under the covers. We are still lowering the barrier as long as they can start developing on Podman right away and then transition to K8s with no specific changes to their application.
But let's see @kadel's take on this.
The biggest issue is that because localhost works with the cluster, we have a lot of Devfiles leveraging this "feature".
- debugger on localhost
- debugger on 0.0.0.0
It will be confusing to have it working differently on one platform than the other. We need to make sure that it works the same way. Listening on localhost needs to work everywhere or nowhere.
This makes me think about how this works on DevSpaces. If an application opening ports only on localhost doesn't work on DevSpaces, then we don't have to worry about not being able to reach ports on Podman. But instead, we would have to restrict cluster port-forwarding to match how it works on DevSpaces and fix the Devfiles in the registry.
We have checked that applications listening on localhost currently work on DevSpaces. The Che Port extension detects processes listening on localhost, and then prompts a message, but still makes it possible to reach the port via a port-forwarder (see this function).
Summary of our discussions/brainstorming
- `odo dev` on Podman normally, as currently
- `/proc` filesystem
- `odo dev --platform podman --ignore-localhost` or `odo dev --platform podman --forward-localhost` (new flags to be added):
  - `odo dev --platform podman --ignore-localhost`: create the Pod with no side container; in this case, an application listening on localhost might not be reachable with Podman
  - `odo dev --platform podman --forward-localhost`: create the Pod with a side container that will do the port-forwarding using `socat`, as implemented in https://github.com/redhat-developer/odo/pull/6589

Done in the following PRs:
Later, we can turn those new flags into preferences, so users don't need to set them for each call.
/kind bug
/area dev
/area odo-on-podman
What versions of software are you using?
Operating System: Fedora 37, kernel 6.1.6-200.fc37.x86_64
Output of `odo version`: `odo v3.5.0 (8dbf42e5e)`
How did you run odo exactly?
As expected, trying to access the forwarded application port (40001 here) works correctly:
But trying to access the application on the forwarded debug port (40002 here) returns an error:
Actual behavior
Accessing the application on the forwarded debug port (40002 here) returns an error:
Expected behavior
When running `odo dev` against a cluster, we get the expected HTTP response from the forwarded application debug port. I was expecting the same behavior on Podman.
Any logs, error output, etc?
Output of `podman kube generate`:
```yaml
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.3.1
apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.kubernetes.cri-o.ContainerType/debug-nodejs-app-runtime: container
    io.kubernetes.cri-o.SandboxID/debug-nodejs-app-runtime: 269e89e69b091b501843a3c29f0578568cd91f76496c92da296c57cef27c62c
    io.podman.annotations.autoremove/debug-nodejs-app-runtime: "FALSE"
    io.podman.annotations.init/debug-nodejs-app-runtime: "FALSE"
    io.podman.annotations.privileged/debug-nodejs-app-runtime: "FALSE"
    io.podman.annotations.publish-all/debug-nodejs-app-runtime: "FALSE"
  creationTimestamp: "2023-01-18T15:17:30Z"
  labels:
    app: debug-nodejs-app
  name: debug-nodejs-app
spec:
  automountServiceAccountToken: false
  containers:
  - args:
    - tail
    - -f
    - /dev/null
    env:
    - name: PROJECTS_ROOT
      value: /projects
    - name: DEBUG_PORT
      value: "5858"
    - name: PROJECT_SOURCE
      value: /projects
    image: registry.access.redhat.com/ubi8/nodejs-16:latest
    name: debug-nodejs-app-runtime
    ports:
    - containerPort: 3000
      hostPort: 40001
    - containerPort: 5858
      hostPort: 40002
    resources:
      limits:
        memory: 1Gi
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /projects
      name: odo-projects-debug-nodejs-app-pvc
    - mountPath: /opt/odo/
      name: odo-shared-data-debug-nodejs-app-pvc
  enableServiceLinks: false
  hostname: debug-nodejs-app
  restartPolicy: Always
  volumes:
  - name: odo-projects-debug-nodejs-app-pvc
    persistentVolumeClaim:
      claimName: odo-projects-debug-nodejs-app
  - name: odo-shared-data-debug-nodejs-app-pvc
    persistentVolumeClaim:
      claimName: odo-shared-data-debug-nodejs-app
status: {}
```