containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

failed to pull image when use system proxy #14087

Closed: y0zong closed this issue 2 years ago

y0zong commented 2 years ago

Description

In China we have to use a proxy to access Docker Hub to pull images.

proxy setting (this works fine in Docker): http_proxy=socks5://127.0.0.1:7890

access test: registry-1.docker.io is reachable through the proxy

curl -vv registry-1.docker.io
* Uses proxy env variable http_proxy == 'socks5://127.0.0.1:7890'
*   Trying 127.0.0.1:7890...
* SOCKS5 connect to IPv4 34.237.244.67:80 (locally resolved)
* SOCKS5 request granted.
* Connected to 127.0.0.1 (127.0.0.1) port 7890 (#0)
> GET / HTTP/1.1
> Host: registry-1.docker.io
> User-Agent: curl/7.79.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< content-length: 0
< location: https://registry-1.docker.io/
< 
* Connection #0 to host 127.0.0.1 left intact

or (podman login also succeeds)

podman login docker.io -u y0zong
Password:

but the connection is refused when pulling an image:

podman-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
['podman', '--version', '']
using podman version: 4.0.2
 ** merged:
 {
  "_dirname": "/Users/orlowang/Projects/tools/matttermost",
  "version": "2.4",
  "services": {
    "postgres": {
      "container_name": "postgres_mattermost",
      "image": "postgres:13-alpine",
      "restart": "unless-stopped",
      "security_opt": [
        "no-new-privileges:true"
      ],
      "pids_limit": 100,
      "read_only": true,
      "tmpfs": [
        "/tmp",
        "/var/run/postgresql"
      ],
      "volumes": [
        "./volumes/db/var/lib/postgresql/data:/var/lib/postgresql/data"
      ],
      "environment": {
        "TZ": null,
        "POSTGRES_USER": null,
        "POSTGRES_PASSWORD": null,
        "POSTGRES_DB": null
      }
    },
    "mattermost": {
      "depends_on": [
        "postgres"
      ],
      "container_name": "mattermost",
      "image": "mattermost/mattermost-enterprise-edition:6.3",
      "restart": "unless-stopped",
      "security_opt": [
        "no-new-privileges:true"
      ],
      "pids_limit": 200,
      "read_only": "false",
      "tmpfs": [
        "/tmp"
      ],
      "volumes": [
        "./volumes/app/mattermost/config:/mattermost/config:rw",
        "./volumes/app/mattermost/data:/mattermost/data:rw",
        "./volumes/app/mattermost/logs:/mattermost/logs:rw",
        "./volumes/app/mattermost/plugins:/mattermost/plugins:rw",
        "./volumes/app/mattermost/client/plugins:/mattermost/client/plugins:rw",
        "./volumes/app/mattermost/bleve-indexes:/mattermost/bleve-indexes:rw"
      ],
      "environment": {
        "TZ": null,
        "MM_SQLSETTINGS_DRIVERNAME": null,
        "MM_SQLSETTINGS_DATASOURCE": null,
        "MM_BLEVESETTINGS_INDEXDIR": null,
        "MM_SERVICESETTINGS_SITEURL": null
      },
      "ports": [
        "8065:8065"
      ]
    }
  }
}
** excluding:  set()
['podman', 'network', 'exists', 'matttermost_default']
podman run --name=postgres_mattermost -d --security-opt no-new-privileges:true --read-only --label io.podman.compose.config-hash=123 --label io.podman.compose.project=matttermost --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=matttermost --label com.docker.compose.project.working_dir=/Users/orlowang/Projects/tools/matttermost --label com.docker.compose.project.config_files=docker-compose.yml,docker-compose.without-nginx.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=postgres -e TZ -e POSTGRES_USER -e POSTGRES_PASSWORD -e POSTGRES_DB --tmpfs /tmp --tmpfs /var/run/postgresql -v /Users/orlowang/Projects/tools/matttermost/volumes/db/var/lib/postgresql/data:/var/lib/postgresql/data --net matttermost_default --network-alias postgres --restart unless-stopped postgres:13-alpine
Resolving "postgres" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/postgres:13-alpine...
Error: initializing source docker://postgres:13-alpine: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused
exit code: 125
podman start postgres_mattermost
Error: no container with name or ID "postgres_mattermost" found: no such container
exit code: 125
['podman', 'network', 'exists', 'matttermost_default']
podman run --name=mattermost -d --security-opt no-new-privileges:true --read-only --label io.podman.compose.config-hash=123 --label io.podman.compose.project=matttermost --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=matttermost --label com.docker.compose.project.working_dir=/Users/orlowang/Projects/tools/matttermost --label com.docker.compose.project.config_files=docker-compose.yml,docker-compose.without-nginx.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=mattermost -e TZ -e MM_SQLSETTINGS_DRIVERNAME -e MM_SQLSETTINGS_DATASOURCE -e MM_BLEVESETTINGS_INDEXDIR -e MM_SERVICESETTINGS_SITEURL --tmpfs /tmp -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/config:/mattermost/config:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/data:/mattermost/data:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/logs:/mattermost/logs:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/plugins:/mattermost/plugins:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/client/plugins:/mattermost/client/plugins:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/bleve-indexes:/mattermost/bleve-indexes:rw --net matttermost_default --network-alias mattermost -p 8065:8065 --restart unless-stopped mattermost/mattermost-enterprise-edition:6.3
Resolving "mattermost/mattermost-enterprise-edition" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/mattermost/mattermost-enterprise-edition:6.3...
Error: initializing source docker://mattermost/mattermost-enterprise-edition:6.3: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused
exit code: 125
podman start mattermost
Error: no container with name or ID "mattermost" found: no such container
exit code: 125

key point in the error: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused

I don't know whether podman uses a ping result to decide connection status; ping can fail while the connection is still alive when using a SOCKS5 proxy (that's why I tested with curl -vv instead of ping).

Please help.

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug (maybe)

Steps to reproduce the issue:

  1. set system proxy (http_proxy)

  2. podman-compose start project

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version
Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.8

Built:      Wed Mar  2 22:04:36 2022
OS/Arch:    darwin/amd64

Server:       Podman Engine
Version:      4.0.3
API Version:  4.0.3
Go Version:   go1.18

Built:      Sat Apr  2 02:21:54 2022
OS/Arch:    linux/amd64

Output of podman info --debug:

podman info --debug
host:
  arch: amd64
  buildahVersion: 1.24.3
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    variant: coreos
    version: "36"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 502
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 5.17.3-300.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1363165184
  memTotal: 2066817024
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.4-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.4
      commit: 6521fcc5806f20f6187eb933f9f45130c86da230
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/502/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 3h 10m 6.18s (Approximately 0.12 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/502/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.3
  Built: 1648837314
  BuiltTime: Sat Apr  2 02:21:54 2022
  GitCommit: ""
  GoVersion: go1.18
  OsArch: linux/amd64
  Version: 4.0.3

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

macOS Monterey

openshift-ci[bot] commented 2 years ago

@y0zong: The label(s) kind/(maybe) cannot be applied, because the repository doesn't have them.

In response to [this](https://github.com/containers/podman/issues/14087).
vrothberg commented 2 years ago

Thanks for reaching out, @y0zong!

set system proxy (http_proxy)

Where do you set the proxy? Do you set it inside the podman machine?

y0zong commented 2 years ago

Where do you set the proxy? Do you set it inside the podman machine?

No, on the host (macOS in my case). Maybe there's no need to set the proxy inside the podman machine when the host is already behind the proxy?

Luap99 commented 2 years ago

The proxy is set correctly: proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused

The problem is that your proxy is listening on 127.0.0.1. 127.0.0.1 inside the VM is a different address, so the VM cannot reach the proxy on your actual host.

If you set http_proxy=socks5://host.containers.internal:7890 before you run podman machine init, it should work. Maybe podman should just rewrite 127.0.0.1 and localhost to host.containers.internal automatically when it copies the proxy value from the host.
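The rewrite suggested above can be sketched in shell. This is a hedged illustration, not podman's actual implementation: the port (7890) and the socks5 scheme come from the reporter's setup, and the substitution is plain `sed` before handing the value to `podman machine init`:

```shell
# Rewrite a loopback proxy address so the podman machine VM can reach
# the proxy that is actually listening on the host.
host_proxy="socks5://127.0.0.1:7890"
machine_proxy=$(printf '%s\n' "$host_proxy" | sed 's/127\.0\.0\.1/host.containers.internal/')
echo "$machine_proxy"   # socks5://host.containers.internal:7890

# Then initialize the machine with the rewritten value for this one command:
#   http_proxy="$machine_proxy" podman machine init
```

The key point is that `host.containers.internal` resolves to the host from inside the VM, while `127.0.0.1` resolves to the VM itself.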

y0zong commented 2 years ago

If you set http_proxy=socks5://host.containers.internal:7890 before you run podman machine init it should work.

Thanks for pointing this out, @Luap99. The remaining question: should I add 127.0.0.1 host.containers.internal to the hosts file on my host OS? Because http_proxy=socks5://host.containers.internal:7890 breaks my proxy, and the proxy still needs to work before podman machine init so that podman can pull the Fedora image.

I think it would be better if podman automatically mapped the host proxy value into the machine so it can be read correctly.

Luap99 commented 2 years ago

If you run http_proxy=socks5://host.containers.internal:7890 podman machine init, it changes the proxy variable only for that single command, not for your system.

y0zong commented 2 years ago

I just tested it and I think you are right, but I don't know why http_proxy=socks5://host.containers.internal:7890 podman machine init still hits the same error, while it works as expected when I add 127.0.0.1 host.containers.internal to my hosts file.

However, it works now.

Add the entry below to the hosts file:

127.0.0.1 host.containers.internal

and change the proxy to:

http_proxy=socks5://host.containers.internal:7890

Then everything works fine.
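The two workaround steps above can be sketched as shell commands. A temporary file stands in for /etc/hosts here so the sketch is safe to run as-is; editing the real hosts file requires root:

```shell
# Step 1: add a hosts entry mapping the stable name to the host's loopback.
# (Using a temp file for illustration; the real target is /etc/hosts.)
hosts_file="$(mktemp)"
printf '127.0.0.1 host.containers.internal\n' >> "$hosts_file"
grep -q 'host\.containers\.internal' "$hosts_file" && echo "hosts entry present"

# Step 2: point the proxy at that name instead of 127.0.0.1, so the same
# value works both on the host and inside the podman machine VM.
export http_proxy="socks5://host.containers.internal:7890"
echo "$http_proxy"   # socks5://host.containers.internal:7890

rm -f "$hosts_file"
```

With the hosts entry in place, the host resolves `host.containers.internal` to itself, while the VM resolves it to the host, so one proxy URL serves both sides.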

Many thanks @Luap99 for your help. I'm closing this issue since the problem is solved, but I still hope podman can some day do this itself, with no need to change the proxy setting.

towry commented 2 years ago

update:

I figured out this issue: it seems podman copies all of the current shell's environment into the machine when I run podman machine init or podman machine start. Anyway, I opened a new shell, made sure the http_proxy env var was not set, then stopped and started the podman machine, and everything works now 💯.
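The fix described above (starting the machine from a shell with no proxy variables) can also be done without opening a new shell, using `env -u` to strip the variable from the child process's environment. A minimal sketch:

```shell
# Simulate the problem: the current shell has a loopback proxy set.
export http_proxy="http://127.0.0.1:1081"

# `env -u NAME cmd` runs cmd with NAME removed from its environment.
# Demonstrate that the child process really does not see http_proxy:
env -u http_proxy sh -c 'echo "${http_proxy:-unset}"'   # prints: unset

# The same technique applied to the machine lifecycle (not run here):
#   env -u http_proxy -u https_proxy podman machine stop
#   env -u http_proxy -u https_proxy podman machine start
```

This avoids the machine inheriting a proxy address that only makes sense on the host.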

original post:

I don't have the $http_proxy variable set in my host zsh terminal but still hit this issue.

$> podman pull alpine

Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Error: initializing source docker://alpine:latest: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:1081: connect: connection refused

After I ssh into the machine:

> podman machine ssh

Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Fedora CoreOS 36.20221014.2.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

[core@localhost ~]$ echo $http_proxy
http://127.0.0.1:1081
[core@localhost ~]$

How did this happen? Why does the machine have http_proxy set by default?

yckbilly1929 commented 1 year ago

> Why does the machine have http_proxy set by default?

Faced the same issue, and figured out that it was set in /etc/systemd/system.conf.d/default-env.conf with the following default value, which seems to override the value I manually set in /etc/systemd/system.conf.d/10-default-env.conf as suggested here:

[Manager]
#Got from QEMU FW_CFG
DefaultEnvironment=http_proxy="http://127.0.0.1:1081" https_proxy="http://127.0.0.1:1081"
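Since systemd applies drop-ins in lexical order with later files winning, the observation above is consistent with `10-default-env.conf` sorting before `default-env.conf`. One possible countermeasure (an assumption on my part, not something confirmed in this thread; the filename is made up for illustration) is a drop-in named so it sorts after the QEMU-provided file and resets the list, since assigning an empty `DefaultEnvironment=` clears earlier assignments:

```ini
# /etc/systemd/system.conf.d/zz-clear-proxy.conf  (hypothetical filename)
# Sorts lexically after default-env.conf, so it is applied later.
[Manager]
# An empty assignment resets DefaultEnvironment, discarding the
# http_proxy/https_proxy values injected via QEMU fw_cfg.
DefaultEnvironment=
```

After adding such a file, a `systemctl daemon-reexec` (or a machine restart) would be needed for the manager to pick up the change.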