canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

lxd 3.0.0 - cannot create container : proxyconnect tcp: tls: oversized record received with length 20527 #4440

Closed by gaetanquentin 6 years ago

gaetanquentin commented 6 years ago

info

Ubuntu 16.04.4 LTS, LXD installed via snap

config:
  core.https_address: '[::]:8443'
  core.proxy_http: http://10.154.8.58:3128
  core.proxy_https: https://10.154.8.58:3128
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 10.150.61.47:8443
  - 10.150.56.252:8443
  - 172.18.10.1:8443
  - 172.18.11.1:8443
  - 172.18.12.1:8443
  - 172.18.13.1:8443
  - 172.17.0.1:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  certificate_fingerprint: 71261d282d756e652c08bb308c5dadf32f85e8807d229690347e0cda0ef1da2e
  driver: lxc
  driver_version: 3.0.0
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-116-generic
  server: lxd
  server_pid: 23518
  server_version: 3.0.0
  storage: btrfs
  storage_version: "4.4"
  server_clustered: false
  server_name: vipteam_lab.jmsp.prod

steps to reproduce

1. lxc image list images:centos/7/amd64
+-------------------+--------------+--------+---------------------------------+--------+---------+-------------------------------+
|       ALIAS       | FINGERPRINT  | PUBLIC |           DESCRIPTION           |  ARCH  |  SIZE   |          UPLOAD DATE          |
+-------------------+--------------+--------+---------------------------------+--------+---------+-------------------------------+
| centos/7 (3 more) | c28eb415b0e7 | yes    | Centos 7 amd64 (20180410_02:16) | x86_64 | 82.29MB | Apr 10, 2018 at 12:00am (UTC) |
+-------------------+--------------+--------+---------------------------------+--------+---------+-------------------------------+
2. lxc launch images:c28eb415b0e7 openshift-origin -p public

Creating openshift-origin
Error: Failed container creation: Get https://images.linuxcontainers.org/streams/v1/index.json: proxyconnect tcp: tls: oversized record received with length 20527

debug

DBUG[04-10|15:15:20] 
    {
        "architecture": "",
        "config": {},
        "devices": {},
        "ephemeral": false,
        "profiles": [
            "public"
        ],
        "stateful": false,
        "description": "",
        "name": "openshift-origin",
        "source": {
            "type": "image",
            "certificate": "",
            "alias": "c28eb415b0e7",
            "server": "https://images.linuxcontainers.org",
            "protocol": "simplestreams",
            "mode": "pull"
        },
        "instance_type": ""
    } 
DBUG[04-10|15:15:20] Got operation from LXD 
DBUG[04-10|15:15:20] 
    {
        "id": "7f3d8d2f-0fe6-48d5-b5cd-b6fa09416a1e",
        "class": "task",
        "description": "Creating container",
        "created_at": "2018-04-10T15:15:20.576339558Z",
        "updated_at": "2018-04-10T15:15:20.576339558Z",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "containers": [
                "/1.0/containers/openshift-origin"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": ""
    } 
DBUG[04-10|15:15:20] Sending request to LXD                   etag= method=GET url=http://unix.socket/1.0/operations/7f3d8d2f-0fe6-48d5-b5cd-b6fa09416a1e
DBUG[04-10|15:15:20] Got response struct from LXD 
DBUG[04-10|15:15:20] 
    {
        "id": "7f3d8d2f-0fe6-48d5-b5cd-b6fa09416a1e",
        "class": "task",
        "description": "Creating container",
        "created_at": "2018-04-10T15:15:20.576339558Z",
        "updated_at": "2018-04-10T15:15:20.576339558Z",
        "status": "Failure",
        "status_code": 400,
        "resources": {
            "containers": [
                "/1.0/containers/openshift-origin"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "Get https://images.linuxcontainers.org/streams/v1/index.json: proxyconnect tcp: tls: oversized record received with length 20527"
    } 
Error: Failed container creation: Get https://images.linuxcontainers.org/streams/v1/index.json: proxyconnect tcp: tls: oversized record received with length 20527
root@vipteam_lab:~# 
stgraber commented 6 years ago

Sounds like the server is unhappy when fetching index.json from the remote image server. The fact that a proxy is mentioned makes me wonder what the server config looks like. Can you paste the output of lxc config show?

gaetanquentin commented 6 years ago

lxc config show

  config:
    core.https_address: '[::]:8443'
    core.proxy_http: http://10.154.8.58:3128
    core.proxy_https: https://10.154.8.58:3128
    core.trust_password: true

https_proxy="https://10.154.8.58:3128" wget https://images.linuxcontainers.org/streams/v1/index.json

  --2018-04-10 17:46:37--  https://images.linuxcontainers.org/streams/v1/index.json
  Connecting to 10.154.8.58:3128... connected.
  Proxy request sent, awaiting response... 301 Moved Permanently
  Location: https://uk.images.linuxcontainers.org/streams/v1/index.json [following]
  --2018-04-10 17:46:38--  https://uk.images.linuxcontainers.org/streams/v1/index.json
  Connecting to 10.154.8.58:3128... connected.
  Proxy request sent, awaiting response... 200 OK
  Length: 3052 (3,0K) [application/json]
  Saving to: ‘index.json’

stgraber commented 6 years ago

Proxy URLs should be http:// even for https.

Try changing your core.proxy_https to be the same value as core.proxy_http
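For reference, the change can be applied with something like the following (a sketch reusing the proxy address from the config pasted above):

    lxc config set core.proxy_https http://10.154.8.58:3128
    # confirm the new value
    lxc info | grep core.proxy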

gaetanquentin commented 6 years ago

Is that new with 3.0.0? I haven't changed this configuration.

It was working fine with 2.21 (I upgraded recently).

stgraber commented 6 years ago

That shouldn't have changed. I remember seeing such behavior in the past, but sometimes you get lucky and your proxy is smart enough to do TLS as needed. Assuming the proxy in question is squid3 or similar, though, the client's connection to the proxy isn't supposed to use TLS: for an https URL, the client sends a CONNECT request over the plain proxy connection to get a raw tunnel to the target, and only then starts TLS with the target through that tunnel.
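To illustrate (a hypothetical check, not LXD's own code path), curl shows this behaviour when given an http:// proxy URL for an https target:

    # reusing the proxy address from the config above
    curl -v -x http://10.154.8.58:3128 https://images.linuxcontainers.org/streams/v1/index.json -o /dev/null
    # the verbose output should show roughly:
    #   > CONNECT images.linuxcontainers.org:443 HTTP/1.1
    #   < HTTP/1.1 200 Connection established
    # followed by the TLS handshake with the target. With an https:// proxy URL instead,
    # the client tries to speak TLS to the proxy itself, and a plain-HTTP proxy answering
    # with an HTTP response is what surfaces as "tls: oversized record received".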

stgraber commented 6 years ago

Did changing core.proxy_https to 'http://10.154.8.58:3128' fix that problem for you?

gaetanquentin commented 6 years ago

Yes, it did. Thanks!