canonical / lxd


lxc copy fails due to socket issue #6478

Closed: eviweb closed this issue 4 years ago

eviweb commented 4 years ago

Required information

Issue description

Cannot copy a container from my local host to a remote server due to a socket issue.
Running lxc copy --mode=relay alpine remote:alpine results in the following error:

Error: Error transferring container data: exit status 2

~ lxc monitor
location: none
metadata:
  context: {}
  level: dbug
  message: 'New event listener: 429b58f6-e24b-4fed-bead-8bcee2716303'
timestamp: "2019-11-20T15:34:44.851367052+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:00.149792615+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/instances/alpine
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:00.499571534+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/events
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:00.506759812+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: POST
    url: /1.0/instances/alpine
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:00.508901133+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'New event listener: c4749808-8b86-4052-aa41-acbb4c548d73'
timestamp: "2019-11-20T15:35:00.507753839+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'New websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:00.516506417+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Started websocket operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:00.516942648+01:00"
type: logging

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T15:35:00.514494973+01:00"
  description: Migrating container
  err: ""
  id: aec28c75-415c-4624-8af3-eeda70802fb1
  location: none
  may_cancel: false
  metadata:
    control: 1f4a33c256786500eebb969f2bd67a9a6e695c4bebab59e55782f51cb6141c25
    fs: 7955ae3183be43ff2f1a03a113d834aac43213913db974cb1b1b93c07b0fb5ea
  resources:
    containers:
    - /1.0/containers/alpine
  status: Pending
  status_code: 105
  updated_at: "2019-11-20T15:35:00.514494973+01:00"
timestamp: "2019-11-20T15:35:00.516907491+01:00"
type: operation

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T15:35:00.514494973+01:00"
  description: Migrating container
  err: ""
  id: aec28c75-415c-4624-8af3-eeda70802fb1
  location: none
  may_cancel: false
  metadata:
    control: 1f4a33c256786500eebb969f2bd67a9a6e695c4bebab59e55782f51cb6141c25
    fs: 7955ae3183be43ff2f1a03a113d834aac43213913db974cb1b1b93c07b0fb5ea
  resources:
    containers:
    - /1.0/containers/alpine
  status: Running
  status_code: 103
  updated_at: "2019-11-20T15:35:00.514494973+01:00"
timestamp: "2019-11-20T15:35:00.52063121+01:00"
type: operation

location: none
metadata:
  context: {}
  level: dbug
  message: 'Connected websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:01.19186315+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/aec28c75-415c-4624-8af3-eeda70802fb1/websocket?secret=1f4a33c256786500eebb969f2bd67a9a6e695c4bebab59e55782f51cb6141c25
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:01.19181077+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Handled websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:01.192035458+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Connected websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:01.416363995+01:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/aec28c75-415c-4624-8af3-eeda70802fb1/websocket?secret=7955ae3183be43ff2f1a03a113d834aac43213913db974cb1b1b93c07b0fb5ea
    user: ""
  level: dbug
  message: Handling
timestamp: "2019-11-20T15:35:01.416307099+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Handled websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:01.416533335+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T15:35:01.423099413+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T15:35:01.435544244+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: The other side does not support pre-copy
timestamp: "2019-11-20T15:35:01.651037803+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Updated metadata for websocket Operation: aec28c75-415c-4624-8af3-eeda70802fb1'
timestamp: "2019-11-20T15:35:24.661505319+01:00"
type: logging

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T15:35:00.514494973+01:00"
  description: Migrating container
  err: ""
  id: aec28c75-415c-4624-8af3-eeda70802fb1
  location: none
  may_cancel: false
  metadata:
    control: 1f4a33c256786500eebb969f2bd67a9a6e695c4bebab59e55782f51cb6141c25
    fs: 7955ae3183be43ff2f1a03a113d834aac43213913db974cb1b1b93c07b0fb5ea
    fs_progress: 'alpine: 8.22MB (357.63kB/s)'
  resources:
    containers:
    - /1.0/containers/alpine
  status: Running
  status_code: 103
  updated_at: "2019-11-20T15:35:24.661462929+01:00"
timestamp: "2019-11-20T15:35:24.662181998+01:00"
type: operation

location: none
metadata:
  context: {}
  level: dbug
  message: 'Got err writing writev unix /var/snap/lxd/common/lxd/unix.socket->@: writev:
    broken pipe'
timestamp: "2019-11-20T15:35:46.924791626+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T15:35:46.941954669+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Failure for websocket operation: aec28c75-415c-4624-8af3-eeda70802fb1:
    websocket: close 1006 (abnormal closure): unexpected EOF'
timestamp: "2019-11-20T15:35:46.971053533+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T15:35:46.970999729+01:00"
type: logging

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T15:35:00.514494973+01:00"
  description: Migrating container
  err: 'websocket: close 1006 (abnormal closure): unexpected EOF'
  id: aec28c75-415c-4624-8af3-eeda70802fb1
  location: none
  may_cancel: false
  metadata:
    control: 1f4a33c256786500eebb969f2bd67a9a6e695c4bebab59e55782f51cb6141c25
    fs: 7955ae3183be43ff2f1a03a113d834aac43213913db974cb1b1b93c07b0fb5ea
    fs_progress: 'alpine: 8.22MB (357.63kB/s)'
  resources:
    containers:
    - /1.0/containers/alpine
  status: Failure
  status_code: 400
  updated_at: "2019-11-20T15:35:24.661462929+01:00"
timestamp: "2019-11-20T15:35:46.971484598+01:00"
type: operation

The error seems to come from here: 'Got err writing writev unix /var/snap/lxd/common/lxd/unix.socket->@: writev: broken pipe'.

~ ls -la /var/snap/lxd/common/lxd/unix.socket
srw-rw---- 1 root lxd 0 Nov 20 15:03 /var/snap/lxd/common/lxd/unix.socket
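
Since the broken pipe points at the local unix socket, a quick sanity check is to confirm the daemon is still listening and answering on it. The two commands below are just one way to do that (assuming the client user is in the lxd group):

~ sudo ss -xl | grep lxd/unix.socket   # daemon listening on the unix socket
~ lxc info | head                      # client request served over that same socket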

Both systems run the same versions of the LXD snap, rsync, ZFS, Ubuntu, and the Linux kernel.

I also tried the workaround for rsync:

Steps to reproduce

  1. Run lxc copy --mode=relay alpine remote:alpine

Information to attach

stgraber commented 4 years ago

Can you look at the log on the target server?

With migration, it's always a good idea to look at the debug output on both sides as it seems likely that rsync just failed on one of the two sides.
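
For reference, one straightforward way to capture the debug output on both ends is to keep a monitor running against each side while re-running the copy (remote: being the target used above):

~ lxc monitor                                   # source side, in one terminal
~ lxc monitor remote:                           # target side, in another
~ lxc copy --mode=relay alpine remote:alpine    # re-run the copy in a third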

eviweb commented 4 years ago

Thanks @stgraber. It seems to be a ZFS issue:

~ lxc monitor remote:
location: none
metadata:
  context: {}
  level: dbug
  message: 'New event listener: c0bedb74-2c99-4c76-9c00-381f51994ae8'
timestamp: "2019-11-20T17:00:56.146518394+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:57.935919829+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:57.935249843+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51636
    method: GET
    url: /1.0
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:57.935292516+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:58.192023825+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51638
    method: GET
    url: /1.0/events
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:58.192061609+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'New event listener: 2180827d-0be8-4110-b78a-9c34b5258ede'
timestamp: "2019-11-20T17:00:58.192527799+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:58.422296849+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51640
    method: POST
    url: /1.0/instances
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:58.422344481+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Responding to container create
timestamp: "2019-11-20T17:00:58.422364246+01:00"
type: logging

location: none
metadata:
  context:
    ephemeral: "false"
    name: alpine
    project: default
  level: info
  message: Creating container
timestamp: "2019-11-20T17:00:58.434964015+01:00"
type: logging

location: none
metadata:
  action: container-created
  source: /1.0/containers/alpine
timestamp: "2019-11-20T17:00:58.447538431+01:00"
type: lifecycle

location: none
metadata:
  context:
    ephemeral: "false"
    name: alpine
    project: default
  level: info
  message: Created container
timestamp: "2019-11-20T17:00:58.447518876+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Creating empty ZFS storage volume for container "alpine" on storage pool
    "default"
timestamp: "2019-11-20T17:00:58.44756113+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.519411018+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Created empty ZFS storage volume for container "alpine" on storage pool
    "default"
timestamp: "2019-11-20T17:00:58.519382872+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.527235745+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.52657372+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.54968787+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.549648968+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Mounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.556712496+01:00"
type: logging

location: none
metadata:
  context:
    name: alpine
    project: default
  level: dbug
  message: Unable to update backup.yaml at this time
timestamp: "2019-11-20T17:00:58.560995996+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.561026587+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Unmounted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:00:58.589610977+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'New websocket Operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:58.60445188+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Started websocket operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:58.6048124+01:00"
type: logging

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T17:00:58.603725741+01:00"
  description: Creating container
  err: ""
  id: 8b392205-64a3-4c81-891c-a32c7cc757f9
  location: none
  may_cancel: false
  metadata:
    control: f690db723470b6211ab2e4addd58a29d3436134eb56031669db73673b4e0e6f9
    fs: 2320b3777fc5367b3645b43330ae5db8238dda7948e13628e5107f41329fb970
  resources:
    containers:
    - /1.0/containers/alpine
  status: Pending
  status_code: 105
  updated_at: "2019-11-20T17:00:58.603725741+01:00"
timestamp: "2019-11-20T17:00:58.604789492+01:00"
type: operation

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T17:00:58.603725741+01:00"
  description: Creating container
  err: ""
  id: 8b392205-64a3-4c81-891c-a32c7cc757f9
  location: none
  may_cancel: false
  metadata:
    control: f690db723470b6211ab2e4addd58a29d3436134eb56031669db73673b4e0e6f9
    fs: 2320b3777fc5367b3645b43330ae5db8238dda7948e13628e5107f41329fb970
  resources:
    containers:
    - /1.0/containers/alpine
  status: Running
  status_code: 103
  updated_at: "2019-11-20T17:00:58.603725741+01:00"
timestamp: "2019-11-20T17:00:58.605080451+01:00"
type: operation

location: none
metadata:
  context: {}
  level: dbug
  message: 'Connected websocket Operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:58.842995358+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:58.842919231+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51642
    method: GET
    url: /1.0/operations/8b392205-64a3-4c81-891c-a32c7cc757f9/websocket?secret=f690db723470b6211ab2e4addd58a29d3436134eb56031669db73673b4e0e6f9
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:58.842969586+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Handled websocket Operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:58.843141605+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:59.076447901+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Connected websocket Operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:59.076538416+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Handled websocket Operation: 8b392205-64a3-4c81-891c-a32c7cc757f9'
timestamp: "2019-11-20T17:00:59.076825603+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51644
    method: GET
    url: /1.0/operations/8b392205-64a3-4c81-891c-a32c7cc757f9/websocket?secret=2320b3777fc5367b3645b43330ae5db8238dda7948e13628e5107f41329fb970
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:59.076513343+01:00"
type: logging

location: none
metadata:
  context:
    name: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Found cert
timestamp: "2019-11-20T17:00:59.653873313+01:00"
type: logging

location: none
metadata:
  context:
    ip: 10.2.120.43:51646
    method: GET
    url: /1.0/operations/8b392205-64a3-4c81-891c-a32c7cc757f9
    user: 654a84a954bf415a5f4644572274746748fed0fb0ef990c70215113f01010efd
  level: dbug
  message: Handling
timestamp: "2019-11-20T17:00:59.653916405+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Didn't write all of buf
timestamp: "2019-11-20T17:01:44.414882129+01:00"
type: logging

location: none
metadata:
  context: {}
  level: eror
  message: "Problem with zfs recv: invalid option 'o'\nusage:\n\treceive [-vnFu] <filesystem|volume|snapshot>\n\treceive
    [-vnFu] [-d | -e] <filesystem>\n\nFor the property list, run: zfs set|get\n\nFor
    the delegated permission list, run: zfs allow|unallow\n"
timestamp: "2019-11-20T17:01:44.415156954+01:00"
type: logging

location: none
metadata:
  context: {}
  level: eror
  message: 'zfs list failed: Failed to run: zfs list -t snapshot -o name -H -d 1 -s
    creation -r rpool/virtual/lxd/containers/alpine: cannot open ''rpool/virtual/lxd/containers/alpine'':
    dataset does not exist'
timestamp: "2019-11-20T17:01:44.418927609+01:00"
type: logging

location: none
metadata:
  context:
    err: exit status 2
  level: eror
  message: Error during migration sink
timestamp: "2019-11-20T17:01:44.419215215+01:00"
type: logging

location: none
metadata:
  context: {}
  level: eror
  message: 'Failed listing snapshots post migration: Failed to list ZFS snapshots:
    Failed to run: zfs list -t snapshot -o name -H -d 1 -s creation -r rpool/virtual/lxd/containers/alpine:
    cannot open ''rpool/virtual/lxd/containers/alpine'': dataset does not exist'
timestamp: "2019-11-20T17:01:44.419033+01:00"
type: logging

location: none
metadata:
  context:
    created: 2019-11-20 17:00:58.430597823 +0100 CET
    ephemeral: "false"
    name: alpine
    project: default
    used: 1970-01-01 01:00:00 +0100 CET
  level: info
  message: Deleting container
timestamp: "2019-11-20T17:01:44.419337717+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: Deleting ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:01:44.422471561+01:00"
type: logging

location: none
metadata:
  action: container-deleted
  source: /1.0/containers/alpine
timestamp: "2019-11-20T17:01:44.434682175+01:00"
type: lifecycle

location: none
metadata:
  context: {}
  level: dbug
  message: Deleted ZFS storage volume for container "alpine" on storage pool "default"
timestamp: "2019-11-20T17:01:44.429070296+01:00"
type: logging

location: none
metadata:
  context: {}
  level: dbug
  message: 'Failure for websocket operation: 8b392205-64a3-4c81-891c-a32c7cc757f9:
    Error transferring container data: exit status 2'
timestamp: "2019-11-20T17:01:44.434720797+01:00"
type: logging

location: none
metadata:
  context:
    created: 2019-11-20 17:00:58.430597823 +0100 CET
    ephemeral: "false"
    name: alpine
    project: default
    used: 1970-01-01 01:00:00 +0100 CET
  level: info
  message: Deleted container
timestamp: "2019-11-20T17:01:44.434664644+01:00"
type: logging

location: none
metadata:
  class: websocket
  created_at: "2019-11-20T17:00:58.603725741+01:00"
  description: Creating container
  err: 'Error transferring container data: exit status 2'
  id: 8b392205-64a3-4c81-891c-a32c7cc757f9
  location: none
  may_cancel: false
  metadata:
    control: f690db723470b6211ab2e4addd58a29d3436134eb56031669db73673b4e0e6f9
    fs: 2320b3777fc5367b3645b43330ae5db8238dda7948e13628e5107f41329fb970
  resources:
    containers:
    - /1.0/containers/alpine
  status: Failure
  status_code: 400
  updated_at: "2019-11-20T17:00:58.603725741+01:00"
timestamp: "2019-11-20T17:01:44.435093958+01:00"
type: operation

location: none
metadata:
  context: {}
  level: dbug
  message: 'Got error reading migration control socket websocket: close 1000 (normal)'
timestamp: "2019-11-20T17:02:05.021211579+01:00"
type: logging

eviweb commented 4 years ago

~ modinfo zfs
filename:       /lib/modules/4.4.0-169-generic/kernel/zfs/zfs/zfs.ko
version:        0.6.5.6-0ubuntu28
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     99F1D0FED2F291CA7AED0C6
depends:        spl,znvpair,zunicode,zcommon,zavl
retpoline:      Y
vermagic:       4.4.0-169-generic SMP mod_unload modversions 
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zio_delay_max:Max zio millisec delay before posting event (int)
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm:           zil_replay_disable:Disable intent logging replay (int)
parm:           zfs_nocacheflush:Disable cache flushes (int)
parm:           zil_slog_limit:Max commit bytes to separate log device (ulong)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_read_chunk_size:Bytes to read per chunk (long)
parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm:           zfs_vdev_mirror_switch_us:Switch mirrors every N usecs (int)
parm:           zfs_vdev_scheduler:I/O scheduler (charp)
parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm:           metaslabs_per_vdev:Divide added vdev into approximately (but no more than) this number of metaslabs (int)
parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm:           zfs_read_history:Historic statistics for the last N reads (int)
parm:           zfs_read_history_hits:Include cache hits in read history (int)
parm:           zfs_txg_history:Historic statistics for the last N txgs (int)
parm:           zfs_flags:Set additional debugging flags (uint)
parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm:           zfs_deadman_synctime_ms:Expiration time in milliseconds (ulong)
parm:           zfs_deadman_enabled:Enable deadman timer (int)
parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
parm:           spa_slop_shift:Reserved free space in pool (int)
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm:           zfs_autoimport_disable:Disable pool import at module load (int)
parm:           spa_load_verify_maxinflight:Max concurrent traversal I/Os while verifying pool during import -X (int)
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm:           spa_load_verify_data:Set to traverse data on pool import (int)
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm:           metaslab_aliquot:allocation granularity (a.k.a. stripe size) (ulong)
parm:           metaslab_debug_load:load all metaslabs when pool is first opened (int)
parm:           metaslab_debug_unload:prevent metaslabs from being unloaded (int)
parm:           metaslab_preload_enabled:preload potential metaslabs during reassessment (int)
parm:           zfs_mg_noalloc_threshold:percentage of free space for metaslab group to allow allocation (int)
parm:           zfs_mg_fragmentation_threshold:fragmentation for metaslab group to allow allocation (int)
parm:           zfs_metaslab_fragmentation_threshold:fragmentation for metaslab to allow allocation (int)
parm:           metaslab_fragmentation_factor_enabled:use the fragmentation metric to prefer less fragmented metaslabs (int)
parm:           metaslab_lba_weighting_enabled:prefer metaslabs with lower LBAs (int)
parm:           metaslab_bias_enabled:enable metaslab group biasing (int)
parm:           zfs_zevent_len_max:Max event queue length (int)
parm:           zfs_zevent_cols:Max event column width (int)
parm:           zfs_zevent_console:Log events to the console (int)
parm:           zfs_top_maxinflight:Max I/Os per top-level (int)
parm:           zfs_resilver_delay:Number of ticks to delay resilver (int)
parm:           zfs_scrub_delay:Number of ticks to delay scrub (int)
parm:           zfs_scan_idle:Idle window in clock ticks (int)
parm:           zfs_scan_min_time_ms:Min millisecs to scrub per txg (int)
parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm:           zfs_free_max_blocks:Max number of blocks freed in one txg (ulong)
parm:           zfs_dirty_data_max_percent:percent of ram can be dirty (int)
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm:           zfs_delay_min_dirty_percent:transaction delay threshold (int)
parm:           zfs_dirty_data_max:determines the dirty space limit (ulong)
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm:           zfs_dirty_data_sync:sync txg when this much dirty data (ulong)
parm:           zfs_delay_scale:how quickly delay approaches infinity (ulong)
parm:           zfs_max_recordsize:Max allowed record size (int)
parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm:           zfetch_block_cap:Max number of blocks to fetch at a time (uint)
parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
parm:           zfs_mdcomp_disable:Disable meta data compression (int)
parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm:           zfs_dbuf_state_index:Calculate arc header index (int)
parm:           zfs_arc_min:Min arc size (ulong)
parm:           zfs_arc_max:Max arc size (ulong)
parm:           zfs_arc_meta_limit:Meta limit for arc size (ulong)
parm:           zfs_arc_meta_min:Min arc metadata (ulong)
parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_adjust_meta (int)
parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm:           zfs_arc_grow_retry:Seconds before growing arc size (int)
parm:           zfs_arc_p_aggressive_disable:disable aggressive arc_p grow (int)
parm:           zfs_arc_p_dampener_disable:disable arc_p adapt dampener (int)
parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim) (int)
parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p (int)
parm:           zfs_disable_dup_eviction:disable duplicate buffer eviction (int)
parm:           zfs_arc_average_blocksize:Target average block size (int)
parm:           zfs_arc_min_prefetch_lifespan:Min life of prefetch block (int)
parm:           zfs_arc_num_sublists_per_state:Number of sublists used in each of the ARC state lists (int)
parm:           l2arc_write_max:Max write bytes per interval (ulong)
parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm:           l2arc_headroom:Number of max device writes to precache (ulong)
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
parm:           l2arc_nocompress:Skip compressing L2ARC buffers (int)
parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
parm:           l2arc_norw:No reads during writes (int)
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes (int)
parm:           zfs_arc_sys_free:System free memory target size in bytes (ulong)

The zfs receive command in ZFS 0.6.5.6 does not seem to support the -o option.
From its usage output:

receive [-vnFu] <filesystem|volume|snapshot>
receive [-vnFu] [-d | -e] <filesystem>
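
A quick way to check this on any host (the two commands below are only illustrative) is to look at the kernel module version and at the receive usage text; -o on receive is missing from the 0.6.5 usage above but present in the 0.7 series:

~ modinfo zfs | grep ^version      # 0.6.5.6 here
~ zfs receive 2>&1 | grep -- '-o'  # prints the -o usage line on 0.7+, nothing on 0.6.5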

stgraber commented 4 years ago

Sent a branch which should fix that.

One alternative for you would be to install the HWE kernel (4.15), which will get the snap to use ZFS 0.7 and avoid this issue.
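
Assuming the host is Ubuntu 16.04 (which the 4.4 kernel and ZFS 0.6.5 suggest), that would look roughly like this, with the snap picking up ZFS 0.7 after the reboot:

~ sudo apt update
~ sudo apt install linux-generic-hwe-16.04   # HWE metapackage for 16.04, brings in 4.15
~ sudo reboot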

eviweb commented 4 years ago

Many thanks for your help, @stgraber.