Closed: mike59999 closed this issue 6 years ago
Hi, I wonder, you are on XenServer 7.5 right?
Note: our commit is a workaround for a changed behavior in the latest XenServer. In short, if the data stream slows down (because of the NFS share), XS no longer waits more than ~100ms to send data (before, there was probably no timeout, or at least a much longer one, not something that short).
As you can imagine, a stream will have some backpressure when the NFS share slows down. So we tried to implement a small cache to keep "fetching" data from XS without stalling it, to avoid the connection being broken. Clearly, this is a workaround, not a good solution.
We already opened a case at Citrix, but you know, we are far from being on top of their list… (especially since XCP-ng)
FYI, the link toward the issue created at Citrix: https://bugs.xenserver.org/browse/XSO-873
Hi,
sorry for the late response. Yes, we are using version 7.5. Can you tell me which command is used for the export of the disk? Is it export_raw_vdi?
So you are definitely experiencing the issue. You can't reproduce it with `xe` because it doesn't use HTTP to fetch data but a local socket (`xe` is local to the host, XO isn't). You can avoid the issue if your remote is fast enough to not let the XS export "starve" for more than 100ms.
The problem is very likely in the HTTP lib or its settings regarding some kind of timeout. I'd like to know where exactly to check in the XS code, but I need an answer from Citrix.
Not sure if I misunderstand something. Isn't XOA using the XAPI (for example https://xenhost/export_raw_vdi?vdi=xyz)?
Yes, but it's not a "command" in the `xe` way. It's calling XAPI, which returns us a URL from which we then fetch the VDI content over HTTP. When you ask for an export in VHD, basically: XAPI spawns a `vhd-tool` command that creates an HTTP handler. Then XO creates a NodeJS stream: it will "GET" the HTTP content (the "input" stream), then pipe the result into a "remote" (e.g. an NFS share), which is the "output" stream. If the remote stalls for more than ~100ms, then by backpressure, the input stream (doing the GET) won't fetch new data for the same amount of time.
And since 7.5, this breaks the XAPI export with `VDI_IO_ERROR`.
We are investigating on this piece of code: https://github.com/djs55/vhd-tool/commit/1ec2644866294bae907e53094796b678e8c6306d
Not sure if it helps, but here is part of xensource.log. The error message that also appears in XOA is on line 1962. xensource.log
Yeah, I'm pretty sure it's related to the piece of code I posted just before, see for yourself:
Jul 30 20:30:35 xenhez21001 xapi: [error|xenhez21001|455398 INET :::80|[XO] VDI Export R:dba1ee443b84|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
Jul 30 20:30:35 xenhez21001 xapi: [error|xenhez21001|455398 INET :::80|[XO] VDI Export R:dba1ee443b84|vhd_tool_wrapper] vhd-tool output: vhd-tool: internal error, uncaught exception:#012 Unix.Unix_error(Unix.EAGAIN, "sendfile", "")#012 Raised at file "src/core/lwt.ml", line 3008, characters 20-29#012 Called from file "src/unix/lwt_main.ml", line 42, characters 8-18#012 Called from file "src/impl.ml", line 811, characters 4-23#012 Called from file "src/cmdliner_term.ml", line 27, characters 19-24#012 Called from file "src/cmdliner.ml", line 27, characters 27-34#012 Called from file "src/cmdliner.ml", line 106, characters 32-39
Ok, we have something cooking: https://github.com/xapi-project/vhd-tool/pull/68
The fix will probably be embedded in XCP-ng 7.5 RC1, so people can confirm we solved the issue. We can't say how long Citrix will take to merge this into XenServer.
We'll remove our "workaround" in XO which isn't good anyway.
The workaround works fine for me, at least with concurrency set to 1.
Would it be possible for you to keep it until Citrix merges this into XenServer? As this is the only way I can use XO in my environment, I will probably not be able to use it until the fix is implemented by Citrix if this workaround is removed.
I don't think we will; the workaround probably has too many side effects. We don't want to ruin the backup experience for everyone else.
This workaround can't deal with all cases anyway; it really depends on your remote speed. If it's blocked for more than 2 seconds, it fails with `VDI_IO_ERROR`.
Do you have Citrix support?
Alternatively, you can stay on this XO version until Citrix releases the fix.
Ok, I see. Thank you for all the work on this matter.
Do you have a Citrix support contract? If yes, contact them to speed up the process.
If you don't, maybe XCP-ng is worth a shot?
I've opened a ticket with Citrix support and referenced https://bugs.xenserver.org/browse/XSO-873 - hopefully that helps things along.
Thanks @lbratch
Note: XCP-ng 7.5 RC1 is available with the fix inside it! https://xcp-ng.org/2018/07/31/release-candidate-for-xcp-ng-7-5/
Thank you. I need to qualify it before moving to XCP-ng. It is tempting.
I'm glad I checked issues before upgrading to 7.5... I've opened a ticket as well to keep tabs on the progress.
After installing a 10G NIC in the backup server, the backups are working now. Tested with a concurrency of 8 without problems.
Yeah, I'm not surprised. It depends on your remote being as fast as (or faster than) the XS export speed. See https://xen-orchestra.com/blog/full-stack-power/ for more details on the issue.
We are now working with Citrix on an optimal solution for this issue. And as I said earlier, our initial fix is already embedded in XCP-ng 7.5 :+1:
This is now fixed in XS 7.6. Tested working on multiple pools.
Hehe, thanks to us :dancer:
Thanks for your feedback @lbratch
@olivierlambert I just started experiencing this issue (Error: VDI_IO_ERROR(Device I/O errors)) last night. I'm on XCP-ng 7.5 (fully patched), so I thought this was already addressed.
It is. It can be another issue. Please restart the toolstack first.
Did that, but it still fails at approximately the 1 hour mark. Should I open a separate issue?
Please provide the full stack trace. Also, I'm unsure about creating a new issue: it's hard to triage without more info (XO packages, versions, node, and the full stacktrace from xensource.log).
I'd like to have the trace from the XCP-ng log please :)
Also your exact XO version
Version is 5.28 for both xo-server and xo-web.
Also, attached is a dump from the xensource.log xcp.zip
Thanks, taking a look.
"Cleaned version" (with only what's important):
[XO] VDI Export R:939f07f5b818|mscgen] xapi=>xapi [label="session.logout"];
[XO] VDI Export R:939f07f5b818|taskhelper] the status of R:939f07f5b818 is failure; cannot set it to `failure
[XO] VDI Export R:939f07f5b818|taskhelper] forwarded task destroyed
VDI.export_raw_vdi [XO] VDI Export R:939f07f5b818 failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
VDI.export_raw_vdi Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
VDI.export_raw_vdi 1/13 xapi @ xenserver-slnqfzrh Raised at file ocaml/xapi/vhd_tool_wrapper.ml, line 59
VDI.export_raw_vdi 2/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 3/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 4/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/export_raw_vdi.ml, line 47
VDI.export_raw_vdi 5/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 6/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 7/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/export_raw_vdi.ml, line 54
VDI.export_raw_vdi 8/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/export_raw_vdi.ml, line 65
VDI.export_raw_vdi 9/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/server_helpers.ml, line 73
VDI.export_raw_vdi 10/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/server_helpers.ml, line 91
VDI.export_raw_vdi 11/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 12/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 13/13 xapi @ xenserver-slnqfzrh Called from file lib/backtrace.ml, line 177
VDI.export_raw_vdi D:e970c98c64c8|backtrace]
VDI.export_raw_vdi D:e970c98c64c8|mscgen] xapi=>xapi [label="session.logout"];
VDI.export_raw_vdi VDI.export_raw_vdi D:e970c98c64c8 failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
VDI.export_raw_vdi Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
VDI.export_raw_vdi 1/13 xapi @ xenserver-slnqfzrh Raised at file lib/debug.ml, line 187
VDI.export_raw_vdi 2/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 3/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 4/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 5/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 6/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/xapi_http.ml, line 185
VDI.export_raw_vdi 7/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 8/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 9/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/server_helpers.ml, line 73
VDI.export_raw_vdi 10/13 xapi @ xenserver-slnqfzrh Called from file ocaml/xapi/server_helpers.ml, line 91
VDI.export_raw_vdi 11/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
VDI.export_raw_vdi 12/13 xapi @ xenserver-slnqfzrh Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
VDI.export_raw_vdi 13/13 xapi @ xenserver-slnqfzrh Called from file lib/backtrace.ml, line 177
VDI.export_raw_vdi D:8c81a0ee962b|backtrace]
|backtrace] VDI.export_raw_vdi D:8c81a0ee962b failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
|backtrace] Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
|backtrace] 1/1 xapi @ xenserver-slnqfzrh Raised at file (Thread 6217 has no backtrace table. Was with_backtraces called?, line 0
|backtrace]
|xapi] Unhandled Api_errors.Server_error(VDI_IO_ERROR, [ Device I/O errors ])
@nraynaud does it ring any bell?
Hi all, I upgraded my XO installation (from source) to version 5.28.0 today and now I experience the exact same behaviour. (The only difference is that this pool is still on Citrix XenServer 7.1.)
Delta backup failing during transfer with "VDI_IO_ERROR(Device I/O errors)". My xensource.log looks exactly like the cleaned version above.
Just to let you know. Kind regards, Alex
Hmm, if you install from the sources, it's possible that you are using old versions of `xo-server`'s dependencies, because the new ones have not been published yet.
The release is scheduled for the beginning of next week; we'll see if it fixes your problem.
Hi all, the underlying error in @Danp2's log is Unix.Unix_error(Unix.ECONNRESET, "sendfile", "") in vhd-tool.
As a rule of thumb, if the VDI_IO_ERROR is linked to vhd_tool_wrapper.ml, there is more information about vhd-tool earlier in the file.
here is the mangled stacktrace:
Oct 19 07:58:51 xenserver-slnqfzrh xapi: [error|xenserver-slnqfzrh|8321 INET :::80|[XO] VDI Export R:f1c0d9307a6c|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
Oct 19 07:58:51 xenserver-slnqfzrh xapi: [error|xenserver-slnqfzrh|8321 INET :::80|[XO] VDI Export R:f1c0d9307a6c|vhd_tool_wrapper] vhd-tool output: vhd-tool: internal error, uncaught exception:#012 Unix.Unix_error(Unix.ECONNRESET, "sendfile", "")#012 Raised at file "src/core/lwt.ml", line 3008, characters 20-29#012 Called from file "src/unix/lwt_main.ml", line 42, characters 8-18#012 Called from file "src/impl.ml", line 811, characters 4-23#012 Called from file "src/cmdliner_term.ml", line 27, characters 19-24#012 Called from file "src/cmdliner.ml", line 27, characters 27-34#012 Called from file "src/cmdliner.ml", line 106, characters 32-39
I am trying to understand the issue; sendfile(2) is not meant to raise this error. But we are probably in networking territory if you want to look around.
@julien-f In my case, my XO was last updated on 10/5 right after 5.28.0 was released.
FWIW, all 3 of my failing VMs were recently rebooted after performing a Windows Update.
@Danp2 Can you look into the logs of the destination of your backup, please? It looks like the destination closed the TCP socket, and it might have explained why in its logs (https://stackoverflow.com/questions/17245881/node-js-econnreset#17637900).
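To surface a peer reset like this instead of an unhandled exception, error handlers can be attached on both sides of a piped stream and the syscall/code logged. This is a hypothetical diagnostic helper, not XO code.

```javascript
// Attach 'error' listeners to both ends of a pipe so an ECONNRESET
// from the peer is logged with context instead of crashing the process.
function pipeWithDiagnostics (input, output) {
  const onError = side => err => {
    if (err.code === 'ECONNRESET') {
      console.error(`${side} stream: connection reset by peer (syscall: ${err.syscall || 'unknown'})`)
    } else {
      console.error(`${side} stream error:`, err)
    }
  }
  input.on('error', onError('input'))
  output.on('error', onError('output'))
  input.pipe(output)
}
```

In this case the reset is on the destination side, so the destination's own logs (as suggested above) are the place where the reason would show up.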
Just to verify whether it's a problem with the NFS remote, I added a local disk to my XO VM, created a new remote of type local, and changed the remote in the backup job to the new local one.
When I start the job now, I see the folders being created (xo-vm-backups/[vm-uuid]) on the local disk. A few seconds later the transfer stops again with the same error "VDI_IO_ERROR(Device I/O errors)".
So if it is a network issue, it has to be between XenServer and XO, not between XO and the remote. The disks are all on local storage.
Additional info: the folders are being created, but there are no files inside!
@AlexD2006 That's a good point, I forgot that we do the proxying in XO. Are you on the same infrastructure as @Danp2? Can you check that you have the ECONNRESET like above before we lump the cases together, please?
Do you have something in the `xo-server` output? We don't have any reported cases in production (XOA). @AlexD2006 and @Danp2, can you try the same thing with an up-to-date XOA? It would be interesting to compare!
Edit: if you need an extended trial to test, please let me know your XO registered email so I can unlock it.
> ECONNRESET

As I said before, I am still on Citrix XenServer 7.1, with XO freshly updated (from sources, not XOA) to 5.28.0. No ECONNRESET as far as I can see. Here is my xensource.log from the last (local-remote) try:
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4012 INET :::80||xapi] Got the jsonrpc body: {"id":0,"jsonrpc":"2.0","method":"task.create","params":["OpaqueRef:a785f7ac-4221-3b82-a059-09b0277a6cb2","[XO] VDI Export","hobbit41 0"]}
Oct 19 18:50:52 wklxen33 xapi: [ info|wklxen33|4012 INET :::80|task.create D:c15c58e05cef|taskhelper] task [XO] VDI Export R:9d962471223f (uuid:1939fc2f-bf7e-671f-ae3a-edcf252fb052) created (trackid=2c7a0d5a482a31957f5463bd0e748045) by task D:c15c58e05cef
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80||export_raw_vdi] export_raw_vdi handler
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|mscgen] xapi=>xapi [label="session.slave_login"];
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|export_raw_vdi] Checking whether localhost can see SR: OpaqueRef:35fa24fe-fa8b-0ba0-a1ae-985c357df99c
Oct 19 18:50:52 wklxen33 xapi: [ info|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|taskhelper] task [XO] VDI Export R:9d962471223f forwarded (trackid=2c7a0d5a482a31957f5463bd0e748045)
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|export_raw_vdi] export_raw_vdi task_id = OpaqueRef:9d962471-223f-de6c-f7bb-330a4075fc97; vdi = OpaqueRef:8756ae9c-1d30-07c7-4f74-5e29a8b850d2; format = vhd; content-type = application/vhd; filename = 2986adbb-de6e-4470-9e35-41f407b8a576.vhd
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VBD.create"];
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VBD.get_uuid"];
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VM.get_uuid"];
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|xapi] created VBD (uuid e2b7e1b8-0437-9383-1460-429f85e62292); attempting to hotplug to VM (uuid: b7dc49f7-b419-4192-9484-6ed8b93b9729)
Oct 19 18:50:52 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VBD.plug"];
Oct 19 18:50:54 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|export_raw_vdi] Copying VDI contents...
Oct 19 18:50:54 wklxen33 xapi: [ info|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|vhd_tool_wrapper] Executing /bin/vhd-tool stream --source-protocol none --source-format hybrid --source /dev/sm/backend/9822f3cb-45e7-aa6d-7c42-18542b99e222/2986adbb-de6e-4470-9e35-41f407b8a576:/dev/VG_XenStorage-9822f3cb-45e7-aa6d-7c42-18542b99e222/VHD-2986adbb-de6e-4470-9e35-41f407b8a576 --destination-protocol none --destination-format vhd --destination-fd 56bb8e1d-9389-43a3-b25a-9346bf9de685 --tar-filename-prefix --progress --machine --direct --path /dev/mapper:.
Oct 19 18:51:09 wklxen33 xapi: [error|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
Oct 19 18:51:09 wklxen33 xapi: [error|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|vhd_tool_wrapper] vhd-tool output: vhd-tool: internal error, uncaught exception:#012 Unix.Unix_error(Unix.EPIPE, "sendfile", "")
Oct 19 18:51:09 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="session.slave_login"];
Oct 19 18:51:09 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VBD.unplug"];
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="VBD.destroy"];
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|mscgen] xapi=>xapi [label="session.logout"];
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|taskhelper] the status of R:9d962471223f is failure; cannot set it to `failure
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|[XO] VDI Export R:9d962471223f|taskhelper] forwarded task destroyed
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] [XO] VDI Export R:9d962471223f failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 1/13 xapi @ wklxen33 Raised at file vhd_tool_wrapper.ml, line 61
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 2/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 3/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 4/13 xapi @ wklxen33 Called from file export_raw_vdi.ml, line 47
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 5/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 6/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 7/13 xapi @ wklxen33 Called from file export_raw_vdi.ml, line 54
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 8/13 xapi @ wklxen33 Called from file export_raw_vdi.ml, line 65
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 9/13 xapi @ wklxen33 Called from file server_helpers.ml, line 73
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 10/13 xapi @ wklxen33 Called from file server_helpers.ml, line 91
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 11/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 12/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace] 13/13 xapi @ wklxen33 Called from file lib/backtrace.ml, line 176
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|backtrace]
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:216af653d07a|mscgen] xapi=>xapi [label="session.logout"];
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] VDI.export_raw_vdi D:216af653d07a failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 1/13 xapi @ wklxen33 Raised at file lib/debug.ml, line 185
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 2/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 3/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 4/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 5/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 6/13 xapi @ wklxen33 Called from file xapi_http.ml, line 199
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 7/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 8/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 9/13 xapi @ wklxen33 Called from file server_helpers.ml, line 73
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 10/13 xapi @ wklxen33 Called from file server_helpers.ml, line 91
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 11/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 22
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 12/13 xapi @ wklxen33 Called from file lib/pervasiveext.ml, line 26
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace] 13/13 xapi @ wklxen33 Called from file lib/backtrace.ml, line 176
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80|VDI.export_raw_vdi D:bd8905131301|backtrace]
Oct 19 18:51:11 wklxen33 xapi: [error|wklxen33|4013 INET :::80||backtrace] VDI.export_raw_vdi D:bd8905131301 failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4189 INET :::80||xapi] Got the jsonrpc body: {"id":0,"jsonrpc":"2.0","method":"VM.remove_from_other_config","params":["OpaqueRef:a785f7ac-4221-3b82-a059-09b0277a6cb2","OpaqueRef:a53496e4-8943-4197-ebad-0b44d3a8b632","xo:backup:exported"]}
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4189 INET :::80||xapi] Got the jsonrpc body: {"id":0,"jsonrpc":"2.0","method":"VM.remove_from_other_config","params":["OpaqueRef:a785f7ac-4221-3b82-a059-09b0277a6cb2","OpaqueRef:a53496e4-8943-4197-ebad-0b44d3a8b632","xo:backup:exported"]}
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4190 INET :::80||xapi] Got the jsonrpc body: {"id":0,"jsonrpc":"2.0","method":"VM.add_to_other_config","params":["OpaqueRef:a785f7ac-4221-3b82-a059-09b0277a6cb2","OpaqueRef:a53496e4-8943-4197-ebad-0b44d3a8b632","xo:backup:exported","true"]}
Oct 19 18:51:11 wklxen33 xapi: [debug|wklxen33|4190 INET :::80||xapi] Got the jsonrpc body: {"id":0,"jsonrpc":"2.0","method":"VM.add_to_other_config","params":["OpaqueRef:a785f7ac-4221-3b82-a059-09b0277a6cb2","OpaqueRef:a53496e4-8943-4197-ebad-0b44d3a8b632","xo:backup:exported","true"]}
Testing with XOA would be really helpful. If it works there (XOA fully up to date), it means we could just check what's different versus your current from-the-sources version and track this down easily.
Also, your node/npm versions could be useful. There are many things that could differ from a controlled env (XOA); we must be able to test both before digging further.
Downloading XOA right now and will give it a try. Will post a report after installing and testing.
Okay, again, if your trial is expired, ping me I can extend it :+1:
Mhhh, the registration shows me "error unknown error from the peer". What connections (URL/port) does the registration use? Maybe I can't reach it in this environment because our DC firewall is blocking it.
Use `xoa check` to see if you can reach https://xen-orchestra.com (the updater makes requests over HTTPS).
Yup, that's the problem. I can't reach it because your IP is not in the list of allowed sources in this environment. So I won't have a chance to fix this today; I'll get back tomorrow after talking to the network guys :-(
After the latest commit, the error described in issue #3205 still exists. Of 20 VMs, one is backed up and the other 19 fail with the reason "VDI_IO_ERROR(Device I/O errors)". We have 2 Xen nodes with the latest patches applied and NFS for the backup storage. Attached is the error log from XOA; if you need more logs, please tell me which files you need. Not sure if this helps, but the backup type of the delta is full because we created a new backup job.
xo.log
PS: we are currently using NFS for the VDIs, with a 10Gbit card in the Xen hosts as well. Only the backup server currently has a 1Gbit NIC because we have no free slots for a 10Gbit card.