Closed: vokac closed this issue 1 year ago.
Hi Petr,
I think the issue stems from the directory "/upload" being outside the scope defined in the token. I wonder whether this would succeed if you defined
dcache.upload-directory=.upload
(this needs to be defined primarily on the PnfsManager host and, I believe, on door hosts). Can you give it a try?
P.S. I expect WebDAV uploads to fail likewise. Do they not?
Similar error:
$ gfal-copy -f file:///etc/services roots://dcache.farm.particle.cz//wlcg/x
Copying file:///etc/services [FAILED] after 0s
gfal-copy error: 13 (Permission denied) - Error on XrdCl::CopyProcess::Run(): [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/c2759f19-1441-45df-935e-e549e0d45bcb/x (destination)
For the WLCG token issuer we use /wlcg as the base path:
gplazma.oidc.provider!wlcg = https://wlcg.cloud.cnaf.infn.it/ -profile=wlcg -prefix=/wlcg -authz-id=gid:1198
WebDAV uploads work fine.
Client details for a successful xrdcopy transfer:
[2023-01-13 04:33:28.475796 +0100][Debug ][Utility ] Initializing xrootd client version: v5.5.1
[2023-01-13 04:33:28.476204 +0100][Debug ][Utility ] Unable to process user config file: [ERROR] OS Error: no such file or directory
[2023-01-13 04:33:28.476492 +0100][Debug ][PlugInMgr ] Initializing plug-in manager...
[2023-01-13 04:33:28.476508 +0100][Debug ][PlugInMgr ] No default plug-in, loading plug-in configs...
[2023-01-13 04:33:28.476519 +0100][Debug ][PlugInMgr ] Processing plug-in definitions in /etc/xrootd/client.plugins.d...
[2023-01-13 04:33:28.476655 +0100][Debug ][PlugInMgr ] Trying to disable plug-in for '*'
[2023-01-13 04:33:28.476674 +0100][Debug ][PlugInMgr ] Processing plug-in definitions in /home/vokac/.xrootd/client.plugins.d...
[2023-01-13 04:33:28.476689 +0100][Debug ][PlugInMgr ] Unable to process directory /home/vokac/.xrootd/client.plugins.d: [ERROR] OS Error: no such file or directory
[2023-01-13 04:33:28.477923 +0100][Debug ][Utility ] Env: trying to get a non-existent string entry: pollerpreference
[2023-01-13 04:33:28.477938 +0100][Debug ][Poller ] Available pollers: built-in
[2023-01-13 04:33:28.477942 +0100][Debug ][Poller ] Attempting to create a poller according to preference: built-in
[2023-01-13 04:33:28.477946 +0100][Debug ][Poller ] Creating poller: built-in
[2023-01-13 04:33:28.477954 +0100][Debug ][Poller ] Creating and starting the built-in poller...
[2023-01-13 04:33:28.478149 +0100][Debug ][Poller ] Using 1 poller threads
[2023-01-13 04:33:28.478157 +0100][Debug ][TaskMgr ] Starting the task manager...
[2023-01-13 04:33:28.478184 +0100][Debug ][TaskMgr ] Task manager started
[2023-01-13 04:33:28.478190 +0100][Debug ][JobMgr ] Starting the job manager...
[2023-01-13 04:33:28.478239 +0100][Debug ][JobMgr ] Job manager started, 3 workers
[2023-01-13 04:33:28.478249 +0100][Debug ][TaskMgr ] Registering task: "FileTimer task" to be run at: [2023-01-13 04:33:28 +0100]
[2023-01-13 04:33:28.478274 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x13452c0 (message: kXR_stat (path: /wlcg/x, flags: none) ).
[2023-01-13 04:33:28.478318 +0100][Debug ][PostMaster ] Creating new channel to: roots://dcache.farm.particle.cz:1094
[2023-01-13 04:33:28.478422 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream parameters: Network Stack: IPAuto, Connection Window: 120, ConnectionRetry: 5, Stream Error Window: 1800
[2023-01-13 04:33:28.478443 +0100][Debug ][TaskMgr ] Registering task: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094" to be run at: [2023-01-13 04:33:43 +0100]
[2023-01-13 04:33:28.478549 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Found 2 address(es): [::ffff:147.231.25.80]:1094, [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:28.478580 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Attempting connection to [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:28.478644 +0100][Debug ][Poller ] Adding socket 0x13459f0 to the poller
[2023-01-13 04:33:28.479019 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Async connection call returned
[2023-01-13 04:33:28.479100 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out the initial hand shake + kXR_protocol
[2023-01-13 04:33:28.485339 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Got the server hand shake response (type: manager [], protocol version 500)
[2023-01-13 04:33:28.494863 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] kXR_protocol successful (type: manager [], protocol version 500)
[2023-01-13 04:33:28.495045 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out kXR_login request, username: vokac, cgi: ?xrd.cc=cz&xrd.tz=1&xrd.appname=xrdcp&xrd.info=&xrd.hostname=ui1.farm.particle.cz&xrd.rn=v5.5.1, dual-stack: true, private IPv4: false, private IPv6: false
[2023-01-13 04:33:28.495070 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.509897 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.515875 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.515947 +0100][Info ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake done.
[2023-01-13 04:33:28.518308 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Logged in, session: e4a5228469bfb0ec35bc74bb5d629449
[2023-01-13 04:33:28.518323 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authentication is required: &P=ztn,0:4096:&P=gsi,v:10400,c:ssl,ca:dec71a0b&P=unix
[2023-01-13 04:33:28.518329 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending authentication data
[2023-01-13 04:33:28.518800 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Trying to authenticate using ztn
[2023-01-13 04:33:28.520972 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authenticated with ztn.
[2023-01-13 04:33:28.520986 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream 0 connected (IPv6).
[2023-01-13 04:33:28.520993 +0100][Debug ][Utility ] Monitor library name not set. No monitoring
[2023-01-13 04:33:28.521029 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x13452c0 (message: kXR_stat (path: /wlcg/x, flags: none) ) from out-queu to in-queue.
[2023-01-13 04:33:28.577993 +0100][Debug ][ExDbgMsg ] [msg: 0x90040700] Assigned MsgHandler: 0x13452c0.
[2023-01-13 04:33:28.578026 +0100][Debug ][ExDbgMsg ] [handler: 0x13452c0] Removed MsgHandler: 0x13452c0 from the in-queue.
[2023-01-13 04:33:28.578185 +0100][Debug ][XRootD ] [dcache.farm.particle.cz:1094] Handling error while processing kXR_stat (path: /wlcg/x, flags: none): [ERROR] Error response: no such file or directory.
[2023-01-13 04:33:28.578237 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Calling MsgHandler: 0x13452c0 (message: kXR_stat (path: /wlcg/x, flags: none) ) with status: [ERROR] Error response: no such file or directory.
[2023-01-13 04:33:28.578278 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Destroying MsgHandler: 0x13452c0.
[2023-01-13 04:33:28.578535 +0100][Debug ][Utility ] CopyProcess: 2 jobs to prepare
[2023-01-13 04:33:28.578646 +0100][Debug ][Utility ] Creating a classic copy job, from file://localhost/etc/services to roots://dcache.farm.particle.cz:1094//wlcg/x
[2023-01-13 04:33:28.578754 +0100][Debug ][Utility ] Opening file://localhost/etc/services for reading
[2023-01-13 04:33:28.578876 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] Sending an open command
[2023-01-13 04:33:28.579062 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] Open has returned with status [SUCCESS]
[2023-01-13 04:33:28.579081 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] successfully opened at localhost, handle: 0xc, session id: 1
[2023-01-13 04:33:28.579150 +0100][Debug ][Utility ] Opening roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293 for writing
[2023-01-13 04:33:28.579240 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] Sending an open command
[2023-01-13 04:33:28.579265 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x1348ed0 (message: kXR_open (file: /wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_async kXR_retstat ) ).
[2023-01-13 04:33:28.579362 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x1348ed0 (message: kXR_open (file: /wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_async kXR_retstat ) ) from out-queu to in-queue.
[2023-01-13 04:33:28.879774 +0100][Debug ][ExDbgMsg ] [msg: 0x900008d0] Assigned MsgHandler: 0x1348ed0.
[2023-01-13 04:33:28.879818 +0100][Debug ][ExDbgMsg ] [handler: 0x1348ed0] Removed MsgHandler: 0x1348ed0 from the in-queue.
[2023-01-13 04:33:28.880169 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Retry at server MsgHandler: 0x1348ed0 (message: kXR_open (file: /wlcg/x?org.dcache.uuid=ed8f0625-ece4-4091-bb66-8f91e81a3ab8?org.dcache.door=dcache.farm.particle.cz:1094&org.dcache.xrootd.client=vokac.8163@ui.farm.particle.cz&oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_async kXR_retstat ) ).
[2023-01-13 04:33:28.880262 +0100][Debug ][PostMaster ] Creating new channel to: roots://dcache.farm.particle.cz:33136
[2023-01-13 04:33:28.880442 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:33136] Stream parameters: Network Stack: IPAuto, Connection Window: 120, ConnectionRetry: 5, Stream Error Window: 1800
[2023-01-13 04:33:28.880464 +0100][Debug ][TaskMgr ] Registering task: "TickGeneratorTask for: roots://dcache.farm.particle.cz:33136" to be run at: [2023-01-13 04:33:43 +0100]
[2023-01-13 04:33:28.880623 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:33136] Found 2 address(es): [::ffff:147.231.25.80]:33136, [2001:718:401:6025:1:0:1:80]:33136
[2023-01-13 04:33:28.880652 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] Attempting connection to [2001:718:401:6025:1:0:1:80]:33136
[2023-01-13 04:33:28.880738 +0100][Debug ][Poller ] Adding socket 0x80001bd0 to the poller
[2023-01-13 04:33:28.880997 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] Async connection call returned
[2023-01-13 04:33:28.881026 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:33136.0] Sending out the initial hand shake + kXR_protocol
[2023-01-13 04:33:28.919985 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:33136.0] Got the server hand shake response (type: server [], protocol version 500)
[2023-01-13 04:33:28.921049 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:33136.0] kXR_protocol successful (type: server [], protocol version 500)
[2023-01-13 04:33:28.921138 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:33136.0] Sending out kXR_login request, username: vokac, cgi: ?xrd.cc=cz&xrd.tz=1&xrd.appname=xrdcp&xrd.info=&xrd.hostname=ui1.farm.particle.cz&xrd.rn=v5.5.1, dual-stack: true, private IPv4: false, private IPv6: false
[2023-01-13 04:33:28.921145 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.944946 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.955179 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] TLS hand-shake exchange.
[2023-01-13 04:33:28.955235 +0100][Info ][AsyncSock ] [dcache.farm.particle.cz:33136.0] TLS hand-shake done.
[2023-01-13 04:33:28.965597 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:33136.0] Logged in, session: cedfef76601abd8ad7a27f79d5d0ebab
[2023-01-13 04:33:28.965630 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:33136] Stream 0 connected (IPv6).
[2023-01-13 04:33:28.965686 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Moving MsgHandler: 0x1348ed0 (message: kXR_open (file: /wlcg/x?org.dcache.uuid=ed8f0625-ece4-4091-bb66-8f91e81a3ab8?org.dcache.door=dcache.farm.particle.cz:1094&org.dcache.xrootd.client=vokac.8163@ui.farm.particle.cz&oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_async kXR_retstat ) ) from out-queu to in-queue.
[2023-01-13 04:33:28.970605 +0100][Debug ][ExDbgMsg ] [msg: 0x900009c0] Assigned MsgHandler: 0x1348ed0.
[2023-01-13 04:33:28.970643 +0100][Debug ][ExDbgMsg ] [handler: 0x1348ed0] Removed MsgHandler: 0x1348ed0 from the in-queue.
[2023-01-13 04:33:28.970756 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Calling MsgHandler: 0x1348ed0 (message: kXR_open (file: /wlcg/x?org.dcache.uuid=ed8f0625-ece4-4091-bb66-8f91e81a3ab8?org.dcache.door=dcache.farm.particle.cz:1094&org.dcache.xrootd.client=vokac.8163@ui.farm.particle.cz&oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_async kXR_retstat ) ) with status: [SUCCESS] .
[2023-01-13 04:33:28.970838 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] Open has returned with status [SUCCESS]
[2023-01-13 04:33:28.970854 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] successfully opened at dcache.farm.particle.cz:33136, handle: 0x0, session id: 1
[2023-01-13 04:33:28.970892 +0100][Debug ][XRootD ] Redirect trace-back:
[2023-01-13 04:33:28.970892 +0100][Debug ][XRootD ] 0. Redirected from: roots://dcache.farm.particle.cz:1094//wlcg/x to: roots://dcache.farm.particle.cz:33136/
[2023-01-13 04:33:28.970903 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Destroying MsgHandler: 0x1348ed0.
[2023-01-13 04:33:28.971024 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] Sending a read command for handle 0xc to localhost
[2023-01-13 04:33:28.971776 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] Sending a write command for handle 0x0 to dcache.farm.particle.cz:33136
[2023-01-13 04:33:28.971798 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] MsgHandler created: 0x134a0a0 (message: kXR_write (handle: 0x00000000, offset: 0, size: 670293) ).
[2023-01-13 04:33:29.000855 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Moving MsgHandler: 0x134a0a0 (message: kXR_write (handle: 0x00000000, offset: 0, size: 670293) ) from out-queu to in-queue.
[2023-01-13 04:33:29.022614 +0100][Debug ][ExDbgMsg ] [msg: 0x900009c0] Assigned MsgHandler: 0x134a0a0.
[2023-01-13 04:33:29.022645 +0100][Debug ][ExDbgMsg ] [handler: 0x134a0a0] Removed MsgHandler: 0x134a0a0 from the in-queue.
[2023-01-13 04:33:29.022730 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Calling MsgHandler: 0x134a0a0 (message: kXR_write (handle: 0x00000000, offset: 0, size: 670293) ) with status: [SUCCESS] .
[2023-01-13 04:33:29.022778 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Destroying MsgHandler: 0x134a0a0.
[2023-01-13 04:33:29.022923 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] Sending a close command for handle 0x0 to dcache.farm.particle.cz:33136
[2023-01-13 04:33:29.022949 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] MsgHandler created: 0x134a0a0 (message: kXR_close (handle: 0x00000000) ).
[2023-01-13 04:33:29.023081 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Moving MsgHandler: 0x134a0a0 (message: kXR_close (handle: 0x00000000) ) from out-queu to in-queue.
[2023-01-13 04:33:29.177905 +0100][Debug ][ExDbgMsg ] [msg: 0x900009c0] Assigned MsgHandler: 0x134a0a0.
[2023-01-13 04:33:29.177936 +0100][Debug ][ExDbgMsg ] [handler: 0x134a0a0] Removed MsgHandler: 0x134a0a0 from the in-queue.
[2023-01-13 04:33:29.178027 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Calling MsgHandler: 0x134a0a0 (message: kXR_close (handle: 0x00000000) ) with status: [SUCCESS] .
[2023-01-13 04:33:29.178063 +0100][Debug ][File ] [0x1348370@roots://dcache.farm.particle.cz:1094//wlcg/x?oss.asize=670293&xrdcl.requuid=46e47e5f-413d-4fcd-bc17-4ce84c0eb1c0] Close returned from dcache.farm.particle.cz:33136 with: [SUCCESS]
[2023-01-13 04:33:29.178090 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:33136] Destroying MsgHandler: 0x134a0a0.
[2023-01-13 04:33:29.178211 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] Sending a close command for handle 0xc to localhost
[2023-01-13 04:33:29.178321 +0100][Debug ][File ] [0x1347980@file://localhost/etc/services?xrdcl.requuid=52310b33-7845-4394-a804-e2193719db89] Close returned from localhost with: [SUCCESS]
[2023-01-13 04:33:29.178810 +0100][Debug ][JobMgr ] Stopping the job manager...
[2023-01-13 04:33:29.179098 +0100][Debug ][JobMgr ] Job manager stopped
[2023-01-13 04:33:29.179106 +0100][Debug ][TaskMgr ] Stopping the task manager...
[2023-01-13 04:33:29.179226 +0100][Debug ][TaskMgr ] Task manager stopped
[2023-01-13 04:33:29.179231 +0100][Debug ][Poller ] Stopping the poller...
[2023-01-13 04:33:29.179291 +0100][Debug ][TaskMgr ] Requesting unregistration of: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094"
[2023-01-13 04:33:29.179301 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
[2023-01-13 04:33:29.179309 +0100][Debug ][Poller ] <[2001:718:401:6025:10::1]:35314><--><[2001:718:401:6025:1:0:1:80]:1094> Removing socket from the poller
[2023-01-13 04:33:29.179457 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Destroying stream
[2023-01-13 04:33:29.179466 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
[2023-01-13 04:33:29.179485 +0100][Debug ][TaskMgr ] Requesting unregistration of: "TickGeneratorTask for: roots://dcache.farm.particle.cz:33136"
[2023-01-13 04:33:29.179489 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] Closing the socket
[2023-01-13 04:33:29.179493 +0100][Debug ][Poller ] <[2001:718:401:6025:10::1]:37854><--><[2001:718:401:6025:1:0:1:80]:33136> Removing socket from the poller
[2023-01-13 04:33:29.179572 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:33136] Destroying stream
[2023-01-13 04:33:29.179579 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:33136.0] Closing the socket
and client output for the failed gfal-copy transfer:
[2023-01-13 04:33:33.303041 +0100][Debug ][Utility ] Initializing xrootd client version: v5.5.1
[2023-01-13 04:33:33.303239 +0100][Debug ][Utility ] Unable to process user config file: [ERROR] OS Error: no such file or directory
[2023-01-13 04:33:33.303333 +0100][Debug ][PlugInMgr ] Initializing plug-in manager...
[2023-01-13 04:33:33.303339 +0100][Debug ][PlugInMgr ] No default plug-in, loading plug-in configs...
[2023-01-13 04:33:33.303344 +0100][Debug ][PlugInMgr ] Processing plug-in definitions in /etc/xrootd/client.plugins.d...
[2023-01-13 04:33:33.303381 +0100][Debug ][PlugInMgr ] Trying to disable plug-in for '*'
[2023-01-13 04:33:33.303388 +0100][Debug ][PlugInMgr ] Processing plug-in definitions in /home/vokac/.xrootd/client.plugins.d...
[2023-01-13 04:33:33.303395 +0100][Debug ][PlugInMgr ] Unable to process directory /home/vokac/.xrootd/client.plugins.d: [ERROR] OS Error: no such file or directory
[2023-01-13 04:33:33.312513 +0100][Debug ][Utility ] Env: trying to get a non-existent string entry: pollerpreference
[2023-01-13 04:33:33.312527 +0100][Debug ][Poller ] Available pollers: built-in
[2023-01-13 04:33:33.312532 +0100][Debug ][Poller ] Attempting to create a poller according to preference: built-in
[2023-01-13 04:33:33.312536 +0100][Debug ][Poller ] Creating poller: built-in
[2023-01-13 04:33:33.312542 +0100][Debug ][Poller ] Creating and starting the built-in poller...
[2023-01-13 04:33:33.312623 +0100][Debug ][Poller ] Using 1 poller threads
[2023-01-13 04:33:33.312632 +0100][Debug ][TaskMgr ] Starting the task manager...
[2023-01-13 04:33:33.312655 +0100][Debug ][TaskMgr ] Task manager started
[2023-01-13 04:33:33.312660 +0100][Debug ][JobMgr ] Starting the job manager...
[2023-01-13 04:33:33.312708 +0100][Debug ][JobMgr ] Job manager started, 3 workers
[2023-01-13 04:33:33.312714 +0100][Debug ][TaskMgr ] Registering task: "FileTimer task" to be run at: [2023-01-13 04:33:33 +0100]
[2023-01-13 04:33:33.312728 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x380031b0 (message: kXR_stat (path: //wlcg/x, flags: none) ).
[2023-01-13 04:33:33.312758 +0100][Debug ][PostMaster ] Creating new channel to: roots://dcache.farm.particle.cz:1094
[2023-01-13 04:33:33.312828 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream parameters: Network Stack: IPAuto, Connection Window: 120, ConnectionRetry: 5, Stream Error Window: 1800
[2023-01-13 04:33:33.312842 +0100][Debug ][TaskMgr ] Registering task: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094" to be run at: [2023-01-13 04:33:48 +0100]
[2023-01-13 04:33:33.312894 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Found 2 address(es): [::ffff:147.231.25.80]:1094, [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:33.312908 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Attempting connection to [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:33.312953 +0100][Debug ][Poller ] Adding socket 0x38003980 to the poller
[2023-01-13 04:33:33.313390 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Async connection call returned
[2023-01-13 04:33:33.313450 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out the initial hand shake + kXR_protocol
[2023-01-13 04:33:33.318063 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Got the server hand shake response (type: manager [], protocol version 500)
[2023-01-13 04:33:33.335428 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] kXR_protocol successful (type: manager [], protocol version 500)
[2023-01-13 04:33:33.335622 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out kXR_login request, username: vokac, cgi: ?xrd.cc=cz&xrd.tz=1&xrd.appname=python2.7&xrd.info=&xrd.hostname=ui1.farm.particle.cz&xrd.rn=v5.5.1, dual-stack: true, private IPv4: false, private IPv6: false
[2023-01-13 04:33:33.335651 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.346942 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.355068 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.355123 +0100][Info ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake done.
[2023-01-13 04:33:33.360323 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Logged in, session: c62badaf52163100515a8db17d8e822e
[2023-01-13 04:33:33.360337 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authentication is required: &P=ztn,0:4096:&P=gsi,v:10400,c:ssl,ca:dec71a0b&P=unix
[2023-01-13 04:33:33.360342 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending authentication data
[2023-01-13 04:33:33.360796 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Trying to authenticate using ztn
[2023-01-13 04:33:33.369652 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authenticated with ztn.
[2023-01-13 04:33:33.369671 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream 0 connected (IPv6).
[2023-01-13 04:33:33.369678 +0100][Debug ][Utility ] Monitor library name not set. No monitoring
[2023-01-13 04:33:33.369714 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x380031b0 (message: kXR_stat (path: //wlcg/x, flags: none) ) from out-queu to in-queue.
[2023-01-13 04:33:33.420498 +0100][Debug ][ExDbgMsg ] [msg: 0x2c0238d0] Assigned MsgHandler: 0x380031b0.
[2023-01-13 04:33:33.420527 +0100][Debug ][ExDbgMsg ] [handler: 0x380031b0] Removed MsgHandler: 0x380031b0 from the in-queue.
[2023-01-13 04:33:33.420609 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Calling MsgHandler: 0x380031b0 (message: kXR_stat (path: //wlcg/x, flags: none) ) with status: [SUCCESS] .
[2023-01-13 04:33:33.420659 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Destroying MsgHandler: 0x380031b0.
[2023-01-13 04:33:33.421419 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x3800c660 (message: kXR_stat (path: //wlcg/x, flags: none) ).
[2023-01-13 04:33:33.421527 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x3800c660 (message: kXR_stat (path: //wlcg/x, flags: none) ) from out-queu to in-queue.
[2023-01-13 04:33:33.449784 +0100][Debug ][ExDbgMsg ] [msg: 0x2c0008d0] Assigned MsgHandler: 0x3800c660.
[2023-01-13 04:33:33.449808 +0100][Debug ][ExDbgMsg ] [handler: 0x3800c660] Removed MsgHandler: 0x3800c660 from the in-queue.
[2023-01-13 04:33:33.449841 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Calling MsgHandler: 0x3800c660 (message: kXR_stat (path: //wlcg/x, flags: none) ) with status: [SUCCESS] .
[2023-01-13 04:33:33.449858 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Destroying MsgHandler: 0x3800c660.
Copying 670293 bytes file:///etc/services => roots://dcache.farm.particle.cz//wlcg/x
[2023-01-13 04:33:33.450314 +0100][Debug ][Utility ] CopyProcess: 2 jobs to prepare
[2023-01-13 04:33:33.450367 +0100][Debug ][Utility ] Creating a third party fall back copy job, from file://localhost///etc/services?xrdcl.intent=tpc to roots://dcache.farm.particle.cz:1094///wlcg/x?xrdcl.intent=tpc
[2023-01-13 04:33:33.450382 +0100][Debug ][JobMgr ] Starting the job manager...
[2023-01-13 04:33:33.450414 +0100][Debug ][JobMgr ] Job manager started, 1 workers
[2023-01-13 04:33:33.450618 +0100][Debug ][Utility ] Creating a third party copy job, from file://localhost///etc/services?xrdcl.intent=tpc to roots://dcache.farm.particle.cz:1094///wlcg/x?xrdcl.intent=tpc
[2023-01-13 04:33:33.450632 +0100][Debug ][Utility ] TPC is not supported, falling back to streaming mode.
[2023-01-13 04:33:33.450664 +0100][Debug ][Utility ] Creating a classic copy job, from file://localhost///etc/services?xrdcl.intent=tpc to roots://dcache.farm.particle.cz:1094///wlcg/x?xrdcl.intent=tpc
[2023-01-13 04:33:33.450704 +0100][Debug ][Utility ] Opening file://localhost///etc/services?xrdcl.intent=tpc for reading
[2023-01-13 04:33:33.450770 +0100][Debug ][File ] [0x1c0010f0@file://localhost///etc/services?xrdcl.intent=tpc&xrdcl.requuid=5ceb0880-f768-409d-be73-f76a973faec4] Sending an open command
[2023-01-13 04:33:33.450861 +0100][Debug ][File ] [0x1c0010f0@file://localhost///etc/services?xrdcl.intent=tpc&xrdcl.requuid=5ceb0880-f768-409d-be73-f76a973faec4] Open has returned with status [SUCCESS]
[2023-01-13 04:33:33.450868 +0100][Debug ][File ] [0x1c0010f0@file://localhost///etc/services?xrdcl.intent=tpc&xrdcl.requuid=5ceb0880-f768-409d-be73-f76a973faec4] successfully opened at localhost, handle: 0x10, session id: 1
[2023-01-13 04:33:33.450892 +0100][Debug ][Utility ] Opening roots://dcache.farm.particle.cz:1094///wlcg/x?oss.asize=670293&xrdcl.intent=tpc for writing
[2023-01-13 04:33:33.450939 +0100][Debug ][File ] [0x1c001e60@roots://dcache.farm.particle.cz:1094///wlcg/x?oss.asize=670293&xrdcl.intent=tpc&xrdcl.requuid=e86a50b7-8e62-43a4-92c6-133d5bddee4e] Sending an open command
[2023-01-13 04:33:33.450955 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x1c002ca0 (message: kXR_open (file: //wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_posc kXR_async kXR_retstat ) ).
[2023-01-13 04:33:33.450973 +0100][Debug ][PostMaster ] Creating new channel to: roots://dcache.farm.particle.cz:1094?xrdcl.intent=tpc
[2023-01-13 04:33:33.451071 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream parameters: Network Stack: IPAuto, Connection Window: 120, ConnectionRetry: 5, Stream Error Window: 1800
[2023-01-13 04:33:33.451082 +0100][Debug ][TaskMgr ] Registering task: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094?xrdcl.intent=tpc" to be run at: [2023-01-13 04:33:48 +0100]
[2023-01-13 04:33:33.451128 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Found 2 address(es): [::ffff:147.231.25.80]:1094, [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:33.451140 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Attempting connection to [2001:718:401:6025:1:0:1:80]:1094
[2023-01-13 04:33:33.451185 +0100][Debug ][Poller ] Adding socket 0x1c003250 to the poller
[2023-01-13 04:33:33.451482 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Async connection call returned
[2023-01-13 04:33:33.451492 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out the initial hand shake + kXR_protocol
[2023-01-13 04:33:33.453854 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Got the server hand shake response (type: manager [], protocol version 500)
[2023-01-13 04:33:33.457854 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] kXR_protocol successful (type: manager [], protocol version 500)
[2023-01-13 04:33:33.457905 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending out kXR_login request, username: vokac, cgi: ?xrd.cc=cz&xrd.tz=1&xrd.appname=python2.7&xrd.info=&xrd.hostname=ui1.farm.particle.cz&xrd.rn=v5.5.1, dual-stack: true, private IPv4: false, private IPv6: false
[2023-01-13 04:33:33.457911 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.466670 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.475394 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake exchange.
[2023-01-13 04:33:33.475513 +0100][Info ][AsyncSock ] [dcache.farm.particle.cz:1094.0] TLS hand-shake done.
[2023-01-13 04:33:33.481617 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Logged in, session: 22bf78d18a1fb01a37d70bd9a976c4a5
[2023-01-13 04:33:33.481644 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authentication is required: &P=ztn,0:4096:&P=gsi,v:10400,c:ssl,ca:dec71a0b&P=unix
[2023-01-13 04:33:33.481653 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Sending authentication data
[2023-01-13 04:33:33.481671 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Trying to authenticate using ztn
[2023-01-13 04:33:33.486754 +0100][Debug ][XRootDTransport ] [dcache.farm.particle.cz:1094.0] Authenticated with ztn.
[2023-01-13 04:33:33.486787 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Stream 0 connected (IPv6).
[2023-01-13 04:33:33.486838 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x1c002ca0 (message: kXR_open (file: //wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_posc kXR_async kXR_retstat ) ) from out-queu to in-queue.
[2023-01-13 04:33:35.892181 +0100][Debug ][ExDbgMsg ] [msg: 0x2c037d90] Assigned MsgHandler: 0x1c002ca0.
[2023-01-13 04:33:35.892253 +0100][Debug ][ExDbgMsg ] [handler: 0x1c002ca0] Removed MsgHandler: 0x1c002ca0 from the in-queue.
[2023-01-13 04:33:35.892401 +0100][Debug ][XRootD ] [dcache.farm.particle.cz:1094] Handling error while processing kXR_open (file: //wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_posc kXR_async kXR_retstat ): [ERROR] Error response: permission denied.
[2023-01-13 04:33:35.892483 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Calling MsgHandler: 0x1c002ca0 (message: kXR_open (file: //wlcg/x?oss.asize=670293, mode: 0644, flags: kXR_delete kXR_open_updt kXR_posc kXR_async kXR_retstat ) ) with status: [ERROR] Error response: permission denied.
[2023-01-13 04:33:35.892525 +0100][Debug ][File ] [0x1c001e60@roots://dcache.farm.particle.cz:1094///wlcg/x?oss.asize=670293&xrdcl.intent=tpc&xrdcl.requuid=e86a50b7-8e62-43a4-92c6-133d5bddee4e] Open has returned with status [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/7d39d06b-e4c1-42c5-a544-46d96cc02ca3/x
[2023-01-13 04:33:35.892540 +0100][Debug ][File ] [0x1c001e60@roots://dcache.farm.particle.cz:1094///wlcg/x?oss.asize=670293&xrdcl.intent=tpc&xrdcl.requuid=e86a50b7-8e62-43a4-92c6-133d5bddee4e] Error while opening at dcache.farm.particle.cz:1094: [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/7d39d06b-e4c1-42c5-a544-46d96cc02ca3/x
[2023-01-13 04:33:35.892590 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Destroying MsgHandler: 0x1c002ca0.
[2023-01-13 04:33:35.892647 +0100][Debug ][File ] [0x1c0010f0@file://localhost///etc/services?xrdcl.intent=tpc&xrdcl.requuid=5ceb0880-f768-409d-be73-f76a973faec4] Sending a close command for handle 0x10 to localhost
[2023-01-13 04:33:35.892691 +0100][Debug ][File ] [0x1c0010f0@file://localhost///etc/services?xrdcl.intent=tpc&xrdcl.requuid=5ceb0880-f768-409d-be73-f76a973faec4] Close returned from localhost with: [SUCCESS]
[2023-01-13 04:33:35.892915 +0100][Debug ][JobMgr ] Stopping the job manager...
[2023-01-13 04:33:35.893142 +0100][Debug ][JobMgr ] Job manager stopped
[2023-01-13 04:33:35.893403 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] MsgHandler created: 0x38004800 (message: kXR_rm (path: //wlcg/x) ).
[2023-01-13 04:33:35.893509 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Moving MsgHandler: 0x38004800 (message: kXR_rm (path: //wlcg/x) ) from out-queu to in-queue.
[2023-01-13 04:33:35.924601 +0100][Debug ][ExDbgMsg ] [msg: 0x2c0237e0] Assigned MsgHandler: 0x38004800.
[2023-01-13 04:33:35.924633 +0100][Debug ][ExDbgMsg ] [handler: 0x38004800] Removed MsgHandler: 0x38004800 from the in-queue.
[2023-01-13 04:33:35.924732 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Calling MsgHandler: 0x38004800 (message: kXR_rm (path: //wlcg/x) ) with status: [SUCCESS] .
[2023-01-13 04:33:35.924777 +0100][Debug ][ExDbgMsg ] [dcache.farm.particle.cz:1094] Destroying MsgHandler: 0x38004800.
gfal-copy error: 13 (Permission denied) - Error on XrdCl::CopyProcess::Run(): [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/7d39d06b-e4c1-42c5-a544-46d96cc02ca3/x (destination)
[2023-01-13 04:33:35.934634 +0100][Debug ][JobMgr ] Stopping the job manager...
[2023-01-13 04:33:35.934866 +0100][Debug ][JobMgr ] Job manager stopped
[2023-01-13 04:33:35.934876 +0100][Debug ][TaskMgr ] Stopping the task manager...
[2023-01-13 04:33:35.934969 +0100][Debug ][TaskMgr ] Task manager stopped
[2023-01-13 04:33:35.934976 +0100][Debug ][Poller ] Stopping the poller...
[2023-01-13 04:33:35.935041 +0100][Debug ][TaskMgr ] Requesting unregistration of: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094"
[2023-01-13 04:33:35.935052 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
[2023-01-13 04:33:35.935059 +0100][Debug ][Poller ] <[2001:718:401:6025:10::1]:35318><--><[2001:718:401:6025:1:0:1:80]:1094> Removing socket from the poller
[2023-01-13 04:33:35.935179 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Destroying stream
[2023-01-13 04:33:35.935186 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
[2023-01-13 04:33:35.935196 +0100][Debug ][TaskMgr ] Requesting unregistration of: "TickGeneratorTask for: roots://dcache.farm.particle.cz:1094?xrdcl.intent=tpc"
[2023-01-13 04:33:35.935200 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
[2023-01-13 04:33:35.935204 +0100][Debug ][Poller ] <[2001:718:401:6025:10::1]:35320><--><[2001:718:401:6025:1:0:1:80]:1094> Removing socket from the poller
[2023-01-13 04:33:35.935274 +0100][Debug ][PostMaster ] [dcache.farm.particle.cz:1094] Destroying stream
[2023-01-13 04:33:35.935279 +0100][Debug ][AsyncSock ] [dcache.farm.particle.cz:1094.0] Closing the socket
There seem to be two bugs here:

1. In Gplazma2LoginStrategy, the authorisation to upload into the dcache.upload-directory directory is only added if the user has a non-/ root directory. I think this is wrong, but it would need some careful checking.

2. In LoginReply, the getRestriction method returns a concatenation of all Restriction objects in the login attributes, where each restriction may veto the request. This doesn't work with how Gplazma2LoginStrategy is intended to work: it injects permission to upload into the dcache.upload-directory directory by adding an additional restriction to the login attributes.

Both changes are, theoretically, easy to fix.

To fix 1., I think the algorithm should be that, rather than checking whether the user has a non-/ root directory, it should check whether the login has any restrictions. If there are any restrictions, then it should (somehow) add permission to upload into the dcache.upload-directory directory. This requires some careful checking.

To fix 2., one option would be to combine restrictions so that a request needs at least one restriction to authorise it; a rough sketch of what that could look like follows below. However, such a change requires very, very careful checking.
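As an illustration of the "at least one restriction authorises the request" idea, here is a minimal, self-contained sketch. The Restriction interface below is a hypothetical stand-in for dCache's type, reduced to a single yes/no check; only the combination logic is meant to carry over.

import java.util.List;

// Hypothetical stand-in for dCache's Restriction, reduced to one question:
// "is this activity on this path blocked?"
interface Restriction {
    boolean isRestricted(String activity, String path);
}

// Composite that denies an operation only if every inner restriction denies it,
// i.e. a single permissive inner restriction is enough to authorise the request.
final class AnyOfRestriction implements Restriction {
    private final List<Restriction> inner;

    AnyOfRestriction(List<Restriction> inner) {
        this.inner = List.copyOf(inner);
    }

    @Override
    public boolean isRestricted(String activity, String path) {
        // With zero inner restrictions, nothing authorises the request, so it stays restricted.
        return inner.stream().allMatch(r -> r.isRestricted(activity, path));
    }
}

This is the opposite of the veto behaviour described above, where any single restriction may block the request.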
@paulmillar I have just one question here: since this doesn't seem to affect WebDAV, are we still sure we want to repair this in gPlazma rather than trying a solution specific to the xroot door? That would limit the scope of the change and perhaps avoid introducing other possible regressions.
As I understood it, this problem is visible because of how xrootd implements kXR_POSC, (re-)using the upload-directory concept from SRM. So, you should see the same problem if you do SRM uploads with HTTP and tokens ... except nobody does that ;-)
So, one option might be to change how the xrootd door implements kXR_POSC, as an alternative approach.
In other words, put the upload area in a directory that the user has permissions on?
Let me go back to @DmitryLitvintsev 's comment.
The reason his solution did not work is that the root directory passed to the namespace from the xroot door is the root defined for the door.
In the case of POSC, we could define that root instead in terms of the user's root.
Then, with dcache.upload-directory=.upload, I think it should work.
Correction. That should already be happening.
I need to check another place in the code that may be failing, because Dmitry's solution seems like it should work.
I also noticed this from the logs.
In the case of the xrdcp client, the path is expressed with a single slash:
kXR_open (file: /wlcg/x?
but with gfal:
kXR_open (file: //wlcg/x
@vokac Petr, are you sure you set
dcache.upload-directory=.upload
and not
dcache.upload-directory=/.upload
The returned error looks strange:
denied activity UPLOAD on /.upload/0/...
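For what it's worth, the leading slash makes exactly this difference when a path is resolved against a base directory. A tiny illustration with plain java.nio (not dCache code; the /wlcg user root is just the example from this thread):

import java.nio.file.Path;

public class UploadDirResolution {
    public static void main(String[] args) {
        Path userRoot = Path.of("/wlcg");

        // Relative setting: resolves under the user root.
        System.out.println(userRoot.resolve(".upload"));   // prints /wlcg/.upload

        // Absolute setting: the base is ignored entirely.
        System.out.println(userRoot.resolve("/.upload"));  // prints /.upload
    }
}

So a configured /.upload (with the leading slash) would never end up under /wlcg, whichever component does the resolution.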
This is my testbed centralDomain configuration:
[centralDomain]
dcache.broker.scheme = core
dcache.upload-directory=.upload
[centralDomain/zookeeper]
[centralDomain/pnfsmanager]
[centralDomain/cleaner]
[centralDomain/poolmanager]
[centralDomain/spacemanager]
[centralDomain/pinmanager]
[centralDomain/gplazma]
gplazma.oidc.provider!wlcg = https://wlcg.cloud.cnaf.infn.it/ -profile=wlcg -prefix=/wlcg -authz-id=gid:1198
gplazma.oidc.provider!altas = https://atlas-auth.web.cern.ch/ -profile=wlcg -prefix=/atlas -authz-id=username:atlas_oidc_test
gplazma.oidc.provider!cms = https://cms-auth.web.cern.ch/ -profile=wlcg -prefix=/cms
gplazma.oidc.provider!dune = https://cilogon.org/dune -profile=wlcg -prefix=/dune
gplazma.oidc.audience-targets = https://wlcg.cern.ch/jwt/v1/any https://dcache.farm.particle.cz https://dcache.farm.particle.cz:2880 roots://dcache.farm.particle.cz roots://dcache.farm.particle.cz:1094
gplazma.argus.endpoint = https://argus1.farm.particle.cz:8154/authz
For the xroot protocol I use the following in /etc/dcache.conf:
xrootd.authz.write-paths = /
dcache.xrootd.security.level=4
dcache.xrootd.security.force-signing=true
plus just a simple [doorsDomain/xrootd] in the layout file.
I'm using the default gfal-copy configuration, which also includes
# Normalize the path (this is, turn root://host/path into root://host//path)
NORMALIZE_PATH=true
When I call gfal-copy with -D 'XROOTD PLUGIN:NORMALIZE_PATH=false', then there is just one slash in the output, as in the native xrdcopy case. The error is still the same:
$ gfal-copy -f -D 'XROOTD PLUGIN:NORMALIZE_PATH=false' /etc/services roots://dcache.farm.particle.cz//wlcg/x
Copying file:///etc/services [FAILED] after 0s
gfal-copy error: 13 (Permission denied) - Error on XrdCl::CopyProcess::Run(): [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/c55ac531-5e01-463c-b327-08030115aa81/x (destination)
You should be able to reproduce this issue with your WLCG IAM account (but if you try to use our testbed, it is necessary to add the Let's Encrypt CA to the trusted /etc/grid-security/certificates).
Thanks @vokac. This bears more investigation, then.
@paulmillar I am curious as to why you suspect gPlazma when the login strategy in the xroot door is the same regardless of the client. And I assume the token is identical.
To me (I may be wrong) this seems suspicious:
... denied activity UPLOAD on /.upload/0/...
the ".upload" has to be relative to root + home path. Ain't it? So I would have expected above to be "/wlcg/.upload" in the above (unless of course the print just drops the root path, but still "/" in front indicates that it seem to be pre-pended, but why not "/wlcg" then? This is rather confusing.
Yes, and my question is why this does not happen with the xrdcopy client.
This is why I thought the double slash might be significant. Somehow the user root is being replaced by '/'
Just for completeness, the server-side log for $ gfal-copy --just-copy -D 'XROOTD PLUGIN:NORMALIZE_PATH=false' /etc/services roots://dcache.farm.particle.cz//wlcg/x
Isn't it a bit weird that gfal-copy with a local file starts with "Creating a third party fall back copy job, from file://localhost///etc/services?xrdcl.intent=tpc to roots://dcache.farm.particle.cz:1094///wlcg/x?xrdcl.intent=tpc"?
I noticed that too. Question for Mihai?
I'll have a look at the log, thanks.
@vokac Petr, could you also please attach the server log for a successful run with xrdcopy? It would be helpful to compare ...
So I am wondering about this:
,MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}],RootDirectory[/],HomeDirectory[/]]
If the root directory is indeed '/', as given by xroot to the namespace, that explains the '/.upload/'.
But again, why does this only happen with gfal?
Hi @vokac,
You should be able to reproduce the gfal-copy problem with xrdcopy by enabling the "persist on successful close" semantics for the transfer. You can do this by adding the -P or --posc command-line flag to your xrdcopy command.
Cheers, Paul.
why does this only happen with gfal?
I believe this is because gfal-copy is enabling POSC semantics by default while xrdcopy does not. Enabling POSC should reproduce the problem with xrdcopy.
Ah, I see now Petr's command line does not have -P.
[arossi@fndcatemp1 xroot]$ xrdcp5x -P data/data_1b xroots://fndcatemp2.fnal.gov:1094//pnfs/fs/usr/fermilab/users/arossi/volatile/data-suffix
[1B/1B][100%][==================================================][1B/s]
works for me with my token.
But I need to see what the gPlazma restrictions look like.
It also depends on what the token looks like.
If I've understood it correctly, the problem comes from the Restriction, which is added when the token contains explicit authorisation (a.k.a. "capabilities") statements. These are present in the "scope" claim.
I'm guessing Petr has configured his dCache to have the prefix /wlcg for this OP, in which case his token's "scope" claim should include terms like storage.read:/ and storage.modify:/.
Al, if your token doesn't include these terms (or something similar) in the "scope" claim then you won't be able to reproduce the problem.
Yes, I realize that.
I am trying to make getting my token a little easier (:smile:) I'll check it when I have re-installed the httokencode client.
[arossi@fndcatemp1 ~]$ httokendecode
{
"wlcg.ver": "1.0",
"aud": "https://wlcg.cern.ch/jwt/v1/any",
"sub": "0444f372-52e7-468d-aae4-911444898dbe",
"nbf": 1673643889,
"scope": "storage.create:/fermigrid/jobsub/jobs storage.create:/fermilab/users/arossi compute.create compute.read compute.cancel compute.modify storage.read:/fermilab/users/arossi",
"iss": "https://cilogon.org/fermilab",
"exp": 1673654694,
"iat": 1673643894,
"wlcg.groups": [
"/fermilab"
],
"jti": "https://cilogon.org/oauth2/770093f418b0b0837aa22baa6e6a2f6a?type=accessToken&ts=1673643894785&version=v2.0&lifetime=10800000"
}
I can confirm that, using POSC, xrdcopy fails with the same error:
$ xrdcopy -P /etc/services roots://dcache.farm.particle.cz//wlcg/x
[0B/0B][100%][==================================================][0B/s]
Run: [ERROR] Server responded with an error: [3010] Restriction MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, MANAGE, UPLOAD, DELETE, READ_METADATA, UPDATE_METADATA] on /wlcg}] denied activity UPLOAD on /.upload/0/626ad6f6-7e36-4971-b7aa-0b7b4a1ce399/x (destination)
Yes, I'm using capabilities -> the following token, for storage configured with gplazma.oidc.provider!wlcg = https://wlcg.cloud.cnaf.infn.it/ -profile=wlcg -prefix=/wlcg:
{
"wlcg.ver": "1.0",
"sub": "819bc496-e4cc-4a5d-832b-f349604376bd",
"aud": "https://dcache.farm.particle.cz",
"nbf": 1673653607,
"scope": "storage.create:/ storage.read:/ storage.modify:/",
"iss": "https://wlcg.cloud.cnaf.infn.it/",
"exp": 1673657207,
"iat": 1673653607,
"jti": "c857d1d8-9ab5-4936-aee9-af31e1b7d7ff",
"client_id": "ca0448b9-d897-427f-9271-a0c9389f43b1"
}
OK, this makes a bit more sense now.
I will be out through Monday, but as soon as I am back on Tuesday I will try to address this issue. I think it needs to happen in xroot, not gPlazma. It is just a matter of correctly computing the user home (I believe).
Cheers, Al
Hi Petr,
One other thing we could check is whether the upload works (with POSC enabled) if the token does not contain any capabilities (i.e., no "storage.*" statements in the "scope" claim) but has group-membership information.
If the problem is as I imagine, then it should be limited to POSC support in xroot when uploading with a token that has explicit authorisation (i.e., capabilities). Changing any one of these (not using POSC, using a different protocol, using a token without capabilities) should allow the client to upload.
Cheers, Paul.
With wlcg.groups everything works, even with POSC enabled:
$ xrdfs roots://dcache.farm.particle.cz rm /wlcg/x
$ xrdcopy /etc/services roots://dcache.farm.particle.cz//wlcg/x
[654.6kB/654.6kB][100%][==================================================][654.6kB/s]
$ xrdfs roots://dcache.farm.particle.cz rm /wlcg/x
$ xrdcopy -P /etc/services roots://dcache.farm.particle.cz//wlcg/x
[654.6kB/654.6kB][100%][==================================================][654.6kB/s]
with the following token:
{
"wlcg.ver": "1.0",
"sub": "819bc496-e4cc-4a5d-832b-f349604376bd",
"aud": "https://wlcg.cern.ch/jwt/v1/any",
"nbf": 1673987327,
"scope": "wlcg.groups",
"iss": "https://wlcg.cloud.cnaf.infn.it/",
"exp": 1673990927,
"iat": 1673987327,
"jti": "2c9a3580-8198-473e-9bc8-25f962e30086",
"client_id": "ca0448b9-d897-427f-9271-a0c9389f43b1",
"wlcg.groups": [
"/wlcg",
"/wlcg/pilots",
"/wlcg/xfers"
]
}
Great, thanks for checking!
Just an update here. I have managed to emulate the problem on my testbed (after struggling a bit with the gPlazma config, thanks for the leg-up, @paulmillar ).
[arossi@fndcatemp1 ~]$ xrdcp5x -P data/data_1b roots://fndcatemp2.fnal.gov:1095//pnfs/fs/usr/fermilab/users/arossi/volatile/data-`suffix`
[0B/0B][100%][==================================================][0B/s]
Run: [ERROR] Server responded with an error: [3010] Restriction CompositeRestriction[MultiTargetedRestriction[Authorisation{allowing [LIST, DOWNLOAD, READ_METADATA] on /pnfs/fs/usr/fermilab/users/arossi}, Authorisation{allowing [MANAGE, UPLOAD, READ_METADATA, UPDATE_METADATA] on /pnfs/fs/usr/fermilab/users/arossi/volatile}], PrefixRestrict[prefixes={/pnfs/fs/usr/fermilab/users/arossi,/upload}]] denied activity UPLOAD on /pnfs/fs/usr/fermilab/users/arossi/.upload/1/1fa60fb4-e30f-4edf-9050-62b410752266/data-2023011908021674136950032400840 (destination)
where the token gives write permission on a subdir of the user root, and the upload directory is relative (.upload).
Now I can try to do what Paul suggested above (in gPlazma) and test the results.
Just in the interest of consistency, it's worth noting here that, while chatting with Al, I changed my mind somewhat on how to resolve this issue.
The approach I would suggest involves creating a new Restriction that is a composite of (zero or more) "inner" restrictions. This new Restriction allows an operation if at least one inner restriction allows it; if none of the inner restrictions allows the operation, then the operation is rejected.
The existing code that adds a Restriction (specifically, a PrefixRestriction) in Gplazma2LoginStrategy is refactored so that:

- the presence of a non-/ root login attribute adds a PrefixRestriction;
- if any Restrictions have been added, the new composite restriction combines the inner restriction with a PrefixRestriction that allows access to the upload directory.

With this new restriction, the existing PrefixRestriction may be simplified to allow only a single path. (A rough sketch of the resulting shape follows below.)
I will try to follow this guide, thanks again @paulmillar
https://rb.dcache.org/r/13855/ posted for review
I can upload files with the roots:// protocol using xrdcopy, but the transfer fails with gfal-copy (e.g. with the error shown), while XRootD works fine for both clients.