whole-tale / globus_handler

Whole Tale Globus Transfer Handler plugin

Globus transfers fail with error "host endpoint is no longer associated with an active subscription" #4

Closed by craig-willis 2 years ago

craig-willis commented 2 years ago

Background: To transfer files via Globus on behalf of the logged-in user, we set up a per-user Globus Connect Personal (GCP) endpoint, start GCP, and initiate transfers through the Globus Transfer API.

Problem: The Globus transfer handler was failing to initialize, which we resolved by downgrading globusconnectpersonal to 3.0.4 (https://github.com/whole-tale/globus_handler/issues/1). Now, when we try to initiate a transfer, it fails with the following error:

globus_sdk.services.transfer.errors.TransferAPIError: 
('POST', 'https://transfer.api.globus.org/v0.10/transfer', 'Bearer', 409, 'NotLicensedException', 
"You tried to access 'u_bovhyba76zgplavtadoup2bi3e#01142a9c-90fc-11ec-bf86-ab28bf5d96bb' 
which is hosted on\n'u_bovhyba76zgplavtadoup2bi3e#faf0aa08-90fb-11ec-bf86-ab28bf5d96bb'. 
The host endpoint (u_bovhyba76zgplavtadoup2bi3e#faf0aa08-90fb-11ec-bf86-ab28bf5d96bb) 
is\nno longer associated with an active subscription. See\nhttps://www.globus.org/subscriptions for 
details.", '8FaCMSFCm')

The file being transferred in this case is globus_metadata.json from http://dx.doi.org/doi:10.18126/M2301J.
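For reference, a minimal sketch of how the handler's transfer submission might surface this error via globus_sdk v3. The function and endpoint names here are hypothetical, not the handler's actual code; `err.code` and `err.http_status` are real attributes of `globus_sdk.TransferAPIError`.

```python
def is_inactive_subscription_error(code, http_status):
    """Return True when an error matches the 409 NotLicensedException
    ("no longer associated with an active subscription") case above."""
    return http_status == 409 and code == "NotLicensedException"


def submit_with_diagnostics(tc, source_ep, dest_ep, src_path, dst_path):
    """Hypothetical wrapper: submit a transfer and flag subscription errors.

    `tc` is assumed to be an authenticated globus_sdk.TransferClient.
    """
    import globus_sdk  # assumed available in the handler's environment

    tdata = globus_sdk.TransferData(tc, source_ep, dest_ep)
    tdata.add_item(src_path, dst_path)
    try:
        return tc.submit_transfer(tdata)
    except globus_sdk.TransferAPIError as err:
        if is_inactive_subscription_error(err.code, err.http_status):
            raise RuntimeError(
                "Host endpoint is not covered by an active subscription; "
                "see https://www.globus.org/subscriptions"
            ) from err
        raise
```

The classifier is separate from the submission wrapper so the 409/NotLicensedException case can be detected without a live Transfer API connection.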

Additional details: the Globus Client ID is 0baa7c04-1ff6-4cf5-82b3-00dd47e828d9, which apparently has sharing enabled.

$ /opt/globusconnectpersonal/globusconnect -version
Globus Connect Personal 3.0.4

Setup appears to succeed (from Girder logs):

[2022-02-18 20:47:27,536] INFO: Output from command 
('/opt/globusconnectpersonal/globusconnectpersonal', '-setup', '31926b8e-532a-4c12-984c-1eca79b62a40', 
'-dir', '/home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9'): b"Could not read the directory 
'/home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9' with configuration files.\n
Creating the directory... Done\nConfiguration directory: 
/home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9/lta\n
Contacting relay.globusonline.org:2223\nDone!\n", b''
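The setup step logged above can be sketched as a subprocess call. The helper names are hypothetical; the binary path and the "Done!" marker in stdout are taken from the logs, and the success check is a simple heuristic, not an official GCP exit protocol.

```python
import subprocess


def run_gcp_setup(setup_key, config_dir):
    """Run GCP setup with a one-time setup key, as in the Girder log above."""
    cmd = (
        "/opt/globusconnectpersonal/globusconnectpersonal",  # path from the logs
        "-setup", setup_key,
        "-dir", config_dir,
    )
    proc = subprocess.run(cmd, capture_output=True, check=True)
    return proc.stdout


def setup_succeeded(stdout):
    """Heuristic: GCP prints 'Done!' after contacting relay.globusonline.org:2223."""
    return b"Done!" in stdout
```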

GCP is running:

$ ps -ef | grep globus
girder       796       1  0 20:47 ?        00:00:00 python /opt/globusconnectpersonal/gc-ctrl.py -start rw/tmp/wt-globus /tmp/wt-globus /home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9 forward
girder       800     796  0 20:47 ?        00:00:00 python ./gc.py /home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9 rw/tmp/wt-globus /tmp/wt-globus forward
girder       802     801  0 20:47 ?        00:00:00 ssh -F /dev/null -o CheckHostIP no -o ServerAliveInterval 15 -o ServerAliveCountMax 8 -o StrictHostKeyChecking yes -o UserKnownHostsFile "/home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9/lta/relay-known-hosts.txt" -o IdentitiesOnly yes -o IdentityAgent none -o IPQoS lowdelay -T -l x_relay2 -i /home/girder/.WholeTale/0baa7c04-1ff6-4cf5-82b3-00dd47e828d9/lta/relay-anonymous-key.pem -R 0:127.0.0.1:44341 -p 2223 relay.globusonline.org
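A small helper, hypothetical but based on the process names visible in the `ps` output above, could confirm the GCP process tree programmatically instead of eyeballing `ps -ef | grep globus`:

```python
def gcp_process_lines(ps_output):
    """Filter `ps -ef` output down to Globus Connect Personal processes.

    The marker strings come from the process listing above (gc-ctrl.py,
    gc.py, and the ssh tunnel to relay.globusonline.org).
    """
    markers = ("gc-ctrl.py", "gc.py", "relay.globusonline.org")
    return [
        line for line in ps_output.splitlines()
        if any(marker in line for marker in markers)
    ]
```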

Attempting to transfer from the MDF endpoint to the running endpoint 01142a9c-90fc-11ec-bf86-ab28bf5d96bb (in the Globus web UI this is wt-ddf1e7fc-2661-4228-aa80-63c63adc325a-willis8, a shared endpoint on wt-ddf1e7fc-2661-4228-aa80-63c63adc325a) results in the following error:

[Screenshot of the Globus transfer error, 2022-02-18]
craig-willis commented 2 years ago

The issue was caused by a policy change on the Globus side. The change was reverted on 2/18 and transfers are working again.