ovh / svfs

The Swift Virtual File System

Timeout when reading or writing data #123

Closed · spprod35 closed this issue 7 years ago

spprod35 commented 7 years ago

Hello,

I'm encountering an error while mounting my hubiC cloud storage.

Error

root@scw-720b65:~# sudo mount -t svfs -o hubic_auth=YXBp******Xo=,hubic_token=xHac***Sv,container=default hubic /mnt/hubic

root@scw-720b65:~# cd /mnt/hubic
FATA[2017-02-26T13:02:29Z] cannot obtain root node: Timeout when reading or writing data
-bash: cd: /mnt/hubic: Transport endpoint is not connected

Register application

root@scw-720b65:~# hubic-application
Did you registered an application under your hubic account ? (y/N) y
 ~> Application redirect URL: http://localhost/
 ~> Application client_id: api_hubic_S*****0
 ~> Application client_secret:
1) Setting scope ... OK
 ~> Email: p*******@gmail.com
 ~> Password:
2) Granting access ... OK
3) Getting refresh token ... OK

 == Your mount options ==
 ~> hubic_auth=YX**Xo=
 ~> hubic_token=xHa****Sv

Do you have any idea what might be wrong?

Regards,

xlucas commented 7 years ago

Hello,

Can you make sure these work before mounting:

$ nc -v lb1.hubic.ovh.net 443
$ nc -v lb1040.hubic.ovh.net 443
$ nc -v lb9911.hubic.ovh.net 443

spprod35 commented 7 years ago

Hello,

Here is the result of the commands above:

# nc -v lb1.hubic.ovh.net 443
nc: 178.33.213.113 (178.33.213.113) 443 [https] open
^C
# nc -v lb1040.hubic.ovh.net 443
nc: 91.121.225.130 (91.121.225.130) 443 [https] open
^C
# nc -v lb9911.hubic.ovh.net 443
nc: 94.23.181.179 (94.23.181.179) 443 [https] open
^C
#

Thanks for your help

xlucas commented 7 years ago

Alright, sounds good. Is your former issue still occurring?

spprod35 commented 7 years ago

Yes, I have registered my application again. The mount itself completes without error. However, when I try to change into the mount point, the cd command freezes and the error appears after roughly 30 seconds.

I am partly relying on Google Translate. I saw that you are French; perhaps it would be better if I wrote in French? :)

xlucas commented 7 years ago

Sorry, I need to stick to English, that's how GitHub works :)

Can you test again using the connect_timeout=5m mount option?

Probably linked to #100

spprod35 commented 7 years ago

No problem, English it is :) The result is not good:

# mount -t svfs -o hubic_auth=YXB****U4=,hubic_token=Uuc****XK,connect_timeout=5m,container=default hubic /hubic
# cd /hubic
FATA[2017-03-04T18:49:36+01:00] cannot obtain root node: HTTP Error: 500: 500 Internal Error
-bash: cd: /hubic: Transport endpoint is not connected

spprod35 commented 7 years ago

Could there be a bug on hubiC's side? :)

silasistefan commented 7 years ago

I have the exact same issue, but I'm using the RPM svfs-0.9.1-1.x86_64 on a CentOS 7 dedicated box. Setting connect_timeout=5m didn't help.

strace on the svfs mount process looks like this:

Process 3567 attached
futex(0xb9d510, FUTEX_WAIT, 0, NULL) = 0
futex(0xb9d510, FUTEX_WAIT, 0, NULL) = 0
epoll_wait(6, {}, 128, 0) = 0
epoll_wait(6, <unfinished ...>
+++ exited with 1 +++

proea commented 7 years ago

I have the same issue. CentOS 7.3

xlucas commented 7 years ago

It appears to be linked to the container mount option with hubiC; mounting without it works around the problem. I need to free up some time to investigate.

pirafrank commented 7 years ago

Thank you. I confirm that removing container=default from the mount command worked. For anyone reading this later, be aware that the mount point then adds a level to the path (e.g. /mnt/hubic previously took you straight to your storage space; now you have to navigate to /mnt/hubic/default).
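
For reference, a minimal sketch of the workaround (the credentials are placeholders; the paths follow this thread):

$ sudo mount -t svfs -o hubic_auth=<AUTH>,hubic_token=<TOKEN> hubic /mnt/hubic
$ ls /mnt/hubic
default
$ cd /mnt/hubic/default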

spprod35 commented 7 years ago

Indeed, this works without the container argument. What functionality do we lose by removing it?

xlucas commented 7 years ago

@spprod35 Not much, only the ability to mount a specific container from your account instead of all of them at once.

spprod35 commented 7 years ago

Thanks for the follow-up. I tried to copy a 500 MB file with the following commands:

cd /hubic2/default/Documents
cp /var/log/myfile.log .

The copy does not complete; it stops at 223 MB and I get these errors:

cp: error writing './myfile.log': Input/output error
cp: failed to extend './myfile.log': Input/output error

pirafrank commented 7 years ago

Same here. Doesn't copy anything above that size.

spprod35 commented 7 years ago

I'm not sure I understand:

Can we not send files larger than 223 MB with svfs? If so, what use is svfs? Files these days tend to be large.

xlucas commented 7 years ago

There's no file size limit; svfs uses Swift DLOs (Dynamic Large Objects) to handle large files. What does the debug log say? And what were your mount options when this occurred?
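
To illustrate how a DLO works (a sketch assuming the python-swiftclient CLI and valid Swift credentials; the container name documents is hypothetical, and svfs does this for you rather than requiring it by hand):

$ # upload in 256 MB segments; creates numbered objects in documents_segments
$ # plus a zero-byte manifest whose X-Object-Manifest header points at them
$ swift upload --segment-size 268435456 documents myfile.log
$ # reading the manifest transparently concatenates the segments;
$ # its headers show the segment prefix
$ swift stat documents myfile.log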

romanrm commented 7 years ago

By default svfs segments files every 256 MB. In my experience this does not always succeed. Try effectively disabling segmentation for most cases, i.e. set segment_size to 5 GB (Swift's default limit for a single, unsegmented object), and then also don't upload files larger than 5 GB.
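
A hedged example of that suggestion (placeholder credentials; I'm assuming segment_size is expressed in megabytes, as the mount command later in this thread suggests):

$ sudo mount -t svfs -o hubic_auth=<AUTH>,hubic_token=<TOKEN>,connect_timeout=5m,segment_size=5120 hubic /mnt/hubic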

xlucas commented 7 years ago

@romanrm Most failures during segmentation are due to the connect_timeout mount option needing a higher value. It is a client-side timeout: if no response to a request is seen within that amount of time, the Swift library throws an HTTP 408. Make sure you set it to a higher value, for instance 5m. If that still doesn't help, please post the relevant debug log so I can look into the underlying error in detail.

spprod35 commented 7 years ago

Thanks for the replies. I was already using the "connect_timeout" argument at 5m, but on its own it did not let me send a larger file. The "segment_size" argument is what allowed me to send my large file.

For the moment I have also used the "mode" argument so that my application can access the data.

My command: mount -t svfs -o hubic_auth=YXBpX***TU4=,hubic_token=r28*****rZns,connect_timeout=5m,segment_size=5000,mode=0777 hubic /hubic2

Concerning bandwidth, I am on a fibre link (500 Mbps down, 200 Mbps up). During a transfer I am limited to about 10 Mbps in both directions. Is this a limit specific to hubiC? What is the bandwidth limit on hubiC's 10 TB offer?

romanrm commented 7 years ago

Yes, hubiC is currently limited to 10 Mbit/s; you can check its ToS.
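
For scale (simple arithmetic, not from the ToS): 10 Mbit/s is about 1.25 MB/s, so the 500 MB file from earlier in this thread would need roughly 500 / 1.25 = 400 seconds, i.e. six to seven minutes, to transfer.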

spprod35 commented 7 years ago

https://hubic.com/fr/synchroniser-hubic

I can read on this page:

"However, in order to prevent some customers from monopolizing hubiC resources and penalizing all the others, we reserve the right to limit the transfer speed per client for upload (to hubiC) and for download (from hubiC)."

Nothing there mentions 10 Mbit/s.

I find this bandwidth extremely limited, though I realize this is not the place to express my displeasure.

In any case, thank you all for your help.

romanrm commented 7 years ago

As I said, look in the ToS, which is a separate PDF file: https://hubic.com/fr/contracts/Contrat_hubiC_2014.pdf (Article 2).