Closed spprod35 closed 7 years ago
Hello,
Can you make sure these work before mounting :
$ nc -v lb1.hubic.ovh.net 443
$ nc -v lb1040.hubic.ovh.net 443
$ nc -v lb9911.hubic.ovh.net 443
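For convenience, the three checks above can be looped in shell. This is a minimal sketch; the actual nc probe is commented out so the snippet also runs without network access (uncomment it to really test the endpoints):

```shell
# Loop over the hubiC load balancers mentioned above and probe port 443.
# The nc line is commented out so this also runs offline.
for h in lb1 lb1040 lb9911; do
  echo "checking ${h}.hubic.ovh.net:443"
  # nc -vz -w 5 "${h}.hubic.ovh.net" 443
done
```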
Hello,
Here is the result of the commands above :
# nc -v lb1.hubic.ovh.net 443
nc: 178.33.213.113 (178.33.213.113) 443 [https] open
^C
# nc -v lb1040.hubic.ovh.net 443
nc: 91.121.225.130 (91.121.225.130) 443 [https] open
^C
# nc -v lb9911.hubic.ovh.net 443
nc: 94.23.181.179 (94.23.181.179) 443 [https] open
^C
#
Thanks for your help
Alright, sounds good. Is your former issue still occurring?
Yes, I have registered my application again. Mounting completes correctly. However, when I try to enter the mount point, the cd command freezes and the error appears after roughly 30 seconds.
I am partly using Google Translate. I saw that you are French; perhaps it would be better if I wrote in French? :)
Sorry need to stick to english, that's how github works :)
Can you test again using the connect_timeout=5m mount option?
Probably linked to #100
No problem, English it is :) The result is not good:
# mount -t svfs -o hubic_auth=YXB****U4=,hubic_token=Uuc****XK,connect_timeout=5m,container=default hubic /hubic
# cd /hubic
FATA[2017-03-04T18:49:36+01:00] cannot obtain root node: HTTP Error: 500: 500 Internal Error
-bash: cd: /hubic: Noeud final de transport n'est pas connecté
Could this be a bug on hubiC's side? :)
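As a side note, the French bash error above ("Noeud final de transport n'est pas connecté") is "Transport endpoint is not connected", which usually means the FUSE process died while the mount point stayed registered. A common recovery step before retrying (general FUSE practice, not specific to svfs) is:

```shell
# Detach the stale FUSE mount point (path taken from the commands above)
fusermount -u /hubic
# If that fails because the mount point is busy, a lazy unmount usually works:
# umount -l /hubic
```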
I have the exact same issue, but I'm using RPM svfs-0.9.1-1.x86_64 on a CentOS 7 dedicated box. Setting connect_timeout=5m didn't help.
strace on the svfs mount process looks like this:
Process 3567 attached
futex(0xb9d510, FUTEX_WAIT, 0, NULL) = 0
futex(0xb9d510, FUTEX_WAIT, 0, NULL) = 0
epoll_wait(6, {}, 128, 0) = 0
epoll_wait(6, <unfinished ...>
+++ exited with 1 +++
I have the same issue. CentOS 7.3
It appears to be linked to container mount option for hubiC, mounting without it works around this problem. I need to free some time to investigate.
Thank you. I confirm that removing container=default from the mount command worked.
Just for anyone reading this: be aware that the mount point now adds a level to the path (e.g. /mnt/hubic used to take you straight to your storage space; now you have to navigate to /mnt/hubic/default).
Indeed, this works without the container argument. What functionality do we lose by removing it?
@spprod35 Not much, only the possibility to mount a specific container from your account instead of all of them at once.
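For reference, the two variants look like this (credentials elided; option names as used earlier in this thread):

```shell
# Mount a single container (this is the form that triggered the 500 error above):
# mount -t svfs -o hubic_auth=<auth>,hubic_token=<token>,container=default hubic /hubic

# Mount all containers; each container becomes a subdirectory of the mount
# point, so the default container is then reached at /hubic/default:
# mount -t svfs -o hubic_auth=<auth>,hubic_token=<token> hubic /hubic
```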
Thank you for coming back. I tried to copy a 500 MB file with the following commands:
cd /hubic2/default/Documents
cp /var/log/myfile.log .
The copy does not complete; it stops at 223 MB with these errors (translated from French):
cp: error writing "./myfile.log": Input/output error
cp: failed to extend "./myfile.log": Input/output error
Same here. Doesn't copy anything above that size.
I'm not sure I understand:
We cannot send files larger than 223 MB with svfs? If so, what is the point of svfs? Files are large these days.
There's no file size limit; svfs uses Swift DLO (Dynamic Large Objects) to handle large files. What does the debug log say? And what were your mount options when this occurred?
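For intuition, here is a rough, svfs-independent illustration of what segmentation means: a large object is stored as several fixed-size pieces plus a manifest that concatenates them on read. The snippet below only mimics the splitting step with plain coreutils, using a 1 KiB segment size to keep the demo small (svfs defaults to 256 MB segments):

```shell
# Create a 3 KiB test file and split it into 1 KiB numbered segments,
# similar in spirit to how a DLO stores a large object as parts.
dd if=/dev/zero of=/tmp/dlo-demo bs=1024 count=3 2>/dev/null
split -b 1024 -d /tmp/dlo-demo /tmp/dlo-demo.seg.
ls /tmp/dlo-demo.seg.*
```

This produces three segment files (`.seg.00`, `.seg.01`, `.seg.02`); in Swift the segments live in a dedicated container and the manifest object stitches them together.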
By default svfs segments files every 256 MB. In my experience this does not always succeed. For most cases, try effectively disabling segmentation by setting segment_size to 5 GB, and then don't upload files larger than 5 GB.
@romanrm Most failures during segmentation are due to the connect_timeout mount option, which needs to be set to a higher value. This is a client-side timeout: if no response to a request is seen within that amount of time, the Swift library throws an HTTP 408. Make sure you set it to a higher value, for instance 5m. If that still doesn't help, please post the relevant debug log so I can investigate the underlying error in detail.
Thanks for the replies. I was already using the "connect_timeout" argument at 5m, but it did not allow me to send a larger file. The "segment_size" argument is what allowed me to send my large file.
For now, I have also used the "mode" argument so that my application can access the data.
My command :
mount -t svfs -o hubic_auth=YXBpX***TU4=,hubic_token=r28*****rZns,connect_timeout=5m,segment_size=5000,mode=0777 hubic /hubic2
Regarding bandwidth: I am on a fiber link (500 Mbit/s down, 200 Mbit/s up). During a transfer I am limited to about 10 Mbit/s (up and down). Is this a limit specific to hubiC? What is the bandwidth limit on hubiC's 10 TB offer?
Yes, hubiC is currently limited to 10 Mbit/s; you can check its TOS.
https://hubic.com/fr/synchroniser-hubic
I can read on this page:
"However, in order to prevent some customers from monopolizing hubiC resources and penalizing all the others, we reserve the right to limit the per-client speed for upload (towards hubiC) and download (from hubiC)."
Nothing there mentions 10 Mbit/s.
I find this bandwidth extremely limited, though I realize this is not the place to voice my displeasure.
In any case, thank you all for your help.
As I said, look in the TOS, which is a separate PDF file: https://hubic.com/fr/contracts/Contrat_hubiC_2014.pdf (Article 2)
Hello,
I encounter an error while mounting my hubiC cloud storage.
Context
Error
Register application
Do you have any idea?
Regards,