bashedev / win-sshfs

Automatically exported from code.google.com/p/win-sshfs

Disk space not reported correctly; larger address space needed for > 16 TiB. #28


GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Mount remote file-system that is > 32 TiB

What is the expected output? What do you see instead?
Correct used/free disk space. Instead I see bad values (in this case, negative space).

What version of the product are you using?

win-sshfs: 0.0.1.5
Client OS: Windows 7 x64
Server OS: Linux
Ssh server: OpenSSH_5.8p1-hpn13v11, OpenSSL 1.0.0d 8 Feb 2011

Please provide any additional information below.

Please compare the df output to the screenshot.

I believe the values are exceeding some integer size in the code, which causes them to wrap back to 0 every 16 TiB of space (used or free).

First drive (H:) usage:

root@dekabutsu: 04:04 PM :~# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1        40T   29T   12T  71% /data

Actual block device size:

sd 0:0:0:3: [sdd] 85374958592 512-byte logical blocks: (43.7 TB/39.7 TiB)

39.7 TiB - 32 TiB (16 TiB x 2) = 7.7 TiB. The total space listed in the screenshot is 7.75 TiB. I believe used space is calculated from free space? Free space is correctly reported as 11.5 TiB, which matches df. It freaks out because free > total.
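
That wrap size is consistent with a 32-bit block count: the SSH server reports totals as a block count times a block size (the statvfs@openssh.com extension), and assuming a 4 KiB block size (a guess; ext3/4 commonly use 4096, I haven't checked what this server reports), a 32-bit counter wraps at exactly 2^32 x 4 KiB = 16 TiB. A minimal C sketch of the hypothesis for the first drive:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* /dev/sdd1: 85374958592 512-byte sectors = ~39.7 TiB. Assume the
     * server reports it as 4 KiB blocks (f_frsize = 4096); the block
     * size is an assumption, not read from the actual server. */
    uint64_t frsize   = 4096;
    uint64_t f_blocks = 85374958592ULL / 8;   /* 10671869824 blocks */

    /* Hypothesis: the count passes through a 32-bit variable somewhere,
     * silently wrapping every 2^32 blocks = 2^32 x 4 KiB = 16 TiB. */
    uint32_t wrapped = (uint32_t)f_blocks;

    printf("true total:    %.2f TiB\n", (double)f_blocks * frsize / (1ULL << 40));
    printf("wrapped total: %.2f TiB\n", (double)wrapped  * frsize / (1ULL << 40));
    return 0;
}

This prints a true total of ~39.76 TiB and a wrapped total of ~7.76 TiB, matching the 7.75 TiB in the screenshot.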

Second situation:

root@dekabutsu: 04:09 PM :~# df -h /data2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde1        77T   48T   29T  63% /data2

Block device size:

sd 1:0:1:0: [sde] 164062474240 512-byte logical blocks: (83.9 TB/76.3 TiB)

76.3 TiB - 64 TiB (16 TiB x 4) = 12.3 TiB. In the screenshot the total size is listed as 12.3 TiB and free as 12.8 TiB. In this case 29 TiB is available (29 - 16 = 13), which is ~13 TiB and matches what it lists as free. Again free space exceeds the total and it freaks out.
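
With the same 4 KiB-block assumption, plugging the second drive's count into the sketch above (164062474240 / 8 = 20507809280 blocks) gives a wrapped total of ~12.40 TiB, again matching the screenshot.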

Please see the third screenshot, of the host myth. It is < 16 TiB and space is reported correctly:

myth ~ # df -h /tv
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1              11T  9.4T  1.5T  87% /tv

Original issue reported on code.google.com by houkouon...@gmail.com on 18 Jul 2012 at 11:14


GoogleCodeExporter commented 9 years ago
Sorry, this:

1. Mount remote file-system that is > 32 TiB

should have been:
1. Mount remote file-system that is > 16 TiB

Original comment by houkouon...@gmail.com on 18 Jul 2012 at 11:18

GoogleCodeExporter commented 9 years ago

Original comment by mladenov...@gmail.com on 7 Oct 2012 at 7:01

GoogleCodeExporter commented 9 years ago
You have to wonder what a tiny integer overflow can do. You'll have your fix 
tomorrow.

Original comment by mladenov...@gmail.com on 8 Oct 2012 at 9:07

GoogleCodeExporter commented 9 years ago

Original comment by mladenov...@gmail.com on 9 Oct 2012 at 2:58


GoogleCodeExporter commented 9 years ago
Thank you for the fix. It now displays the correct size of large drives.
BUT: there is still a problem with uploading large files. Today I wanted to copy a file of 3,306,489,856 bytes (~3.1 GiB) to my SSH drive and it still failed. Is there another integer overflow?
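
For what it's worth, the number itself is suggestive: 3,306,489,856 is bigger than INT32_MAX (2,147,483,647), so if the transfer code stores the file length or offset in a signed 32-bit integer anywhere (an assumption, not something checked against the code), the value wraps negative. A quick C check:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The file size that failed to upload. */
    uint64_t size = 3306489856ULL;

    /* Assumption: somewhere the length or offset is squeezed into a
     * signed 32-bit integer, which cannot hold this value. */
    int32_t as_i32 = (int32_t)size;

    printf("INT32_MAX:  %ld\n", (long)INT32_MAX); /* 2147483647 */
    printf("as int32_t: %ld\n", (long)as_i32);    /* -988477440 */
    return 0;
}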

Original comment by baaa...@gmail.com on 30 Oct 2012 at 7:02

GoogleCodeExporter commented 9 years ago
Large files can be tricky because they can flood the server with requests. I can't test files that large right now, but I would suggest specialized file-copy software (TeraCopy or similar) for files that size. It might help, and you get pause and resume.

Original comment by mladenov...@gmail.com on 30 Oct 2012 at 10:07

GoogleCodeExporter commented 9 years ago
Thank you for the fast fix! :)

Original comment by goo...@boni-arche-camp.de on 6 Nov 2012 at 7:05

GoogleCodeExporter commented 9 years ago
Fast fix, but not integrated into the main release! Fixed the problem for me too. Thx!

Original comment by antonio....@gmail.com on 26 Oct 2014 at 10:46