stewartadam closed this issue 4 years ago.
1. Mounted directory doesn't expose existing blobs
This is an interesting issue. So you already have blobs, and you can see them through the portal, but when you mount blobfuse you don't see those blobs in the mounted directory?
2. New blobs (e.g. echo 1 > foo) are created under an empty-named directory on the server (i.e. /container//foo)
I'm confused. So it makes an empty folder called foo? But your blob is called foo. Or is the container supposed to be the empty directory?
3. Segfaults during umount after some simple file interactions:
So when you attempt to umount your mounted container, it segfaults? Or do you umount and then do some file interactions, and that segfaults?
4. Umount requires root permissions
Depending on where your folder is, especially if it's mounted at the root, it may need root permissions. Have you made sure to give the user access to the mounted folder and temporary cache directory (e.g. with `chown`)?
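To make that suggestion concrete, here is a minimal sketch (plain Python, with the hypothetical `./mountpoint` and `./cache` paths used later in this thread) of handing a user ownership of both directories before mounting. Note that changing ownership away from yourself requires root.

```python
import os
import pwd

def give_user_ownership(paths, username):
    """Create each directory if missing and chown it to `username`.

    Chowning a directory you already own to yourself is allowed
    unprivileged; changing the owner to another user requires root.
    """
    pw = pwd.getpwnam(username)
    for path in paths:
        os.makedirs(path, exist_ok=True)
        os.chown(path, pw.pw_uid, pw.pw_gid)

# e.g. before mounting (hypothetical user and paths):
# give_user_ownership(["./mountpoint", "./cache"], "myuser")
```

This mirrors the `chown myuser: ./mountpoint ./cache` step discussed below.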
Here's a blob container I populated with a file (`top-level-existing.dat`) and a nested file in what blob considers a "subdirectory" (`subdir/nested-level-existing.dat`). Screenshot below using Cyberduck (it appeared similarly in Storage Explorer, but I can't show nested levels in a single screenshot since it doesn't have treeview support):
Now let's mount that with blobfuse:
AZURE_STORAGE_ACCOUNT=myaccount AZURE_STORAGE_ACCESS_KEY='mykey==' blobfuse --tmp-path=./cache --container-name='blobfuse-test' ./mountpoint
ls -l ./mountpoint ./mountpoint/subdir
./mountpoint:
total 0
drwxrwx---. 3 root root 4096 Dec 31 1969 subdir
-rwxrwx---. 1 root root 10485760 Jul 12 11:28 top-level-existing.dat
./mountpoint/subdir:
total 0
-rwxrwx---. 1 root root 10485760 Jul 12 11:28 nested-level-existing.dat
Looks good. Let's create some new files and then umount; it crashes:
touch top-level-created-sibling.dat
touch subdir/nested-level-created-sibling.dat
fusermount -u mountpoint
dmesg | tail -n 2
[853576.766124] blobfuse[32380]: segfault at 3172c60 ip 00007f96afcae158 sp 00007f96adf489c8 error 4 in libc-2.28.so[7f96afb6e000+14d000]
[853576.766133] Code: fe 7f 71 e0 c5 fe 7f 79 c0 c5 7e 7f 41 a0 c4 c1 7e 7f 23 c5 f8 77 c3 c5 fe 6f 26 c5 fe 6f 6e 20 c5 fe 6f 76 40 c5 fe 6f 7e 60 <c5> 7e 6f 44 16 e0 4c 8d 5c 17 e0 48 8d 4c 16 e0 4d 89 d9 4d 89 d8
Let's double-check that blobfuse did what we asked despite the crash on umount: you can see those files do exist on the server, but with incorrectly encoded filenames:
Note that because blobfuse considered the filename (not the path) to be `/top-level-created-sibling.dat`, it presents as an empty-named directory:
Let's mount it again and list files; the created files are gone despite existing on the server:
AZURE_STORAGE_ACCOUNT=myaccount AZURE_STORAGE_ACCESS_KEY='mykey==' blobfuse --tmp-path=./cache --container-name='blobfuse-test' ./mountpoint
ls -l ./mountpoint ./mountpoint/subdir
./mountpoint:
total 0
drwxrwx---. 3 root root 4096 Dec 31 1969 subdir
-rwxrwx---. 1 root root 10485760 Jul 12 11:28 top-level-existing.dat
./mountpoint/subdir:
total 0
-rwxrwx---. 1 root root 10485760 Jul 12 11:28 nested-level-existing.dat
So to be more specific: files created via blobfuse do not appear on re-mount. Given the above, answering point-by-point:
1. Mounted directory doesn't expose existing blobs: Existing blobs do appear, but blobs that were created via blobfuse do not.
2. New blobs (e.g. `echo 1 > foo`) are created under an empty-named directory on the server (i.e. `/container//foo`): The blob was called `foo` as created, but blobfuse uploads it as `/foo`, so it creates an empty-string-named "directory" which confuses lots of tools (including blobfuse itself, it appears :))
IMO, we should probably disallow filenames with leading slashes entirely at the blob service level. Not sure if that would represent a breaking change for many customers though.
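To illustrate the empty-segment behavior and the kind of normalization being proposed, here is a minimal sketch (plain Python; not blobfuse's actual code):

```python
def normalize_blob_name(name: str) -> str:
    """Strip leading '/' characters so no empty path segment is created.

    "/foo" would otherwise be stored as the path components ["", "foo"],
    i.e. a file `foo` under an empty-named "directory" (/container//foo).
    """
    return name.lstrip("/")

# A leading slash yields an empty first path component:
assert "/foo".split("/") == ["", "foo"]
assert normalize_blob_name("/foo") == "foo"
assert normalize_blob_name("subdir/bar") == "subdir/bar"
```

Whether the blob service itself should reject such names (as suggested above) is a separate compatibility question.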
3. Segfaults during umount after some simple file interactions: The unmount operation itself segfaults. It happens with both `umount` (as root) and `fusermount -u`.
4. Umount requires root permissions: I tested the above again as non-root, and my user owned the `mountpoint` and `cache` folders and all leading paths, but still had no permission to `umount`:
umount mountpoint
umount: mountpoint: Permission denied
I did try with `fusermount -u`, which did not spit out a permission error (despite segfaulting per above), but using a different umount command hasn't been necessary for other FUSE filesystems.
Looking at the output from your `ls -l ./mountpoint ./mountpoint/subdir` command, it doesn't look like you've given your user permissions to the mounted container directory before you mounted the container. You need to give your user permissions to the mounted container (./mountpoint) and the temporary cache (./cache). This might help with requiring root permissions when unmounting as well.
sudo chown
I will also look into why you're segfaulting, unless these permission issues are related to your segfault.
I ran the snippets above as root to rule out any permission issues. I ran `chown myuser: ./mountpoint ./cache` prior to testing the non-root umount (edit: and for clarity, switched users as well).
To be clear, there aren't permission issues with files or the cache once mounted; it's the umount operation that fails as a regular user using the usual `umount` command. I started with empty `cache` and `mountpoint` folders owned by my user, so an immediate `umount` as the same user shouldn't be giving permission denied.
I can confirm the crashes with 1.1.1 on RHEL/CentOS 7 as well. Any updates to this @amnguye?
Dec 11 01:51:42 linuxvm blobfuse[90157]: Function azs_destroy, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 481: azs_destroy called.
Dec 11 01:51:42 linuxvm blobfuse[90157]: Function run_gc_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 72: File being considered for deletion by file cache GC.
Dec 11 01:51:42 linuxvm kernel: blobfuse[90162]: segfault at 2ba07c0 ip 00007ffb940b5426 sp 00007ffb8d345bd8 error 4 in libc-2.17.so[7ffb93f5f000+1c3000]
The original pathing issues look to be fixed now in 1.1.1, although here's an interesting quirk I found (resuming issue enumeration from above): if one creates an account-level SAS with only container `rwdlac` permissions, and unchecks the object SAS permission, blobfuse doesn't complain at all and silently hides the failure to modify, create or delete any of the files:
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 349: azs_getattr called with path = /foo
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 391: Object /mnt/resource/blobfusetmp/root/foo is not in the local cache during get_attr.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 437: Directory foo found on the service.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 349: azs_getattr called with path = /foo/bar2.txt
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 391: Object /mnt/resource/blobfusetmp/root/foo/bar2.txt is not in the local cache during get_attr.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 449: Entity /foo/bar2.txt does not exist. Returning ENOENT (2) from get_attr.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_create, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 201: azs_create called with path = /foo/bar2.txt, mode = 33188, fi->flags = 8241
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function ensure_files_directory_exists_in_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 202: Making cache directory /mnt.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function ensure_files_directory_exists_in_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 202: Making cache directory /mnt/resource.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function ensure_files_directory_exists_in_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 202: Making cache directory /mnt/resource/blobfusetmp.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function ensure_files_directory_exists_in_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 202: Making cache directory /mnt/resource/blobfusetmp/root.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function ensure_files_directory_exists_in_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 202: Making cache directory /mnt/resource/blobfusetmp/root/foo.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Successfully created file /foo/bar2.txt in file cache.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_create, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 231: Returning success from azs_create with file /foo/bar2.txt.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 349: azs_getattr called with path = /foo/bar2.txt
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 374: Accessing mntPath = /mnt/resource/blobfusetmp/root/foo/bar2.txt for get_attr succeeded; object is in the local cache.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 385: lstat on file /mnt/resource/blobfusetmp/root/foo/bar2.txt in local cache succeeded.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_flush, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 265: azs_flush called with path = /foo/bar2.txt, fi->flags = 0, (((struct fhwrapper *)fi->fh)->fh) = 14.
Dec 11 02:01:59 linuxvm blobfuse[90584]: Function azs_flush, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 283: Successfully looked up mntPath. Input path = /foo/bar2.txt, path_link_buffer = /proc/self/fd/14, path_buffer = /mnt/resource/blobfusetmp/root/foo/bar2.txt
Dec 11 02:02:00 linuxvm blobfuse[90584]: Successfully uploaded file /foo/bar2.txt to blob foo/bar2.txt.
Dec 11 02:02:00 linuxvm blobfuse[90584]: Function azs_flush, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 265: azs_flush called with path = /foo/bar2.txt, fi->flags = 0, (((struct fhwrapper *)fi->fh)->fh) = 14.
Dec 11 02:02:00 linuxvm blobfuse[90584]: Function azs_flush, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 283: Successfully looked up mntPath. Input path = /foo/bar2.txt, path_link_buffer = /proc/self/fd/14, path_buffer = /mnt/resource/blobfusetmp/root/foo/bar2.txt
Dec 11 02:02:00 linuxvm blobfuse[90584]: Successfully uploaded file /foo/bar2.txt to blob foo/bar2.txt.
Dec 11 02:02:00 linuxvm blobfuse[90584]: Function azs_release, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 361: azs_release called with path = /foo/bar2.txt, fi->flags = 32769
Dec 11 02:02:00 linuxvm blobfuse[90584]: Function azs_release, in file /home/seguler/azure-storage-fuse/blobfuse/fileapis.cpp, line 380: Adding file to the GC from azs_release. File = /mnt/resource/blobfusetmp/root/foo/bar2.txt.
Dec 11 02:04:01 linuxvm blobfuse[90584]: Function run_gc_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 72: File /foo/bar2.txt being considered for deletion by file cache GC.
Dec 11 02:04:01 linuxvm blobfuse[90584]: Function run_gc_cache, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 109: GC cleanup of cached file /mnt/resource/blobfusetmp/root/foo/bar2.txt.
Dec 11 02:04:56 linuxvm blobfuse[90584]: Function azs_getattr, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 349: azs_getattr called with path = /
Dec 11 02:04:57 linuxvm blobfuse[90584]: Function azs_destroy, in file /home/seguler/azure-storage-fuse/blobfuse/utilities.cpp, line 481: azs_destroy called.
There was no user-facing error nor any indication of failure in the logs (but due to the SAS token, we know any object API calls would be 403s). `bar2.txt` was therefore lost silently.
@stewartadam kindly take the latest version and try out your SAS-related test case. In my test I do see a 'Permission denied' error if 'object' was unchecked in the permissions while creating the SAS.
Closing as there is no update on this. Kindly re-open if the issue persists.
@vibhansa-msft I tried creating an account-level SAS de-selecting 'object' in Azure Storage Explorer and the mount succeeds, but any access to the mountpoint causes a silent failure. The process accessing the mountpoint becomes unkillable, hanging in I/O:
$ ls mountpoint
^C^Z^C
During which in the system logs, I do now see errors reported:
Oct 08 17:20:13 linuxvm blobfuse[26766]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sc&st=2020-10-08T16%3A46%3A24Z&se=2020-10-09T16%3A46%3A24Z&sp=rwdlacu&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Thu, 08 Oct 2020 17:19:59 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 0 ::
Oct 08 17:20:30 linuxvm blobfuse[26766]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sc&st=2020-10-08T16%3A46%3A24Z&se=2020-10-09T16%3A46%3A24Z&sp=rwdlacu&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Thu, 08 Oct 2020 17:20:13 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 0 ::
Oct 08 17:20:48 linuxvm blobfuse[26766]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sc&st=2020-10-08T16%3A46%3A24Z&se=2020-10-09T16%3A46%3A24Z&sp=rwdlacu&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Thu, 08 Oct 2020 17:20:30 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 0 ::
Oct 08 17:21:12 linuxvm blobfuse[26766]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sc&st=2020-10-08T16%3A46%3A24Z&se=2020-10-09T16%3A46%3A24Z&sp=rwdlacu&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Thu, 08 Oct 2020 17:20:48 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 0 ::
... after a few minutes ...
Oct 08 17:26:43 linuxvm kernel: INFO: task ls:26778 blocked for more than 120 seconds.
Oct 08 17:26:43 linuxvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 08 17:26:43 linuxvm kernel: ls D ffff9425ebb8e2a0 0 26778 3087 0x00000084
Oct 08 17:26:43 linuxvm kernel: Call Trace:
Oct 08 17:26:43 linuxvm kernel: [<ffffffffb6180a09>] schedule+0x29/0x70
Oct 08 17:26:44 linuxvm kernel: [<ffffffffc06434d5>] __fuse_request_send+0xf5/0x2e0 [fuse]
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb5ac72e0>] ? wake_up_atomic_t+0x30/0x30
Oct 08 17:26:44 linuxvm kernel: [<ffffffffc06436d2>] fuse_request_send+0x12/0x20 [fuse]
Oct 08 17:26:44 linuxvm kernel: [<ffffffffc06493b1>] fuse_readdir+0x121/0x730 [fuse]
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb5c5ff30>] ? iterate_dir+0x130/0x130
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb5c5fe97>] iterate_dir+0x97/0x130
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb6188678>] ? __do_page_fault+0x238/0x500
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb5c603b2>] SyS_getdents+0xa2/0x130
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb5c5ff30>] ? iterate_dir+0x130/0x130
Oct 08 17:26:44 linuxvm kernel: [<ffffffffb6188975>] ? do_page_fault+0x35/0x90
Oct 08 17:26:45 linuxvm kernel: [<ffffffffb618dede>] system_call_fastpath+0x25/0x2a
However the mountpoint remains locked up, and can't even be unmounted:
# fusermount -u mountpoint
fusermount: failed to unmount /home/stewartadam/mountpoint: Device or resource busy
So it looks like I/O errors are not being surfaced to the calling processes through FUSE.
Also, note I don't seem to have permissions to reopen this issue.
@stewartadam: is this observation with the latest version? As per the logs you have shared, REST calls are returning with "RESPONSE Status :: 0 ::", which points to some sort of timeout in the connection to your container. In such a situation blobfuse has extensive retry logic to reconnect, and while this is going on the mount point will be locked. The only way to break out of this lock is to kill the blobfuse process forcefully. For the timeout issue I suggest you check your cURL version; most of the time we observe this issue when the cURL version is not compatible with blobfuse.
cURL version must be 7.35+ for blobfuse to work. If not, kindly upgrade your cURL and retry.
The only way to break out of this lock is to kill the blobfuse process forcefully
This seems like bad behavior, why not return EAGAIN or ENODEV to the process while continuing to connect in the background? Hanging every process that accesses the mountpoint is dangerous.
cURL version must be 7.35+ for blobfuse to work.
RHEL7 uses curl-7.29.0 (curl-7.29.0-57.el7_8.1.x86_64). Is blobfuse no longer compatible with RHEL7?
As I conveyed earlier, there is extensive retry logic in place to connect to the storage. If you want to stop it while that is in progress, then 'KILL' is the only option I meant. Blobfuse does work on RHEL 7, provided that you upgrade cURL. Try upgrading it to a higher version, maybe 7.67.
As I conveyed earlier, there is extensive retry logic in place to connect to the storage. If you want to stop it while that is in progress, then 'KILL' is the only option I meant.
I'm advocating for the "don't break userspace" philosophy that the Linux kernel follows. A blobfuse connection error should not hang every process that attempts to access its mountpoint. This can have knock-on effects for processes that scan the filesystem or iterate over mountpoints.
The process can still have an I/O error returned while blobfuse attempts to connect in the background.
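As a sketch of the alternative being proposed (plain Python, not blobfuse's actual retry code): bound the retries and surface an I/O error to the caller instead of blocking it indefinitely. A FUSE handler built this way would return `-EIO` to the accessing process while reconnection continues on subsequent requests.

```python
import errno
import time

def with_bounded_retries(op, attempts=3, delay=0.1):
    """Run `op`; on connection failure retry a few times, then raise EIO.

    Raising OSError(EIO) lets a FUSE handler return an I/O error to the
    calling process rather than leaving it in uninterruptible sleep.
    """
    for i in range(attempts):
        try:
            return op()
        except ConnectionError:
            if i + 1 == attempts:
                raise OSError(errno.EIO, "storage unreachable")
            time.sleep(delay)
```

The retry budget and delay here are arbitrary illustrative values.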
Blobfuse does work on RHEL 7, provided that you upgrade cURL. Try upgrading it to a higher version, maybe 7.67.
The test VM I'm using is RHEL-equivalent (CentOS Linux release 7.8.2003 (Core), specifically) and the version of curl above is the latest available from the repos. Are you seeing 7.67 available via dnf update?
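For reference, checking an installed curl against the stated 7.35 minimum is straightforward; a sketch that parses the first line of `curl --version` output (assumed to look like "curl 7.29.0 (x86_64-redhat-linux-gnu) ..."):

```python
def curl_meets_minimum(version_line: str, minimum=(7, 35, 0)) -> bool:
    """Parse the first line of `curl --version` and compare to `minimum`.

    The second whitespace-separated token is the version, e.g. "7.29.0";
    tuple comparison then gives the right ordering component by component.
    """
    version = tuple(int(part) for part in version_line.split()[1].split("."))
    return version >= minimum

assert not curl_meets_minimum("curl 7.29.0 (x86_64-redhat-linux-gnu)")
assert curl_meets_minimum("curl 7.67.0 (x86_64-pc-linux-gnu)")
```

Note this does not account for distro backports, which is exactly the complication raised later in the thread.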
Also note the invalid URI in the logs:
GET ... sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Thu, 08 Oct 2020 17:20:48 GMT&x-ms-version=2018-11-09&Transfer-Encoding=
A stray `?&` is present halfway through the URI, and the rest of the URI isn't URL-encoded. When I removed everything after (and including) the `?&` and inserted my SAS token's `sig=` value, the request returned successfully, so my mount values appear to be fine.
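The trimming described above can be sketched as follows (plain Python, with placeholder values): cut the logged string at the stray `?&`, since everything after it (User-Agent, x-ms-date, ...) is header data that leaked into the logged URL, then substitute the real signature:

```python
def recover_request_url(logged: str, real_sig: str) -> str:
    """Trim a logged blobfuse URL at the stray '?&' and re-insert the sig.

    Everything after '?&' in the log line is request-header data
    concatenated onto the URL, not part of the request URI itself.
    """
    base = logged.split("?&", 1)[0]          # ends with "...&sig=REDACTED"
    return base.replace("sig=REDACTED", "sig=" + real_sig)

logged = ("https://account.blob.core.windows.net/c?comp=list&sig=REDACTED"
          "?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=...")
print(recover_request_url(logged, "abc123"))
```

The account, container, and `abc123` signature here are placeholders, not values from the logs above.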
cURL 7.67 is not available directly via updates. I had to download the source and build. If you want to go that path, I can share the steps for the same.
This is bad sysadmin practice for several reasons: (1) the from-source copy of curl will get overwritten by the next curl RPM update unless the curl RPM is uninstalled first, which breaks a bunch of dependencies; (2) installing from source keeps you on a static version that will eventually be vulnerable to security issues and remain unpatched.
If >=7.67 is required, it needs to be provided via an RPM repo alongside blobfuse (and at minimum, the blobfuse RPM needs a Requires on the correct curl package version).
That aside though, why is >=7.67 required?
cURL 7.35+ is required; 7.67 was just a suggestion. I do agree installing from source is not good practice, but as of now I do not see 7.35+ in the standard packages. This dependency comes from the SDK that blobfuse uses to connect to the storage account.
Did you confirm your cURL version is lower than 7.35? Otherwise there is no need to update.
This dependency comes from the SDK that blobfuse uses to connect to the storage account.
Do you have more information on what features specifically are required? RHEL7 uses curl 7.29.0 but Red Hat does often backport fixes from upstream. Without more information, we can only conclude that blobfuse is no longer compatible with RHEL7.
I have observed earlier that on RHEL 7 the TLS handshake does not go through with old cURL versions. So my guess is that something related to TLS/OpenSSL/crypto needs an upgrade in cURL.
blobfuse mounts and lists files correctly on RHEL7 when enabling FUSE debug (`-d`) or foreground (`-f`) mode, but fails when those flags aren't present; so if I had to guess, the issue might be a race condition, not necessarily curl <7.35 (at least not for RHEL7's curl 7.29.0).
Here's an excerpt of the log when the `-d` or `-f` CLI arguments are present:
...
Oct 12 16:35:07 linuxvm blobfuse[3340]: Function azs_readdir, in file /usr/pipeline/blobfuse/azure-storage-fuse/blobfuse/directoryapis.cpp, line 133: azs_readdir : About to call list_blobs. Container = blobfuse-test, delimiter = /, continuation = , prefix =
Oct 12 16:35:07 linuxvm blobfuse[3340]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sco&st=2020-10-12T16%3A01%3A29Z&se=2020-11-13T17%3A01%3A00Z&sp=rl&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Mon, 12 Oct 2020 16:35:07 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 200 :: REQ ID : 4575c20d-901e-0126-65b5-a0ec82000000
Oct 12 16:35:07 linuxvm blobfuse[3340]: Function azs_readdir, in file /usr/pipeline/blobfuse/azure-storage-fuse/blobfuse/directoryapis.cpp, line 205: #### So far 3 items retreived in 1 iterations.
...
And without them (mounting normally):
...
Oct 12 16:35:45 linuxvm blobfuse[3365]: Function azs_readdir, in file /usr/pipeline/blobfuse/azure-storage-fuse/blobfuse/directoryapis.cpp, line 133: azs_readdir : About to call list_blobs. Container = blobfuse-test, delimiter = /, continuation = , prefix =
Oct 12 16:35:45 linuxvm blobfuse[3365]: ==> REQUEST/RESPONSE :: GET https://accountName.blob.core.windows.net/blobfuse-test?comp=list&delimiter=/&include=metadata&maxresults=5000&restype=container&sv=2019-12-12&ss=btqf&srt=sco&st=2020-10-12T16%3A01%3A29Z&se=2020-11-13T17%3A01%3A00Z&sp=rl&sig=REDACTED?&User-Agent=azure-storage-fuse/1.3.4&x-ms-date=Mon, 12 Oct 2020 16:35:45 GMT&x-ms-version=2018-11-09&Transfer-Encoding=--------------------------------------------------------------------------------RESPONSE Status :: 0 ::
...
Closing in favor of two new reported issues.
Which version of blobfuse was used?
1.0.3
Which OS (please include version) are you using?
Fedora 29
What problem was encountered?
1. Mounted directory doesn't expose existing blobs
2. New blobs (e.g. `echo 1 > foo`) are created under an empty-named directory on the server (i.e. `/container//foo`)
3. Segfaults during umount after some simple file interactions
4. Umount requires root permissions
Have you found a mitigation/solution?
No
By default, blobfuse logs errors to syslog. If this is relevant, is there anything in the syslog that might be helpful?
dmesg:
syslog (note how, for example, '/baz' is uploaded to the container instead of 'baz', resulting in a blob path of '/container//baz'):