As you have set "allow-other: true", ensure this feature is enabled in the /etc/fuse.conf file. If this is an FNS (non-HNS) account, ensure you have added "--virtual-directory=true" to the CLI options. On this system it appears you do not have /etc/fstab, which blobfuse uses to list and unmount; in such a case you can always use the "fusermount3 -u" command to unmount the container.
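To make those two checks concrete, here is a minimal sketch (using the /home/blobfmount mount path from this issue as an example; run as root):

# allow-other requires user_allow_other to be enabled in /etc/fuse.conf
grep -q '^user_allow_other' /etc/fuse.conf || echo 'user_allow_other' >> /etc/fuse.conf
# manual unmount when /etc/fstab is not available
fusermount3 -u /home/blobfmount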
Thank you. Before I try to redeploy blobfuse2, I want to confirm this situation: to install blobfuse2, I followed https://docs.azure.cn/en-us/storage/blobs/blobfuse2-how-to-deploy#configure-the-microsoft-package-repository
In my case, I deployed on a Debian 11 distribution:
wget https://packages.microsoft.com/config/debian/11/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install libfuse3-dev fuse3
Then I installed blobfuse2.
But in the ./docker/dockerfile the installation is as below:
RUN \
    apt update && \
    apt-get install -y ca-certificates vim rsyslog
RUN if [ "$FUSE2" = "TRUE" ] ; then apt-get install -y fuse ; else apt-get install -y fuse3 ; fi
The difference is that I didn't install ca-certificates, vim, or rsyslog, and I installed fuse3 instead of fuse. Will this have any impact?
If you are running blobfuse in a container you can follow this, and it is better to be consistent with fuse3 on both the container and the host. To run the container use:
docker run -it --rm \
--cap-add=SYS_ADMIN \
--device=/dev/fuse \
--security-opt apparmor:unconfined \
-e AZURE_STORAGE_ACCOUNT \
-e AZURE_STORAGE_ACCESS_KEY \
-e AZURE_STORAGE_ACCOUNT_CONTAINER \
<image>
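Inside the container, the mount itself would then be something like the following sketch; the mount path and temp path are assumptions (not from this thread), while the account name and key are expected to be picked up by blobfuse2 from the environment variables passed above:

# example entrypoint command; paths are placeholders
blobfuse2 mount /mnt/blob \
  --container-name="$AZURE_STORAGE_ACCOUNT_CONTAINER" \
  --tmp-path=/tmp/blobfuse-cache \
  -o allow_other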
I think the AKS version is somewhat related. One of my clusters is using AKS 1.24.9; I think it cannot even use the blob CSI driver with blobfuse 2+.
For instance, I got an error like the following even though I did az aks update --enable-blob-driver -n clusterName -g rg. I suspect AKS below 1.25.0 does not have blobfuse 2.0 enabled. This is a guess; I still haven't found the relevant documentation just yet.
MountVolume.MountDevice failed for volume "pv-stpabackupsprod-blob-container" : rpc error: code = Internal desc = Mount failed with error: rpc error: code = Unknown desc = exit status 1
fuse: unknown option `--virtual-directory=true'
no config filedone reading env vars, output:
Also, on a related issue: https://github.com/Azure/azure-storage-fuse/issues/1078 (I think). Even after adding --virtual-directory=true to the storage class on AKS 1.25.5, I see the following weird behaviour.
btw, this is my PV mount options
apiVersion: v1
kind: PersistentVolume
metadata:
  name: yourPV
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: yourPVC
    namespace: pa-data-services
  # Retain ensures that even if the PVC & PV are deleted, the external Azure Blob Container or Azure File Share WILL NOT get deleted.
  # That is quite important, because it means that even if somehow the ENTIRE Kubernetes cluster is gone, we still have our data.
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse-premium
  csi:
    # This driver, however, does not come pre-installed when creating an AKS cluster. Also, it seems to be in preview,
    # so there are elaborate steps to enable and install it.
    # See the README.md section: ## About blob-csi-driver
    driver: blob.csi.azure.com
    readOnly: false
    # make sure this volume id is unique in the cluster
    # `#` is not allowed in a self-defined volumeHandle
    volumeHandle: rg-backups-prod#stpabackupsprod#backups-prod
    volumeAttributes:
      resourceGroup: rG
      containerName: containerName
    nodeStageSecretRef:
      name: stpabackupsprod
      namespace: pa-data-services
  mountOptions:
    - '-o allow_other'
    - '--file-cache-timeout-in-seconds=120'
    - '--use-attr-cache=true'
    - '-o attr_timeout=120'
    - '-o entry_timeout=120'
    - '-o negative_timeout=120'
    - '--log-level=LOG_DEBUG'
    - '--cache-size-mb=1000'
    - '--virtual-directory=true'
@vibhansa-msft
Wait a minute... I just re-read the whole "Use Azure Blob storage Container Storage Interface (CSI) driver" documentation by Azure. Specifically, given the following paragraph & callout box, does that mean AKS 1.25.5 NO LONGER supports blobfuse but only NFS v3, i.e., you can only use the azureblob-nfs-premium storage class? In other words, your Azure Storage Account MUST have HNS & NFS v3 enabled?
This is what the azureblob-nfs-premium storage class looks like.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-nfs-premium
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: blob.csi.azure.com
parameters:
  protocol: nfs
  skuName: Premium_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
Blobfuse has no restriction related to HNS; you can use it for both FNS and HNS accounts. For your query on "device busy" causing unmount to fail: this generally happens when a console has its working directory set to the mount point, or someone is using the mount path while you try to unmount in parallel. If you exit all your shells and then unmount, it will work fine. Also, there are Linux commands to force unmount as well. For the AKS-related query @andyzhangx can answer.
CC: @andyzhangx
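For the "Linux commands to force unmount" part above, a minimal sketch (the path is the example mount point from this issue):

lsof +D /home/blobfmount          # see which processes still hold the mount point
fusermount3 -uz /home/blobfmount  # lazy unmount via fuse
umount -l /home/blobfmount        # or lazy unmount via umount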
Hm... but when I tried to mount an Azure Storage Account that does not have HNS & does not have NFS v3 enabled using storageClassName: azureblob-nfs-premium
, I got the following error.
E0309 03:21:35.312356 5969 utils.go:80] GRPC error: rpc error: code = Internal desc = volume(rg-backups-prod#stpabackupsprod#backups-prod) mount "stpabackupsprod.blob.core.windows.net:/stpabackupsprod/backups-prod" on "/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/a827afe57ea9da516011d9c85fceef41165a4bda0ace1d794dbb683b93ddd556/globalmount" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o sec=sys,vers=3,nolock stpabackupsprod.blob.core.windows.net:/stpabackupsprod/backups-prod /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/a827afe57ea9da516011d9c85fceef41165a4bda0ace1d794dbb683b93ddd556/globalmount
Output: mount.nfs: mounting stpabackupsprod.blob.core.windows.net:/stpabackupsprod/backups-prod failed, reason given by server: No such file or directory
But if I use the same storageClassName: azureblob-nfs-premium to mount an Azure Storage Account that has both HNS & NFS v3 enabled, it mounts correctly. Also, the problem where randomly killing the pod made the mount disappear went away.
Just a reminder, all of this is under a specific AKS version, AKS 1.25.5. I do not see any of these behaviours in AKS 1.24.9, which is why I suspected that AKS 1.25.5 only supports NFS v3 now, especially since the Azure AKS documentation seems to specifically call it out.
@pa-mc the first error is expected since NFSv3 was not enabled on your account in the beginning. BTW, we will update the AKS doc soon by removing that note; AKS 1.25+ already supports both blobfuse and NFSv3.
btw, for HNS support, you need to set --use-adls=true in the mount options if you bring your own account. See https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/driver-parameters.md: isHnsEnabled: "true" in the storage class parameters makes the driver create an ADLS account in dynamic provisioning, and --use-adls=true must be specified to let blobfuse access an ADLS account in static provisioning.
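For reference, a hedged CLI equivalent of that static-provisioning mount option when mounting an HNS (ADLS Gen2) account directly with blobfuse2 (mount path and container name are placeholders):

blobfuse2 mount /mnt/adls \
  --container-name=mycontainer \
  --use-adls=true \
  -o allow_other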
@andyzhangx thanks for the clarification! In my case, I am using static provisioning instead of dynamic provisioning.
A related question: How do I determine which version of blobfuse AKS is using? Or how do I force AKS
to use a specific version of blobfuse?
Because when I tried to use the - '--virtual-directory=true' mount option on AKS 1.24.9, I got an "option not supported" error, even though after hopping into a csi-blob-node pod like the following, both blobfuse binaries are there. When I use AKS 1.25.5, there is no problem using the - '--virtual-directory=true' mount option.
❯ kubectl exec -it csi-blob-node-lfkx9 -c blob -n kube-system -- /bin/sh
# blobfuse -v
blobfuse 1.4.5
# blobfuse2 -v
blobfuse2 version 2.0.2
@pa-mc AKS 1.25 is on Ubuntu 22.04 and it only supports blobfuse v2; before 1.25, AKS is on Ubuntu 18.04 and both blobfuse v1 & v2 are supported. I would suggest using protocol: fuse2 in the sc or pv config even before AKS 1.25; then your application won't break if you upgrade to 1.25 or a later version, since you are always using blobfuse v2 on any AKS version.
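If it helps, one hedged way to double-check that a statically provisioned PV actually carries that setting after you add protocol: fuse2 to its volumeAttributes (the jsonpath key is an assumption based on the driver-parameters doc linked above):

kubectl get pv pv-stpabackupsprod-blob-container \
  -o jsonpath='{.spec.csi.volumeAttributes.protocol}'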
@andyzhangx @vibhansa-msft Sorry for the long reply below, but I think this will illustrate my specific edge case that causes blobfuse2 to break, please hear me out :)
We have a daily database (OrientDB) backup on an Azure blob store that does not support the NFS v3 protocol, so all I can work with is blobfuse v1 or v2.
I have two clusters, one on AKS 1.25.5 and one on AKS 1.24.9. Both run the exact same k8s Deployment: a statically provisioned PV mapped to the Azure blob store mentioned above, claimed through a PVC and mounted into a k8s Deployment with a node selector and tolerations so it only attaches to a very large node_pool, where it does a daily OrientDB restore and kicks off the subsequent data pipeline for ETL purposes.
Now here is the specific scenario: due to a limitation of OrientDB (long story... I'll spare you the details here...), for each data ETL run we have to kill the pods of the k8s Deployment to clear out the cache. (We are not deleting the whole Deployment, we only delete the underlying Pods, so AKS will recover the Pods itself.)
The following set of comments + csi driver logs will illustrate why I think perhaps there is a little bug in blobfuse2. Again, I could be totally wrong, but here is how I came to that logical conclusion.
- AKS 1.25.5 + protocol: fuse2
- AKS 1.24.9 + protocol: fuse2
- AKS 1.24.9 + protocol: fuse1 w/o the --virtual-directory mounting option
TL;DR: First of all, why w/o --virtual-directory? Because fuse1 does not support that mounting option, so to use fuse1 I have to leave it out. But this combination of AKS 1.24.9 + protocol: fuse1 w/o --virtual-directory does not have the issue: even after repeatedly deleting the pods and having AKS restart them numerous times, I can still see all my mounted Azure Blob files.
I'm sorry about this super long post, but this is the best way I could think of to asynchronously communicate this issue to you guys. :)
P.S.1: According to this repo, it seems that blobfuse2 (2.0.2) is not in the compatibility table just yet?
❯ kubectl exec -it csi-blob-node-t8dt8 -c blob -n kube-system -- /bin/sh
# blobfuse2 -v
blobfuse2 version 2.0.2
# blobfuse -v
blobfuse 1.4.5
P.S.2: Here is how I'm coping with my edge case right now, which is to use a combination of AKS 1.25.5
+ NFSv3, assuming your Azure blob store has HNS & NFSv3 enabled.
@pa-mc thanks for sharing. So with blobfuse2 on AKS 1.24 or 1.25, there is device or resource busy during the first unmount attempt; does it break your scenario in the end? From the logs, I can see the unmount always succeeded on the second attempt.
If some operation is going on or there is a shell connected to that path, unmount is expected to fail. Before you kill the pod you should do a graceful unmount, otherwise there is a chance of data loss as well. If blobfuse was in the middle of a transfer and you just killed the pod or container, the transfer will be terminated, and the next mount will lose the locally cached data as well.
@vibhansa-msft one point is that with v1 there is no such device or resource busy error during the first unmount, while with v2 there is. Do you know whether there is any difference in the unmount process between v1 and v2?
There is no difference in the unmount process as such; unmount is triggered from the kernel and we just get a call from the system to shut down the binary. Ideally I would expect both to fail if some operations are going on. Quite possibly v1 was just not reporting the problem while v2 does.
Yes. To answer your earlier question: for the following two combinations, on the first pod deletion you will see the device or resource busy error. On subsequent deletions of the pod you won't see that error message anymore; however, none of the blob files show up either.
- AKS 1.25.5 + protocol: fuse2
- AKS 1.24.9 + protocol: fuse2
Also, I don't quite understand why, on a restart of a pod, CSI does a GRPC call: /csi.v1.Node/NodeUnpublishVolume right after a GRPC call: /csi.v1.Node/NodePublishVolume. I get that when the pod restarts successfully it should do a GRPC call: /csi.v1.Node/NodePublishVolume, but why would it automatically do a GRPC call: /csi.v1.Node/NodeUnpublishVolume even though the pod is healthy?
I'm pretty sure there was nothing connected to that path while I did the experiments except the pod that has the path mounted into it.
In this case I did deliberately restart the Pod, but in practice I have no control over when or whether the Pod restarts. The graceful unmount should be handled by the CSI driver or the k8s Pod API, right?
I'm not sure if this is asking too much, but it would be great if we could get on a Zoom / Teams call so that I can screenshare with you guys, much better than reading my super long experiments :P
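As a side note, one hedged way to watch those CSI calls live on the node while deleting the pod (the pod, container, and namespace names are taken from the csi-blob-node pod shown earlier in this thread):

kubectl logs -f csi-blob-node-lfkx9 -c blob -n kube-system \
  | grep -E 'NodePublishVolume|NodeUnpublishVolume'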
@pa-mc there are two pods involved in the second delete:
- pod with id 42c84958-666e-467c-b86e-a7fcd70e3a26 is created, so you get:
I0309 15:27:44.061632 7519 utils.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0309 15:27:44.061653 7519 utils.go:76] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/a827afe57ea9da516011d9c85fceef41165a4bda0ace1d794dbb683b93ddd556/globalmount","target_path":"/var/lib/kubelet/pods/42c84958-666e-467c-b86e-a7fcd70e3a26/volumes/kubernetes.io~csi/pv-stpabackupsprod-blob-container/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["-o allow_other","--file-cache-timeout-in-seconds=120","--use-attr-cache=true","-o attr_timeout=120","-o entry_timeout=120","-o
- pod with id 148f2d31-08ed-4af0-85b9-880ecf31d661 is being deleted, so you get:
I0309 15:27:45.268586 7519 utils.go:75] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0309 15:27:45.268602 7519 utils.go:76] GRPC request: {"target_path":"/var/lib/kubelet/pods/148f2d31-08ed-4af0-85b9-880ecf31d661/volumes/kubernetes.io~csi/pv-stpabackupsprod-blob-container/mount","volume_id":"rg-backups-prod#stpabackupsprod#backups-prod"}
that's expected.
Right, that makes sense, because killing a Pod removes the old Pod and creates a new Pod under the same Deployment. I guess I just thought the /NodeUnpublishVolume would come first, since the deletion of the old Pod happens first. However, that's just a side note; the key mystery for me is why this behaviour doesn't happen for AKS 1.24.9 + protocol: fuse1 + w/o --virtual-directory. Because with that combination, no matter how many times I restart the pod, whether intentionally or, as can happen in practice, because the Pod crashed due to OOM or some other reason, there is no resource busy error and the mounted files still appear in the Pod.
Hi @pa-mc, what's the business impact of this behavior change? My assumption is that on AKS 1.25 it would create a new replica first and then delete the old replica; that's the reason why NodePublishVolume comes first and NodeUnpublishVolume comes next. It could be related to the RollingUpdateStrategy in the k8s Deployment.
Hi @andyzhangx, yeah, right now it's not a huge deal just yet, because our production AKS cluster is still on 1.24.9 and still uses blobfuse v1 w/o the --virtual-directory mounting option. For the time being, on our development AKS cluster (1.25.5) I've set up a nightly azcopy sync to copy our database backups from the non-NFS-enabled Azure Blob Store to an NFSv3-enabled Azure Blob Store, and started using the protocol: nfsv3 storage class. At the moment, I don't see any mounting issues even after deliberately deleting pods numerous times.
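That nightly sync is roughly the following sketch (account names, container, and SAS tokens are placeholders, not taken from this thread):

azcopy sync \
  "https://<non-nfs-account>.blob.core.windows.net/<container>?<SAS>" \
  "https://<nfs-enabled-account>.blob.core.windows.net/<container>?<SAS>" \
  --recursive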
However, this is obviously quite kludgy, and I'd love to get to the bottom of this issue regardless of whether things are "working" or not ;)
I've created an Azure Support ticket (Support request ID 2303070040008245) in the meantime and actually pasted this GitHub issue in there as a reference. It's probably not fair to ask you guys to spend a lot of time on this one, but if you want to hop on that support call once it's scheduled, let me know :)
P.S.: Someone from Azure support reached out, and we will do a Teams screenshare to show them exactly how I produced that error. I asked them to record it if they can (I don't mind). If they do record it, I will share it with you guys.
@pa-mc per our testing pipeline results, as long as you are using blobfuse v2 (no matter whether it's AKS 1.24 or 1.25), you get the mount: device or resource busy unmount failure, and the second unmount always succeeds after around 0.6s.
- mount: device or resource busy unmount failure using blobfuse v2: https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_blob-csi-driver/838/pull-blob-csi-driver-external-e2e-blobfuse-v2/1634186789612687360/build-log.txt
- no mount: device or resource busy unmount failure using blobfuse v1: https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_blob-csi-driver/838/pull-blob-csi-driver-external-e2e-blobfuse/1634186788778020864/build-log.txt
So my assumption is that there could be an implementation difference between v1 and v2, and it looks like there is no side effect since the second unmount always succeeds after around 0.6s.
Hi @andyzhangx, sorry for the late reply, my wife & son got pretty sick and I'm the last man standing :P Thank you so much for running those tests; I will find some time this week to dig into it. I think my experimentation also showed that the unmount is successful after the second restart of a Pod. However, it's the mount that has a problem: after the second restart of a Pod, even though it said /NodePublishVolume was successful, the mount was actually not working, since there are no files in the mounted directory.
Regardless, I will keep you updated on my findings and perhaps do a screen recording and share it here.
@pa-mc if the second restart makes the pod broken, can you ssh into the pod and run the df -h command to check whether there is a blobfuse mount inside that pod? e.g.
kubectl exec -it nginx-blob -- df -h
Filesystem      Size  Used Avail Use% Mounted on
...
blobfuse         14G   41M   13G   1% /mnt/blob
...
Hi @andyzhangx, I tried, and yup, as you anticipated, after the second restart there is no blobfuse FS anymore.
P.S.: I think you know 100% what my exact scenario is, but I will try to find some time today to do a screen recording and share it anyway. Also, I really appreciate all the help!
I have reproed the above issue: the second pod deletion makes the blobfuse mount broken.
I found where the problem is. The blob CSI driver mounts blob storage to a globalmount dir first and then bind mounts it during NodePublishVolume for every pod running on the node, so the pods can share the globalmount. When a pod is terminated, the unmount happens on the bind mount and the globalmount should stay there. There is no problem in blobfuse v1, but in blobfuse v2 an unmount on the bind mount also unmounts the original globalmount dir. That's the problem.
@vibhansa-msft could you help check what the behavior difference is in v2 when unmount happens on a bind mount?
In the example below, there are one blobfuse v1 globalmount, one v1 bind mount, one blobfuse v2 globalmount, and one v2 bind mount:
# mount | grep blobfuse | sort | uniq
blobfuse on /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/5f1241264f64ebdab71cf539cabb041e8715b1f6cd8b146fdff0ada5fc974eab/globalmount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
blobfuse on /var/lib/kubelet/pods/cfa45d56-6862-4d65-9819-52e71ba45e62/volumes/kubernetes.io~csi/pvc-461464a9-8310-4cde-a3be-f9d4ce875419/mount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
blobfuse2 on /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/1f778bb6414796cfefe77ad67ee6e5f21be127ed1adf86affb766488a15ebee4/globalmount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
blobfuse2 on /var/lib/kubelet/pods/d5a00f29-4c5e-4e44-b54b-86690beaff6b/volumes/kubernetes.io~csi/pvc-32e09edb-ac53-4240-976a-6e810e270018/mount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
# unmount blobfuse v2 bind mount
root@aks-agentpool-20541019-vmss000000:/# umount /var/lib/kubelet/pods/d5a00f29-4c5e-4e44-b54b-86690beaff6b/volumes/kubernetes.io~csi/pvc-32e09edb-ac53-4240-976a-6e810e270018/mount
# blobfuse v2 global mount is also gone
root@aks-agentpool-20541019-vmss000000:/# mount | grep blobfuse | sort | uniq
blobfuse on /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/5f1241264f64ebdab71cf539cabb041e8715b1f6cd8b146fdff0ada5fc974eab/globalmount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
blobfuse on /var/lib/kubelet/pods/cfa45d56-6862-4d65-9819-52e71ba45e62/volumes/kubernetes.io~csi/pvc-461464a9-8310-4cde-a3be-f9d4ce875419/mount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
# unmount blobfuse v1 bind mount
root@aks-agentpool-20541019-vmss000000:/# umount /var/lib/kubelet/pods/cfa45d56-6862-4d65-9819-52e71ba45e62/volumes/kubernetes.io~csi/pvc-461464a9-8310-4cde-a3be-f9d4ce875419/mount
# blobfuse v1 global mount is still there
root@aks-agentpool-20541019-vmss000000:/# mount | grep blobfuse | sort | uniq
blobfuse on /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/5f1241264f64ebdab71cf539cabb041e8715b1f6cd8b146fdff0ada5fc974eab/globalmount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
🐐
@andyzhangx: Sorry, I did not understand it fully, but what I observe here is that there are two types of mount: a global mount and a bind mount. I assume these two mounts are on two different directories. Unmount is generally a system-level operation; it just tells the file-system driver to shut down gracefully. From the system's point of view, the user does an unmount with a path and a notification is sent to the file-system driver owning that path. If your global mount and bind mount represent two different mount points, then unmounting one should not wipe out the other. Can you share what command you use to unmount in the case of v1 and v2?
@vibhansa-msft in the beginning we have a blobfuse2 globalmount, and /var/lib/kubelet/pods/d793004b-9ef5-46d6-ac66-95bebc3b4fc0/... is a bind mount of the globalmount:
# mount | grep blobfuse | sort | uniq
blobfuse2 on /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/1f778bb6414796cfefe77ad67ee6e5f21be127ed1adf86affb766488a15ebee4/globalmount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
blobfuse2 on /var/lib/kubelet/pods/d793004b-9ef5-46d6-ac66-95bebc3b4fc0/volumes/kubernetes.io~csi/pvc-32e09edb-ac53-4240-976a-6e810e270018/mount type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
Then I ran umount /var/lib/kubelet/pods/d793004b-9ef5-46d6-ac66-95bebc3b4fc0/volumes/kubernetes.io~csi/pvc-32e09edb-ac53-4240-976a-6e810e270018/mount, and after a while both of the above mounts were gone; mount | grep blobfuse now returns empty.
This issue only happens with blobfuse v2, and it does not happen with blobfuse v1: umount on one bind mount dir now also unmounts the source blobfuse v2 mount dir.
@vibhansa-msft could you check whether there is implementation difference for unmount in blobfuse v2 or is this a fuse2 driver issue itself?
It can be reproed on Ubuntu 18.04.6 LTS 5.4.0-1101-azure #107~18.04.1-Ubuntu. I suspect it's related to these two flags: --pre-mount-validate=true --ignore-open-flags=true. If I remove these two flags, then unmounting the bind mount dir no longer unmounts the source blobfuse v2 mount dir automatically.
export AZURE_STORAGE_BLOB_ENDPOINT=andygoofys.blob.core.windows.net
export AZURE_STORAGE_ACCOUNT=andygoofys
export AZURE_STORAGE_ACCESS_KEY=
mkdir /tmp/blobfusev2
blobfuse2 mount /tmp/blobfusev2 --container-name=blobfuse --tmp-path=/tmp/blobfusev2-temp/ -o allow_other --pre-mount-validate=true --ignore-open-flags=true
mkdir /tmp/blobfusev2-bind
mount --bind /tmp/blobfusev2/ /tmp/blobfusev2-bind
mount | grep blobfuse2
umount /tmp/blobfusev2-bind
There is no unmount implementation as such; unmount is a standard Linux command which the user runs with a path, and the blobfuse binary just gets a signal to shut down. Unmount is not something that blobfuse can trigger. I am not sure why we are seeing a difference here between v1 and v2, but ideally unmount is something the user controls. "--pre-mount-validate=true" works only in v1; v2 just ignores that option as there is no need for it there. "--ignore-open-flags=true" is a v2-specific option for fuse3 and it only controls whether, when a user opens a file, the mode given by the user is overridden or not (it has some perf impact). Neither of these is related to unmount in any way.
"mount --bind /tmp/blobfusev2/ /tmp/blobfusev2-bind" is something we generally do not try in our use case; I need to investigate how it's expected to work.
@vibhansa-msft one strange thing is that after I run umount /tmp/blobfusev2-bind, I get the following result:
mount | grep blobfuse
/dev/sda1 on /tmp/blobfusev2-bind type ext4 (rw,relatime,discard)
so after /tmp/blobfusev2-bind is unmounted, it gets bound to /dev/sda1.
$ ./blobfuse2 mount /usr/blob_mnt -o allow_other --pre-mount-validate=true --ignore-open-flags=true --config-file=./config.yaml
$ sudo mount --bind /usr/blob_mnt /usr/tmp_mnt/
$ mount | grep blobfuse
/home/vikas/go/src/azure-storage-fuse/blobfuse2 on /usr/blob_mnt type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
/home/vikas/go/src/azure-storage-fuse/blobfuse2 on /usr/tmp_mnt type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
$ sudo umount /usr/tmp_mnt
$ mount | grep blobfuse
/home/vikas/go/src/azure-storage-fuse/blobfuse2 on /usr/blob_mnt type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
I am not observing this issue in my Ubn-18 running with blobfuse2.
Even if I unmount /usr/blob_mnt itself, my binding is still available, and I can access my storage account through the alias. Have tried without --pre-mount-validate and allow_other options as well, still holds good for me.
@vibhansa-msft @andyzhangx Then would you guys think there are some fundamental differences in terms of mount & unmount between Ubuntu 18 and Ubuntu 22?
AKS 1.25.5 uses Ubuntu 22.04.1 LTS:
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
AKS 1.24.9 uses Ubuntu 18.04.6 LTS (Bionic Beaver):
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
@pa-mc I could also repro this issue in Ubuntu 18.04 with blobfuse v2
Steps to reproduce consistently.
blobfuse2 mount /tmp/blobfusev2 --container-name=blobfuse --tmp-path=/tmp/blobfusev2-temp/ --ignore-open-flags=true --pre-mount-validate=true -o allow_other && mount --bind /tmp/blobfusev2/ /tmp/blobfusev2-bind
sleep 1
mount | grep blobfuse2
sleep 1
umount /tmp/blobfusev2-bind
mount | grep blobfuse2
When the blobfuse2 mount command is executed, a process is started which validates the basic config and then starts another daemon process (a child process) which actually does the mount and serves the file-system calls. When the parent process starts the child process, on successful creation the parent exits with a success status. The child process is scheduled later and it mounts the file system. In the above scenario, when "mount --bind" is executed immediately after the mount command, it may happen that the child process has not yet mounted the file system when "mount --bind" runs. In that case it binds the directory as an alias; later the child process mounts the container to that directory and the alias goes into an inconsistent state. At this stage, if the user tries to unmount the aliased directory, the actual directory also gets unmounted because the blobfuse process goes down when the file system is unmounted.
This issue was not in v1 because the way child processes are forked and handled is different in v1 and v2. V1, being C++ code, uses fork() to start the child process, and the child mounts immediately after the fork, before the parent exits. In Go, the daemonize process restarts the binary, which again has to go through config parsing and the other steps before it can hit the libfuse library to mount.
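A minimal sketch of how a caller could sidestep that race, assuming it controls the mount sequence (paths reuse the repro above): wait until the daemonized child has actually mounted the file system before creating the bind mount.

blobfuse2 mount /tmp/blobfusev2 --container-name=blobfuse --tmp-path=/tmp/blobfusev2-temp/ -o allow_other
# poll until the fuse file system is really mounted before bind-mounting it
until mountpoint -q /tmp/blobfusev2; do sleep 0.2; done
mount --bind /tmp/blobfusev2/ /tmp/blobfusev2-bind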
Potential ways out here are as follows:
Update: we are going to fix this issue in blob CSI driver v1.19.2 and v1.20.1, and then roll out the fixed version on AKS. If you have any AKS cluster hitting this issue, please email me the AKS cluster api-server address (you can find my email address in my GitHub account); I can help mitigate this issue immediately so you don't have to wait a few days for the AKS release to roll out in your region.
@andyzhangx @vibhansa-msft Two 🐐 s 😄
So @andyzhangx it's alright, I'm not in a hurry right now, I can wait until the AKS release happens. I've changed my AKS 1.25.5 cluster to use NFS mounts rather than blobfuse2 mounts. Speaking of which, for the AKS release, is there a channel (Slack or other) for me to monitor when the fix is merged into the necessary AKS version? Also, what do I need to do to make sure the change propagates to existing AKS clusters? Perhaps do an az aks update or something like that?
I just filed a PR to solve the problem from blobfuse2 side, please take a look. https://github.com/Azure/azure-storage-fuse/pull/1088 @vibhansa-msft @pa-mc @andyzhangx
@pa-mc please check https://github.com/Azure/AKS/releases; the fix will be in the 0319 release, and the https://releases.aks.azure.com/ site shows the release rollout progress for every region. This CSI driver version upgrade happens in the backend, so you don't need to do anything. And yes, NFS mount is safe now on AKS 1.25.
Hi @vibhansa-msft, I successfully mounted the blob in a docker container now. Please kindly tell me what I should do to mount the same blob in an AKS container?
Thank you very much.
Fixed with #1088
Thanks for all the help @vibhansa-msft @andyzhangx @cvvz !!
Great !!!
Thank you for all your help. Our AKS can work with blobfuse2 now!
Hi. Our team is experiencing the same issue with blobfuse2 on AKS 1.24.10. We have enabled the managed blob-csi driver and its version is v1.19.2. The following error is produced during unmount:
mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/pods/ab292926-4f67-43cb-a848-7ed8feeaa482/volumes/kubernetes.io~csi/bundle-reports-volume/mount"
utils.go:80] GRPC error: rpc error: code = Internal desc = failed to unmount target "/var/lib/kubelet/pods/ab292926-4f67-43cb-a848-7ed8feeaa482/volumes/kubernetes.io~csi/bundle-reports-volume/mount": remove /var/lib/kubelet/pods/ab292926-4f67-43cb-a848-7ed8feeaa482/volumes/kubernetes.io~csi/bundle-reports-volume/mount: device or resource busy
utils.go:75] GRPC call: /csi.v1.Node/NodeUnpublishVolume
From the discussion above I understood that the fix landed in v1.19.2 and was deployed to managed blob-csi drivers with the 0319 release (which is deployed to our location, West Europe). Is this expected behaviour?
Which version of blobfuse was used?
blobfuse2 version 2.0.2
Which OS distribution and version are you using?
AKS kernel version 1.23.12 Docker: FROM python:3.10.4
root@mydocker:/home# cat /proc/version
Linux version 5.4.0-1091-azure (buildd@lcy02-amd64-023) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #96~18.04.1-Ubuntu SMP Tue Aug 30 19:15:32 UTC 2022
root@mydocker:/home# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
If relevant, please share your mount command.
install blobfuse2
wget https://packages.microsoft.com/config/debian/11/packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install libfuse3-dev fuse3 -y
apt-get install blobfuse2
mount command
blobfuse2 mount /home/blobfmount --config-file=/home/blobfuse2/config.yaml --container-name=mycontainer --log-level=log_debug --log-file-path=./bobfuse2b.log
here is config.yaml below:
# Refer ./setup/baseConfig.yaml for full set of config parameters
allow-other: true

logging:
  type: syslog
  level: log_debug

components:

libfuse:
  attribute-expiration-sec: 120
  entry-expiration-sec: 120
  negative-entry-expiration-sec: 240

file_cache:
  path: /home/blobfuse/tempcache
  timeout-sec: 120
  max-size-mb: 4096

attr_cache:
  timeout-sec: 7200

azstorage:
  type: block
  account-name:
  account-key:
  endpoint:
  container:
What was the issue encountered?
After I ran the blobfuse2 mount command, I cannot see the files of the destination blob container through the mount, nor create a test file and see it appear in the blob container.
Have you found a mitigation/solution?
Not yet.
Please share logs if available.
root@mydocker:/home/blobfmount# cat bobfuse2b.log
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_CRIT [mount.go (384)]: Starting Blobfuse2 Mount : 2.0.2 on [Debian GNU/Linux 11 (bullseye)]
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_CRIT [mount.go (385)]: Logging level set to : LOG_DEBUG
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [libfuse.go (220)]: Libfuse::Configure : libfuse
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [libfuse.go (260)]: Libfuse::Configure : read-only false, allow-other true, default-perm 511, entry-timeout 120, attr-time 120, negative-timeout 240, ignore-open-flags: true, nonempty false
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [file_cache.go (197)]: FileCache::Configure : file_cache
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [file_cache.go (272)]: FileCache::Configure : Using default eviction policy
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [file_cache.go (291)]: FileCache::Configure : create-empty false, cache-timeout 120, tmp-path /home/blobfuse/tempcache, max-size-mb 4096, high-mark 80, low-mark 60
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [attr_cache.go (121)]: AttrCache::Configure : attr_cache
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [attr_cache.go (145)]: AttrCache::Configure : cache-timeout 7200, symlink false, cache-on-list true
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [azstorage.go (83)]: AzStorage::Configure : azstorage
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [config.go (270)]: ParseAndValidateConfig : Parsing config
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [config.go (372)]: ParseAndValidateConfig : using the following proxy address from the config file:
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [config.go (376)]: ParseAndValidateConfig : sdk logging from the config file: false
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [config.go (479)]: ParseAndReadDynamicConfig : Reparsing config
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_DEBUG [config.go (383)]: ParseAndValidateConfig : Getting auth type
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [config.go (467)]: ParseAndValidateConfig : Account: mystorageaccount, Container: mycontainer, AccountType: BLOCK, Auth: KEY, Prefix: , Endpoint: https://mystorageaccount.blob.core.chinacloudapi.cn/, ListBlock: 0, MD5 : false false, Virtual Directory: true
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [config.go (471)]: ParseAndValidateConfig : Retry Config: Retry count 5, Max Timeout 900, BackOff Time 4, Max Delay 60
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [block_blob.go (173)]: BlockBlob::SetupPipeline : Setting up
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [block_blob.go (135)]: BlockBlob::getCredential : Getting credential
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_DEBUG [azauth.go (79)]: azAuth::getAzAuth : Account: mystorageaccount, AccountType: BLOCK, Protocol: https, Endpoint: https://mystorageaccount.blob.core.chinacloudapi.cn/
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [block_blob.go (260)]: BlockBlob::SetPrefixPath : path
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_TRACE [block_blob.go (209)]: BlockBlob::TestPipeline : Validating
Wed Mar 8 06:31:11 UTC 2023 : blobfuse2[607] : LOG_INFO [mount.go (392)]: mount: Mounting blobfuse2 on /home/blobfmount
root@mydocker:/home/blobfmount# cd ..
I tried to do an unmount command and it returned this:
root@mydocker:/home# blobfuse2 unmount all
Error: failed to list mount points [open /etc/mtab: no such file or directory]
root@mydocker:/home# blobfuse2 mount /home/blobfmount --config-file=/home/blobfuse2/config.yaml --container-name=mycontainer --log-level=log_debug --log-file-path=./bobfuse2b.log
Error: mount directory is not empty
root@mydocker:/home# rm -f ./blobfuse2/*
rm: cannot remove './blobfuse2/config.yaml': Device or resource busy
root@mydocker:/home#