winfsp / cgofuse

Cross-platform FUSE library for Go - Works on Windows, macOS, Linux, FreeBSD, NetBSD, OpenBSD
https://winfsp.dev
MIT License

Issue mounting on fresh-boot/driver-install when called concurrently #51

Closed by djdv 3 years ago

djdv commented 3 years ago

I'm trying to host several different file systems within a single program. To do this, I construct a FileSystemHost for each file system implementation, and then concurrently call their Mount method.

Doing so on a fresh boot of the OS, or after installing the driver for the first time, causes cgofuse to log an error to the console (Cannot create WinFSP-FUSE file system: FSD not found.) for all but one of the Mount calls.

This does not happen on subsequent runs. When run again, each call to Mount works as expected under the same conditions.

Demonstration: https://www.youtube.com/watch?v=V0Z_abvSPNs

Modifying memfs's main function gives a simple reproduction:

func main() {
	// Assumes memfs's existing imports plus: os, path/filepath, strconv.
	memfs1, memfs2, memfs3 := NewMemfs(), NewMemfs(), NewMemfs()
	host1, host2, host3 := fuse.NewFileSystemHost(memfs1), fuse.NewFileSystemHost(memfs2), fuse.NewFileSystemHost(memfs3)

	hostPath := `C:\whatever`
	os.Mkdir(hostPath, 0777)

	for i, hst := range []*fuse.FileSystemHost{host1, host2, host3} {
		go func(index int, host *fuse.FileSystemHost) {
			host.SetCapReaddirPlus(true)
			hostTarget := filepath.Join(hostPath, strconv.Itoa(index))

			fmt.Println("Mounting: ", hostTarget)
			host.Mount(hostTarget, nil) // blocks until the file system is unmounted
		}(i, hst)
	}

	select {} // block forever so the mounts stay up
}

I've experienced this on Go versions 1.14 - 1.15 with WinFSP versions 2020 - 2020.2, and I assume it's not intended behavior. Although I'm wondering whether the client program is expected to guard against such a thing (i.e. prevent concurrent calls to Mount, even across different FileSystemHost instances).

If I can help provide more context for this, let me know.

billziss-gh commented 3 years ago

This does sound like a problem in cgofuse and/or WinFsp-FUSE.

I am very busy right now, but I am hoping to have time to look into it some time in the future.

billziss-gh commented 3 years ago

I looked into this a bit more and it appears to be a problem within WinFsp.

The issue is that when the file system driver is not running, the WinFsp DLL (i.e. the user mode portion of WinFsp) will attempt to start the driver. This is not currently done in a thread-safe manner as I never anticipated such a need.

It should be relatively easy to make this thread-safe by wrapping it in a lock. I will add an issue in WinFsp to track this.

billziss-gh commented 3 years ago

I have added WinFsp commit billziss-gh/winfsp@2d5d058d2f10de78d759aba4512d195736a5606d that should fix this issue.

djdv commented 3 years ago

Thanks for that! Whether I test out the fix early via a local build or wait until the next beta release, I'll post back about it.

This is not currently done in a thread-safe manner as I never anticipated such a need.

Makes sense. I figured launching like this wasn't typical.

If you're interested, I'm making a mount-instance / manager-interface thing that takes in a series of requests and issues them via goroutines, with one routine per pair of APIs.

Example: fs manager thing

Edit: the image was cut off; this is my console input:

>.\bind.exe "/fuse/ipfs/path/mnt/routine-1" "/fuse/ipfs/path/mnt/still-routine-1" "/9p/ipfs/ip4/127.0.0.1/tcp/564/path/mnt/routine-2" "/fuse/ipns/path/routine-3"
>.\bind.exe "/fuse/ipfs/path/mnt/a-whole-new-process"
>.\bind.exe --list
...

There's no particular reason for it to be that way; it's just how the request pipeline ended up. The entire request is split into sections, and each section basically gets parsed and sent to its API handler (via a channel) as soon as it's ready. Likewise, the responses come back as early as they can, in no particular order.

(Right now those test binaries don't actually do anything; they just respond as if they did. I'll be adapting the cgofuse stuff I have to it soon. Currently thinking about how best to split the relevant packages up; some of it might be useful outside of my own needs.)

djdv commented 3 years ago

Hey there, I missed the beta release tag but just saw the latest 2021 release and wanted to say that the modified memfs example in the first comment seems to work just fine now. 🎉 Thanks again for that!