djdv closed 2 years ago
Status update:
The current draft worked well enough for me to implement the mount
command on top of it, but has also changed considerably in the process.
Those changes need to be pulled back into this branch and refined into something more suitable for review/release.
The mount branch could then be split up into its components and moved into PRs as well.
I'll try to detail the changes better later, but some things are:
- A `-server=/some/server/maddr` flag. Meaning, from the user's perspective, they just invoke the command they want without needing to think about the background process at all. (It will shut down when/if idle.)
- Previously, IPC was handled via `go-ipfs-cmds`, a custom stdio protocol, and HTTP; now it's mainly the standard library with 9P for both stdio and all other types of connections. (We only use stdio when launching the server, to pass back listening addresses to the client before detaching the process into the background, but everything uses the same protocol/code now.)
- The `mount` branch will bring it with it.
- I reworked the `closer` file logic and came up with a different approach to handling listeners. It's still managed and exposed via a 9P file system, but all those implementations were refined into what I hope is a more sensible hierarchy within the code / type system.
- Listener management lives under `/listeners` and uses the same code. Something like `list` is thus implemented almost implicitly through a simple `readdir`, not a more bespoke implementation as was the case before.

I'm glad you got back to working on this 9p stuff.
How can I help?
Are you going to have a demo ready for IPFS Camp? ;-)
One thing I'm tempted to do is try to build a 9p VFS for Qt, in the hope that this will actually get integrated. (So you could perhaps browse and load files from it via an ordinary file dialog, without necessarily needing to mount it first.) I could do that with the http client interface, but 9p is going to end up better and faster, right?
Is it realistic that we can have the 9p file server as a feature in kubo? I see that the fuse mounts are still missing so many features. (Listing pins, listing or catting CIDs, writing anything... and if ipns is mounted, then `ipfs name publish` doesn't even work!) IPFS is still not a filesystem at all. I hope this will get us there.
> I'm glad you got back to working on this 9p stuff.
:^] Glad you're still interested in it.
> How can I help?
Good question. At the moment, I can't think of much, but soon I'm going to try to get the CI to start publishing a pre-release branch. Then anyone can try it without waiting for our refactors/reviews. And from there, we can probably start charting issues and talking about features + extensions.
> Are you going to have a demo ready for IPFS Camp? ;-)
No demos planned, but I might make a screencast that shows off how this 9P API server works in its current state. As an aside, I was told previously that it'd be "inappropriate" for me to attend PL events. :^/
> One thing I'm tempted to do is try to build a 9p VFS for Qt, in the hope that this will actually get integrated. (So you could perhaps browse and load files from it via an ordinary file dialog, without necessarily needing to mount it first.)
That sounds great. I've had this on my mind recently as well. The end goal is to have Go file systems treated as first-class file systems on various operating systems, but yeah - nothing precludes you from mounting a file system in-process like that, and it's very useful.
Exposing 9P2000.L through Qt would be cool. I already utilize 9P to do the bulk of the work for the CLI utilities, where they just send a few 9P client messages, and I can imagine the same would be useful in a GUI context. As an aside, Lua+IUP has come up during some past discussions already, but we haven't needed to do anything graphical yet, so we never prototyped it.
Something I'm going to need to implement later is a Windows shell extension for the context menu, which compares the path it receives against the (virtual) mountpoint files hosted on the 9P server.
This is going to allow the context menu to expose items like "copy as ipfs://" or whatever makes sense, if we're right-clicking on a file within an IPFS mountpoint.
Likewise, you'd be able to do the inverse and handle `ipfs://`, by asking the server for the mountpoint and calling a file explorer with formatted arguments.
While this uses their APIs, you can imagine doing similar things in other toolkits/contexts.
> Is it realistic that we can have the 9p file server as a feature in kubo? I see that the fuse mounts are still missing so many features. (listing pins, listing or catting CIDs, writing anything... and if ipns is mounted, then `ipfs name publish` doesn't even work!) IPFS is still not a filesystem at all. I hope this will get us there.
Most likely not. PL's response was something like "this feature adds no value to the project".
The lack of development on their FUSE implementation, and on relevant technologies like UFS2 (and even 1.5), imo adds to this sentiment of disinterest. I assume they're more interested in seeing people use the IPFS APIs and make new things on top of them, rather than retrofitting it to existing APIs + tools. But I guess it depends on who you ask, and when.
If you want those bridges, you have to build them yourself, separate from an IPFS implementation: run IPFS (kubo or other) + third-party extensions that stand alone but utilize the API.
I think the prototypes that have been published already show that it's possible for us to get there.
We should chat on IRC or Telegram or whatever you use. I've always been a bit frustrated with what's missing in IPFS; sounds like you are too. Maybe forking and overtaking them is the answer?

I just can't understand PL's approach: is every application to be a one-off that has its own daemon linked in? Or do they really only care about how the web will use it, not so much about desktop operating systems? The way I see it, ipfs should be a system daemon, like sshd, so its client interface needs to satisfy all the use cases. It has to manage storage, it does a lot of communication, and so there's no justification to run more than one instance on your system.

I was at IPFS Camp last time and never really got a straight answer about that. We discussed nitty-gritty details about the future UnixFS implementation (for example) rather than the part about how to fulfill the interface requirement implied by the "FS" in the name. And AFAIK not much got done since then. I don't know, if I go again, whether to expect finally some sort of enlightenment, or whether I'm just beating a dead horse.
I'm not sure whether I can really get support for other URL schemes into a file dialog; there is this warning: "Non-native QFileDialog supports only local files". But we at Qt (and David Faure from KDE) have done some work to use QUrl everywhere that a path was used before, and we have QFileDialog::setSupportedSchemes() for some reason - as if we were going to support non-file URLs. Of course I could implement a new dialog; maybe I'll have to do that for now. At least stuff like QDir and QFileInfo and QIODevice work with non-filesystem data sources.

To make a new virtual filesystem, you have a choice:
- Implement a KIO slave. (I did that already in 2019, using the IPFS HTTP API.) But that won't work with Qt 6 yet: KDE has been too slow getting their stuff ported. And KIO's approach to asynchronous API is unconventional: separate processes are started. (Which, I was thinking the other day, is actually quite conventional on Plan 9, but my instinct is to think it's gross on Linux. Maybe I'm wrong; I should just get used to it.)
- Implement a QAbstractFileEngine subclass. That's private API, but it didn't change for a long time either, so I guess I'll try that. But we know we need to replace it eventually. See also https://bugreports.qt.io/browse/QTBUG-103246 : interesting discussion there.
- Or maybe implementing a GVfs is an alternative... just a very unfamiliar one for me. At least Qt can use the GTK file dialog, optionally.
I agree that conceptually it's best to mount the FS somehow. That's what I want. But there are many issues:
I don't know if you care about Plan 9 or not. Anyway 9p could be a good way to mount filesystems across OSes. I believe it must work well, but in practice I've been messing around and haven't gotten a really smooth experience with it yet. (One experiment is with http://wiki.call-cc.org/eggref/4/9p which implements only core 9p, not the Linux extensions. So I have to use 9pfs, not the kernel client.) Will keep trying. But on the other hand: it so happens that Go runs on Plan 9 (as long as you aren't running a 64-bit ARM, anyway). So in theory, we should be able to get a mountable IPFS working there. Wouldn't that be sweet?
@ec1oud Sorry for the delay, I wanted to give this actual attention. As usual, I wrote a lot though (;´∀`)
> We should chat on IRC or Telegram or whatever you use.
For sure :^) At the moment I'm mainly using matrix (@ddvpublic:matrix.org - with my nickname, J). But I have no preference so I can chat over whatever if you email me the contact info.
> I've always been a bit frustrated with what's missing in IPFS, sounds like you are too.
Yeah, I only downloaded it in the first place because it claimed to have Windows and mount support. At the time, it had neither, so I fixed the former and am currently working on the latter. (In-between I did a lot of other things too.)
> Maybe forking and overtaking them is the answer?
Hard to say. I'd rather collaborate than compete. But at the same time, it seems to me that PL has priorities that do not align with several segments of its userbase. Regardless, I'm already sacrificing to make progress, and will continue to do so until users get what they want. Whether that progress goes to their repos, others, or both doesn't matter much to me. The tools' completion is what's most important, for the benefit it brings to people. Whatever helps realize this goal, I'm with it.
> I just can't understand PL's approach: every application is to be a one-off that has its own daemon linked in? Or they really only care about how the web will use it, not so much about desktop operating systems? ... The way I see it, ipfs should be a system daemon, like sshd
From what I gather, it seems like they want this too. And that's how we use it here. You install an IPFS implementation, run the daemon, then standalone programs connect to its API. That's essentially what we're doing at least. We happen to have our own 9P daemon and API, but that's for a variety of unrelated reasons (mainly just backgrounding/persistence but there's other nice things too).
> We discussed nitty-gritty details about future UnixFS implementation (for example) rather than the part about how to fulfill the interface requirement that is implied by the "FS" in the name. And AFAIK not much got done since then.
UFS 1.5 would be nice to have for timestamps when I re-write the interface between MFS <-> Go's `fs.FS` (and the IPNS write calls). And with the amount of time that's passed, I'd expect UFS 2 to have been published by now. From what I can tell, I don't think progress was made on either outside of `js-ipfs`.
> ... Qt part ...
Interesting stuff. I'm glad to hear about the path abstractions, and the IPFS client VFS.
> And KIO's approach to asynchronous API is unconventional: separate processes are started. (Which I was thinking the other day is actually quite conventional on Plan 9, but my instinct is to think it's gross on Linux. Maybe I'm wrong, should just get used to it.)
I think it depends on the implementation. If you make good servers, it's not that big a deal, but if they're not so good, then yeah launching them and keeping them around can be inefficient. On Plan 9 it's conventional to make small simple file system servers in C, that pass messages to each other via 9P.
I think this 9P daemon would be well suited as a guest process (e.g. within a Qt host). It's capable of a variety of transports, like stdio and Unix domain sockets, which are fast for local IPC. And Go makes it easy to handle N streams of incoming events (OS signals, other APIs, etc.).
Once I finish my current refactor, I'll make that screencast talking about it; I'm banging my head against something first. I do have this private demo, but it's not really meant to be a demo. You can see it if you like, though. Also, I'm planning on re-using these libraries to quickly pump out simple file system servers as I need them, and on using this program to actually mount them. The framework I've constructed should reduce a lot of the boilerplate to go from nothing to a custom daemon, but my bias is that I wrote it, so of course I understand and like it 😆
> can you mount a CID? would it be read-only then? if you can't mount a CID, I guess you can only mount an IPNS key?
So the MFS and write portions of the IPNS implementations I had worked this way already. We'd get a CID as input and return a file system interface (this was pre-Go `fs.FS`) which could be mounted as read-write. It had callbacks on it so that, under various conditions, it would store the root MFS CID back to the IPFS node, as well as publish to IPNS if that's what you were doing. That implementation worked generally with UnixFS CIDs. I don't know what the latest version of it is, but here's an ancient reference ({/mfs, /ufs}, though it isn't worth reading since it's going to change).
> IPNS is too slow anyway: perhaps not suitable for notifying clients in real-time that the mounted filesystem has changed. Can it be fixed?

One of the ways I dealt with writes in IPNS was to publish "locally" to the IPFS daemon, and only publish the key to the network conditionally. Those conditions changed over development, and they're equally valid, so it could be configurable: e.g. publish on fsync, on unmount, after no writes in $duration, on demand, etc. I think there was work done on IPNS over pubsub that may have sped up publishing, but I haven't tested it in a LONG time now. I'm expecting that to come up next when write calls are being ported. IPNS itself, though, depends on the IPFS implementations and their networks, I guess.
> kernel mounts will hang ...
Ugh. This is one I'll have to read into more and see if we can focus on it during development. My presumption, though, is that all we can do is make tolerances configurable. Maybe some gross AIO wrappers that interface with remote I/O in a funny way. I.e. allow users to set resolve timeouts on I/O, so things like `read` error out early instead of hanging around forever. Not sure yet; I will have to track it while testing real-world stuff.
> I have in fact had some trouble with the in-kernel 9p support: I want to write Plan 9 compatible file servers, and it seems like Linux somehow doesn't get along with them.
I can't say I've had any issues that weren't my own fault, but I'm also not doing anything terribly complex yet. And I know there have been bugs in the kernel's 9P before. For what it's worth, I'm using a (slightly modified) version of https://github.com/hugelgupf/p9 which lets me make 9P2000.L servers, and I mainly use Gentoo and Arch when testing things for Linux, just because they're usually up to date. Maybe I can help you debug things later. :^)
> which is also kind of silly: redirecting via fuse,
Yeah, unfortunately I'm likely going to have to do this on Windows as well. NT has a native 9P2000.L client and server implementation (for their WSL tech), but its API is not public. So despite that, we can only mount remote 9P servers via WinFSP, not natively.
> I like the idea of filesystems having resource forks; but the Plan 9 community rejects the 9p2000.L extensions (which include extended attributes) and Linux rejects the idea of being able to open a file as either a directory or a file, as far as I can tell so far.
I know what you mean. This came up when we were prototyping and it was interesting to consider, but we didn't go far with it. Because while it works natively, we knew the feature set would have to accommodate a variety of restrictions across operating systems, at least right now. But it's something of interest to me too.
> Plan 9 has no concept of file change notification. (Another thing that 9p2000.L provides, I guess.) How do you do that then? It's ad-hoc: you need a special file which the user can open and read a stream giving some sort of info about the latest changes, somehow. With no convention for what that should look like.
Yeah it's an interesting thing and I go back and forth on it; but generally I like the idea of these ad-hoc conventions. Factotum comes to mind (in regards to them leaving things out of the standard). It's cool that auth messages are there, but no actual protocol is defined.
I was thinking recently about whether it would be a good idea to start including text data as a special path/hidden file in my own 9P file systems. Something that, when read, outputs markdown or whatever that resembles entries from section 4 of the Plan 9 manual. So, in the same way you do `man mount`, why not `cat /mounts/manual.md` or `cat /listeners/manual.md`?
It's still ad-hoc, and not a convention that'd be used outside of here, but I've considered that it might be useful.
Then, if you have to do out of band stuff or other coordination, etc. it's at least bundled with the API directory you're actively manipulating.
Even still, you can just extend like that without documenting it and make wrappers around it to kind of formalize the behaviour.
At least, that's the approach I've been going for here. Mainly I use the shell to control the daemon, but obviously we're building this `fs` binary around it, which just does the same sequences after being invoked with shorthand. Much like Plan 9's own system calls, which generate sequences of messages (in a way defined in their manuals) when invoked.
> I don't know if you care about Plan 9 or not.
The more I learn about it, the more I like it. The only thing I don't like about it is that its concepts and technologies aren't more widely used. ;^)
Nice. Lisps and Lua come up regularly when my associate and I talk about this stuff. Quick to write scripts in, and they seem to go hand in hand with 9P, but historically I think those scripts are mostly written in POSIX and rc shells rather than more portable languages. As you've probably seen, the library support for .L doesn't seem quite there for most languages. But it also shouldn't be hard to amend if we really wanted it.
> But on the other hand: it so happens that Go runs on Plan 9 (as long as you aren't running a 64-bit ARM, anyway). So in theory, we should be able to get a mountable IPFS working there. Wouldn't that be sweet?
It's by no accident that all these components were chosen together. Portability is paramount to me personally, and after evaluating the whole gamut, these provided the best balance I could find (at least for me). Go is nice, and it's easily and already ported. 9P is old and somewhat proven. And the designs we're using try hard to be independent.
The old `go-ipfs` plugin version did in fact compile on Plan 9 itself, but at the time IPFS itself did not. There's probably a more up-to-date commit, but this was the first reference I found to Plan 9 specific code. IIRC that has since changed, so everything might just work. That is, if you can get a 9P2000.L client on Plan 9. Right now our server only speaks that, but more and more it's becoming apparent that we're going to have to extend it to support Styx and .u as well.
Small status update here. tl;dr: sorry for the lack of visible progress! And the compiler is being unhelpful.
Long version:
This server implements our own API and exposes it via a 9P file system (an in-proc interface, optionally exposed as a server), and one of our file-system hosting options is also a 9P file system (e.g. IPFS -> 9P, or any Go `fs.FS` -> 9P, and even 9P -> 9P). (Realistically I expect 9P -> FUSE/other more, but if we do this, the former is implicitly implemented in this system.) So it makes sense to share code and data structures between these and other components that will integrate with this utility later.
Logically, the solutions to the problems I'm facing are simple and straightforward; however, I'm getting caught up fighting with the compiler. Some of the features that would alleviate this were in Go's type-parameter proposal, but were not implemented in the 1.18 release. Some proposals have been accepted but have not seen an implementation in subsequent releases, and some are still being discussed, so they may or may not ever become legal Go. At this time it doesn't look like I can genericize the portions of code I would like, in the way that would be nice to read and write, and thus I'll have to make readability+maintainability compromises.
Falling back to the older Go spec, I'm having to lean slightly on dynamic things like runtime reflection magic, which isn't ideal, but its use is minimal. Ideally this would be removed later as Go's spec changes, so that more things are checked at compile time. Likewise, I'm trying to avoid type assertions where I can, because the current pre-release build proves that relying on them can shoot me in the foot when types I expect to satisfy an interface don't actually implement the methods, and just fail in unexpected ways at runtime (e.g. a type that is supposed to implement a directory is treated like a regular file instead). The compiler could help prevent this, but its current feature set, mixed with my current design, is making this more difficult than I anticipated. So I have to try to come up with some middle ground. I don't think I'm going to be pleased with whatever I come up with, but I'm going to just try to make it work within reason, and we can focus on fixing it later. Even this has been challenging me a little, though, because I'm trying to work within the legal bounds of the type system even when we can technically work around them dynamically in ways that are less safe.
All that said, I admittedly have not been able to spend as much time focusing on this recently as I would have liked. Mainly due to personal problems of my own doing, but I expect those problems to not affect me anymore. And I expect to be able to give this proper attention again, instead of looking at it and going "wow this is hard, I'm giving up immediately / will do it tomorrow".
This feature was reworked heavily, and while it's still not "complete", we'll be moving away from trying to only push finished things into master, and will instead adopt the more modern practice of landing things in master and incrementally improving them over time. This should allow things to be divided up and tracked a little more easily than has been done so far.
Follow up to https://github.com/djdv/go-filesystem-utils/pull/3
We're implementing things via a "client and server" model, but it's actually more like "remote and persist". Between calls to things like `fs mount`, `fs list`, etc. we need somewhere to store and share persistent/session data, and this seems like a sensible and portable way to handle it compared to other strategies like shared memory, or whatever else.

It also easily allows us to do privilege separation. The daemon can run in the background with higher privileges, while less authoritative clients can just make requests to it. There's potential to do other fancy remote things with this, since we expect to communicate over local sockets, but that's not really the focus for now.
Implementation: We're ditching the previous HTTP RPC library that was inherited from the old repo and starting fresh with a 9P based server that is arguably simpler to use and reason about. Abstractions are intended to be file systems, with basic I/O operations not too dissimilar to how people use REST. Listening sockets are files in a directory, the mount table should be some similar abstraction, etc.
We basically just need some abstract tree where we can hold data across calls, and access/manipulate it via some remote client. Ideally, the abstractions for these should be layered well:
- File abstractions implemented independently. ("socket files are socket files")
- Construction done via composition. ("the server makes a bunch of socket files on startup and stores them in some directory")
- APIs having default expectations, but not lacking in configuration. ("`clientApi.Shutdown(sockName, opts...)` will walk to the socket file, and do the closer dance with it")
- Higher APIs just gluing it together. ("from the CLI it's `fs.exe shutdown sockName`")

Something like that.
Status: The concept seems to be viable enough to publish this draft, but the implementations still need to be completed. As-is, we can listen on a socket, mount the server's root on Linux via

`mount -t 9p 192.168.1.40 /mnt/9 -o "trans=tcp,port=564"`

and close those listeners via standard file system writes:

`printf desino>/mnt/9/listeners/192.168.1.40\:564; umount /mnt/9`

Doing this from a CLI program is basically the same procedure, abstracted in Go: `client := connect(...); open("the file"); write("shutdown key"); close("the file"); client.Close`
This pattern should follow for all of our commands, server- and client-side. The prototype we developed (*not useful, correct, or pretty) exists here: https://github.com/djdv/go-filesystem-utils/tree/j/D9 which shows a simple "MOTD" server. It can act as another example of how we'll likely implement `mount`, `unmount`, and `list`. (Except less crude, and more spec-correct.)

TODO: Lots. The commits need to be split up logically, and some things need to be backported to the commands branch. Tests / spec correctness are important. This message needs to be more coherent.