Closed megawac closed 9 years ago
I'd be willing to help out with this in August if you're interested
I think it's a really good idea, and offline work definitely should be implemented. But MakeDrive uses redis, which I think is a really big dependency for a file manager. Anyway, Cloud Commander could be used as middleware in an application that works on top of something like MakeDrive. The API could be extended in the direction of portability. But there are a lot of things to think about. Feel free to share ideas.
Yeah, that was my original issue with makedrive, and why it didn't work for my use case - I wanted a proxy to the server's file system which could be worked on offline. In my case it was mostly to edit configuration files for a robot.
CRUD file operations are processed with the help of restafary, which is just Express middleware. The Editor also uses it, so we could create a layer that would be compatible with something like this, but would use MakeDrive for handling file operations.
opium could be used to synchronize changes made on the client side with the server side after some offline work with files. While the client is offline, every file system operation is recorded, and the server then replays all these changes itself.
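The record-and-replay idea above could be sketched roughly like this. Everything here is illustrative: `Journal`, `record`, `replay`, and the in-memory `serverFs` are hypothetical names, not part of any real Cloud Commander, opium, or MakeDrive API.

```javascript
// Sketch: while offline, append every mutating fs call to a journal;
// on reconnect, replay the journal against the server's file system.
class Journal {
  constructor() {
    this.ops = [];
  }
  // Record a mutating operation performed while offline.
  record(op, path, payload) {
    this.ops.push({ op, path, payload, at: Date.now() });
  }
  // Replay the recorded operations, in order, against a target fs-like object.
  replay(targetFs) {
    for (const { op, path, payload } of this.ops) {
      switch (op) {
        case 'writeFile': targetFs.writeFile(path, payload); break;
        case 'unlink':    targetFs.unlink(path);            break;
        case 'mkdir':     targetFs.mkdir(path);             break;
      }
    }
    this.ops = [];
  }
}

// Minimal in-memory stand-in for the server-side file system.
const serverFs = {
  files: new Map(),
  writeFile(p, data) { this.files.set(p, data); },
  unlink(p)          { this.files.delete(p); },
  mkdir(p)           { this.files.set(p, null); },
};

const journal = new Journal();
journal.record('mkdir', '/etc');
journal.record('writeFile', '/etc/robot.conf', 'speed=5');
journal.replay(serverFs);

console.log(serverFs.files.get('/etc/robot.conf')); // prints "speed=5"
```

A real implementation would also have to deal with ordering across renames and with operations that fail on replay, which is exactly where the conflict question below comes in.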
There is one thing I can't figure out: how to determine that the file system hasn't changed while the client was offline. Because if it has, some diff of the file system state should be made before and after the changes.
Any thoughts?
I looked into this quite briefly last September, and I investigated the following schemes. There are a couple of conventional strategies for managing distributed file systems (plenty of papers on others).
This strategy can be employed on both the client and the server; however, conflict management can be tricky to get right when a file changes on both devices [2].
MakeDrive currently uses a strategy similar to LBFS, as far as I can tell, with this code for conflict management. The code seems to be shared between client and server: https://github.com/mozilla/makedrive/tree/thimble/lib
[0]: Muthitacharoen, A., Chen, B., & Mazieres, D. (2001, October). A low-bandwidth network file system. In ACM SIGOPS Operating Systems Review (Vol. 35, No. 5, pp. 174-187). ACM.
[1]: Braam, P. J., Callahan, M., & Schwan, P. (1999, August). The InterMezzo file system. In Proceedings of the 3rd of the Perl Conference, O'Reilly Open Source Convention.
[2]: Xianqiang, B., Nong, X., Weisong, S., Fang, L., Huajian, M., & Hang, Z. (2011, August). SyncViews: Toward consistent user views in cloud-based file synchronization services. In ChinaGrid Conference (ChinaGrid), 2011 Sixth Annual (pp. 89-96). IEEE.
Thank you, I'll read about it.
I just thought that the pack/extract functions wouldn't work in Offline Mode, because filer doesn't support streams, and jaguar, which is built on top of tar-fs, uses them a lot. Maybe in the future, when browsers support streams, filer will handle such cases.
I see, maybe @humphd or @modeswitch would be willing to shed some light.
Filer doesn't support multi-block files yet, which is a prerequisite for implementing streams. Note that it's not necessary to wait for native stream support: files can be stored over multiple underlying database objects. The size of each object would be the lower limit on the size of streamed chunks. This is the approach I have planned to take for Filer.
As a workaround until that lands in Filer, you could read the entire file and then 'stream' it from a memory buffer. This doesn't give you any of the performance of actual streaming, but it would allow you to use libraries that operate using a stream API.
@modeswitch thank you for the detailed answer. The main thing is keeping the code of such libraries unchanged. If I understand you correctly, you suggest running some code when the fs.createReadStream function is called. Could you give me advice on what I should put inside the functions that create streams? It looks like streams should be reimplemented with functions like fs.readFile to get things to work, right?
For createReadStream, you can read the entire file into memory and service stream API calls from the memory buffer. You will also need to watch the file, and if it changes you'll have to refresh your memory buffer. There are some edge cases here, but that's the general idea.
Keep in mind that this is a messy workaround, and won't be good on performance or memory usage.
A note on MakeDrive vs. Filer. We are still actively using/maintaining Filer, since we use it in our Brackets-in-the-Browser fork, Bramble. Filer doesn't have a ton of updates these days, mainly because it's very stable and well tested. MakeDrive, on the other hand, we aren't doing anything with at the moment, since the project that was going to use it went in another direction.
@humphd thank you for your response. Glad to hear that filer is actively maintained.
It's a really cool project. The best part is node.js compatibility: a lot of already-written code could just be run in a browser.
@megawac, do you have any ideas about where to start? What APIs do you need from Cloud Commander to make offline work possible?
Closed due to a long period of inactivity.
I'm sure you've seen Mozilla's makedrive. It would be incredibly awesome if cloudcmd could either support makedrive or some offline editing functionality. In MakeDrive's case they store the contents of the files in an IndexedDB, and whenever connection is lost and regained they perform an rsync. This enables users to either start with no connection (ServiceWorker) or lose connection to the server, reconnect, and have their changes propagate.

You may also be interested in investigating Bramble, which is Adobe's Brackets on top of Filer (it used to be on top of makedrive before https://github.com/humphd/brackets/commit/1b463dc52b40f5d76ae813cd9fcfcae93fbd3c46).

Other resources: https://etherpad.mozilla.org/thimble-plublish-plan
Another offline browserFS implementation (mirroring node-fs): https://github.com/jvilk/BrowserFS