dominictarr opened this issue 12 years ago
That'd be handy for a dead simple event store, too.
Another thing I want is a reliable stream; it's basically a stream that is transparent to disconnections. It would be useful for logging. It would have to handshake at the start to figure out where it left off, then resend any data that may have been dropped when the wifi disconnected or you went through a tunnel.
You'd probably want this to fail softly too; you don't want to cause an out-of-memory error on your AR drone because you are buffering too much logging while there is no wifi in the Paris sewers.
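For what it's worth, here's a minimal sketch of that shape, not an existing module: it assumes a made-up handshake where the server first sends back the last sequence number it received, and everything here (createReliableLogger, the ack format, the 1-second retry) is invented for illustration.

var net = require('net');

// hypothetical reliable logger: just the shape of the idea
function createReliableLogger(port, host, maxBuffer) {
  var buffer = [];   // lines we might still need to resend
  var seq = 0;
  var socket = null;

  function connect() {
    var s = net.connect(port, host);
    s.once('data', function (chunk) {
      // handshake: the server says which seq it actually got,
      // so we replay anything newer that was dropped in the tunnel
      var acked = parseInt(chunk.toString(), 10) || 0;
      buffer.forEach(function (entry) {
        if (entry.seq > acked) s.write(entry.line);
      });
      socket = s;
    });
    s.on('error', function () { /* 'close' handles reconnecting */ });
    s.on('close', function () {
      socket = null;
      setTimeout(connect, 1000);
    });
  }

  connect();

  return {
    log: function (line) {
      var entry = { seq: ++seq, line: seq + ' ' + line + '\n' };
      buffer.push(entry);
      if (buffer.length > maxBuffer) buffer.shift(); // fail softly: drop oldest
      if (socket && socket.writable) socket.write(entry.line);
    }
  };
}

The buffer.shift() is the "fail softly" part: when there's no wifi, old log lines get dropped instead of eating all the drone's memory.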
@dominictarr I'm not sure if I get your explanation. Why buffer all writes? What do you mean by overwriting? Are you talking about multiple processes using the same file?
I want the second thing too; it would make net.connect and others so much easier.
@juliangruber if you haven't read the whole file yet, then writing to it would break the read stream. Maybe if the writes were appends it might be okay.
I need to think this through more. Currently I'm thinking this is only for a single writer; multiple writers is a different problem.
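To make the overwrite vs. append point concrete, a toy example (chat.log is a made-up file name, assumed to already exist): opening the same file with flags 'w' truncates it out from under the reader, which would most likely just hit EOF early, while 'a' only adds bytes past what the reader hasn't reached yet.

var fs = require('fs');

var reader = fs.createReadStream('chat.log');
reader.on('data', function (chunk) {
  console.log('read %d bytes', chunk.length);
});

// flags: 'w' would truncate chat.log while the reader is mid-file:
// var clobber = fs.createWriteStream('chat.log', { flags: 'w' });

// appending leaves everything the reader hasn't seen yet intact
var appender = fs.createWriteStream('chat.log', { flags: 'a' });
appender.write('another line\n');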
Another thing I want is a reliable stream; it's basically a stream that is transparent to disconnections.
I tried to create that using boot. It's a massive pain in the ass. Good luck!
I'm not convinced that a duplex fs stream is a great idea, but if you were to do this, here's what I'd probably start with:
var fs = require('fs');
var util = require('util');
var Duplex = require('stream').Duplex;

util.inherits(FsDuplex, Duplex);

function FsDuplex(path, options) {
  // use the options, but make sure to ignore fd and flags.
  // we're going to open the file twice.
  var o = util._extend({}, options);
  o.fd = null;
  this._reader = fs.createReadStream(path, o);

  // writer is append-only.
  o = util._extend({}, options);
  o.fd = null;
  o.flags = 'a';
  this._writer = fs.createWriteStream(path, o);

  Duplex.call(this, options);
}

// delegate reads and writes straight to the underlying streams
FsDuplex.prototype.write = function() {
  return this._writer.write.apply(this._writer, arguments);
};

FsDuplex.prototype.read = function() {
  return this._reader.read.apply(this._reader, arguments);
};
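Usage of that sketch would look roughly like this (chat.log is a made-up file name). Note the sketch above doesn't re-emit the reader's events on the duplex itself, so this listens on the underlying _reader, which a real implementation would hide:

var file = new FsDuplex('chat.log', {});

// everything already in the file comes out of the read side
file._reader.on('data', function (chunk) {
  process.stdout.write(chunk);
});

// once the existing contents have been replayed, keep appending
file._reader.on('end', function () {
  file.write('a new message\n');
});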
Of course, it'd have to be smart enough to notice when the readable end closes, and if the writable end isn't closed yet, switch into passthrough mode.
I'm not convinced it's a good idea either, but sometimes ideas niggle at me until I try them.
I wouldn't go to passthrough mode; the readable end would end and emit 'end', but you'd still be able to write. Due to the recent fix to Stream#pipe, such behavior should now be viable.
I'm considering a strange idea: duplex fs streams.
Basically, you'd get a file stream that was both an fs.ReadStream and an fs.WriteStream. It would emit all the data, but buffer any writes until the read side has ended.
The write could overwrite the read stream, or it could append to it. -- instant chatroom persistence.
I'm also thinking this would be useful for things like persisting scuttlebutt instances (basically the same pattern).
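A rough sketch of the chatroom idea with today's fs streams, just to show the shape (chat.log and port 7000 are made up, and chat.log is assumed to exist): every client gets the history replayed, and everything it sends is appended. It doesn't buffer writes until the read has ended, and clients don't see messages appended after they connect, but it's the pattern that a duplex fs stream would collapse into something like socket.pipe(file).pipe(socket).

var fs = require('fs');
var net = require('net');

var server = net.createServer(function (socket) {
  // read side: replay the whole history to the new client
  fs.createReadStream('chat.log').pipe(socket, { end: false });

  // write side: append whatever this client sends
  socket.pipe(fs.createWriteStream('chat.log', { flags: 'a' }));
});

server.listen(7000);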