Closed loveencounterflow closed 7 years ago
Why not respond with all the data using an acknowledgment callback? That would make it dead easy to link the request with the response.
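For reference, the acknowledgment pattern looks roughly like this (a sketch only: the `dump` event name comes from this thread, but `fetchRecords` and the wiring comments are made-up stand-ins, not a real API):

```javascript
// Hypothetical stand-in for the LevelDB read; returns `limit` dummy records.
function fetchRecords(limit) {
  return Array.from({ length: limit }, (_, i) => ({ id: i }));
}

// Server-side handler: socket.io passes the client's callback as the last argument.
function handleDump(params, ack) {
  ack(fetchRecords(params.limit)); // this reply is tied to exactly one emit
}

// Wiring, server: io.on('connection', (socket) => socket.on('dump', handleDump));
// Wiring, client: socket.emit('dump', { limit: 10 }, (records) => { /* matched reply */ });
```

Because the callback is created per `emit`, there is no ambiguity about which request a reply belongs to, at the cost of buffering the whole result into one message.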
Are you sure that using socket.io is the best mechanism for this? Since you're building a request/response mechanism, wouldn't REST be more appropriate?
One major difference between channels and rooms is that the client doesn't necessarily know about rooms; rooms are something you can use on the server side to group clients. The server can put clients in one or multiple rooms without the client knowing. Clients, on the other hand, control which channels/namespaces they connect to; there is even an auto-reconnect mechanism for when the connection breaks. The server can't put clients in other namespaces as far as I know. Also, rooms exist within namespaces.
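To illustrate the distinction, a sketch (the `job:` room naming and event names are invented; `join`, `to`, and `of` are socket.io's documented calls):

```javascript
// Rooms: a purely server-side grouping; the client never opts in.
function notifyJobRoom(io, socket, jobId) {
  socket.join('job:' + jobId);                       // the server decides membership
  io.to('job:' + jobId).emit('progress', { jobId }); // reaches that room only
}

// Namespaces: the client explicitly connects to one.
// server: const nsp = io.of('/db'); nsp.on('connection', handler);
// client: const socket = require('socket.io-client')('http://host/db');
```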
Maybe socket.io-stream would be interesting, once someone implements the following suggestion: https://github.com/nkzawa/socket.io-stream/issues/31
@peteruithoven socket streams definitely do look promising, i'll give them a try.
"Why not respond with all the data using an acknowledgment callback?"—because that feels so wrong to me; i'm using a stream to read data from a LevelDB instance which i then would have to buffer until it has ended, and then send a single, potentially HUGE message (my DB is full of eels) to the client—even if that huge message is skillfully streamed again by the underlying transport mechanism, it still feels wrong.
"Are you sure that using socket.io is the best mechanism for this? Since you're building a request/response mechanism, wouldn't REST be more appropriate?"—this is experimental. i know i could just as well use HTTP get / post with chunked transfers and so on. but it certainly sounds like websockets are the better option, as they are bidirectional, hence more flexible, and at least i expect them to be more suited to the many-small-things-one-at-a-time approach that i've really come to like with NodeJS streams and pipes (and writeups like http://www.websocket.org/quantum.html seem to corroborate that feeling).
"i'm using a stream to read data from a LevelDB instance"

Then socket.io-stream is definitely interesting.
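A sketch of how that could look, assuming socket.io-stream's documented surface (`ss.createStream()`, `ss(socket).emit`, `ss(socket).on`); `db` stands in for a LevelDB instance, and the functions take their dependencies as parameters so the sketch stays self-contained:

```javascript
// Server: pipe the LevelDB read stream straight into the reply stream,
// so the (potentially huge) result never has to be buffered in memory.
function serveDumpStream(ss, socket, db) {
  ss(socket).on('dump', (stream, params) => {
    db.createReadStream({ limit: params.limit }).pipe(stream);
  });
}

// Client: create a stream, send it along with the request, consume records.
function requestDumpStream(ss, socket, limit, onData) {
  const stream = ss.createStream();
  ss(socket).emit('dump', stream, { limit });
  stream.on('data', onData);
  return stream;
}
```

Because the same stream object travels with the `dump` event, each response is inherently bound to the request that created it.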
That issue was closed automatically. Please check if your issue is fixed with the latest release, and reopen if needed (with a fiddle reproducing the issue if possible).
sorry if the question doesn't sound very clear; what i'm doing is basically using Socket.IO to implement inter-process communication between a NodeJS client and a NodeJS DB server running in another process / on another machine.
my problem is this: the client wants to get some data from the DB process, so it sends an event, say,

`client.emit('dump', {limit: 10})`

and it expects to receive up to 10 events with one record each using `client.on('dump', ...)` (i decided to share the event name between server and client, so `dump` really means *give me a dump* when sent from the client and *here is your dump* when sent from the server). that's easy and works as expected.

the catch is that since everything is asynchronous with a network in between, you never know when messages will arrive; specifically, if the client should emit another `dump` event before the responding events from the last `dump` emission have come in, it can never be sure whether a given `dump` event it receives from the server was in response to the first or the second request.

i can think of several ways to organize this: using namespaces or channels/rooms (any difference between the two?), sending a request or 'session' ID to the server which will be sent back, using a callback on the `client.emit(name, data, fn)` call that receives a unique request ID from the server, sending out events like `dump#62738495` and using catch-all listeners on both sides to sort things out, stuff like that.

i was wondering whether there's some kind of common best practice for how to accomplish this. i couldn't find anything in the API or the underlying objects that gives me a very straightforward solution.