jbpros / plutonium-old

MIT License

Dnode is not working with (large) binary arguments #28

Closed thibaultponcelet closed 11 years ago

thibaultponcelet commented 11 years ago

In order to send binary arguments to the command bus through dnode, we need to serialize them with BSON; otherwise the binary data gets corrupted.

When doing this, however, the transfer time is very high.

Maybe dnode is not suited for this?

jbpros commented 11 years ago

DNode is the quickest solution I could find back when I introduced the remote command bus. I knew there would be issues such as the current one.

Quick question: have you tried encoding the binary buffers with base64 instead?

A more robust solution is needed though. Here are the current candidates I have in mind:

Feel free to suggest other stuff.

WDYT?

thibaultponcelet commented 11 years ago

We haven't tried base64. It should work, but it would also lead to slow transfers because of the way dnode is implemented.

Redis would be a nice solution; we could extract some logic from the current event bus, which already supports large binary arguments.

jbpros commented 11 years ago

Yep, Redis should be fine.

An HTTP interface is actually a nice alternative too. It'd be a simple HTTP API accepting POST requests. Its obvious advantages are simplicity, universality (any process could easily send commands) and reduced setup hassle compared to Redis.

POST /commands { name: "make passenger driver", data: { ... } }

Regarding binary payloads, multipart/form-data media types should be just fine.

djeusette commented 11 years ago

Based on our experience with CouchDB, which relies on the HTTP protocol, it's likely to be slow, isn't it? Of course it's easier to implement, but all commands from our API (a separate process) will go through the implemented solution, so a fast, reliable solution will be needed in the long run.

jbpros commented 11 years ago

Comparing this to CouchDB's interface is a bit like comparing apples and oranges, imo. CouchDB was pretty slow at responding to requests partly because of its JSON serialisation.

multipart/form-data does not involve such greedy serialisation, and the response payload would be empty. Also, the request frequency would be much lower than what we measured during domain repository instantiation peaks.

I have no evidence this would be efficient enough; the only way to know for sure is to build and measure it :)

thibaultponcelet commented 11 years ago

If you think it would be easy to attach multiple binary attributes, go for it :p

jbpros commented 11 years ago

Can you define large in Djump's context? :)

djeusette commented 11 years ago

We typically upload pictures from 2 to 4 MB, possibly more.

But that's the same problem as the one you may encounter with Albums.

jbpros commented 11 years ago

Right. Albums is currently based on the in-memory command bus; things are fast for now, but I'll hit the same issue as you guys soon.

How urgent is this for you?

djeusette commented 11 years ago

This will become critical when we launch the public beta test, meaning early April. :)

jbpros commented 11 years ago

Ok. I'll be working on this on Thursday or Friday if that's fine with you guys.

thibaultponcelet commented 11 years ago

Fine for us, do you think you will have some time for this today?

jbpros commented 11 years ago

Yep, this afternoon. I'll ping you.


thibaultponcelet commented 11 years ago

This issue was resolved some time ago; the fix was implemented in #39.