taoensso / nippy

The fastest serialization library for Clojure
https://www.taoensso.com/nippy
Eclipse Public License 1.0

core.async freeze #76

Closed: alolis closed this issue 8 years ago

alolis commented 8 years ago

Hello,

Is it possible to freeze a core.async channel with nippy?

ExceptionInfo Unfreezable type: class clojure.core.async.impl.channels.ManyToManyChannel clojure.core.async.impl.channels.ManyToManyChannel@544a38cf  clojure.core/ex-info (core.clj:4403)

I understand that is not supported out of the box, but maybe someone has a custom type to suggest?

ptaoussanis commented 8 years ago

Hi Alexander,

I understand that is not supported out of the box, but maybe someone has a custom type to suggest?

There's not really any standard approach to suggest, sorry. The problem is that channels don't convert to values (collections, say) in any obvious + consistent way.

If you have a particular channel C, what would you expect from a serialized version of it?

  1. Is it okay if we drain the channel to copy its contents (i.e. side effects)?
  2. How long will we wait/block to allow contents to arrive?
  3. What if the channel has other consumers and we don't receive everything during draining?
  4. Do we copy the buffer config to be able to reconstruct it after deserialization?
  5. Assuming all the values we want to serialize are already immediately available, aren't we just serializing a collection instead? etc.

The correct approach would be: decide, for your own case, what it is about the particular channel that you're hoping to preserve; grab that information in a case-appropriate way; then stuff the data into a standard collection. If you really want a channel on the deserialization end, create a new channel there with the properties and content you want, using your backing standard collection.
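
A minimal sketch of that drain-and-reconstruct idea might look like this (the helper names are hypothetical; it assumes you only care about values already buffered in the channel, and that nothing else is consuming it):

```clojure
(require '[clojure.core.async :as async]
         '[taoensso.nippy :as nippy])

(defn drain!
  "Takes all immediately-available values from `ch` into a vector.
  Values still in flight (or taken by other consumers) are not captured."
  [ch]
  (loop [acc []]
    (if-some [v (async/poll! ch)]
      (recur (conj acc v))
      acc)))

(defn freeze-ch
  "Freezes the channel's drained contents as a plain vector.
  Note this is side-effecting: it consumes the channel."
  [ch]
  (nippy/freeze (drain! ch)))

(defn thaw-ch
  "Thaws the vector and replays its values onto a fresh channel."
  [frozen]
  (let [vs (nippy/thaw frozen)
        ch (async/chan (max 1 (count vs)))] ; buffer sized to hold everything
    (doseq [v vs] (async/>!! ch v))         ; fits in the buffer, won't block
    ch))
```

Note how questions 1-4 above show up directly in the sketch: draining consumes the channel, `poll!` never blocks, and the new channel's buffer config is chosen here rather than copied from the original.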

Does that help / make sense?

alolis commented 8 years ago

Hi Peter,

Thanks for replying. After I posted, I started thinking along the same lines: it's a little bit hard to freeze the actual channel.

What I am trying to achieve is to keep a reference to the channel so I can close it at some point in the future; in the meanwhile, the channel works normally. I don't really want to cache the data that actually goes through the channel, just the channel reference. Up until now I simply had a hash of all the channels (each channel represents a client connected to my service, so I kept a client-id : channel pair for each client), but the hash was becoming too big, so I thought to move this to Redis instead.

My apologies for not describing the situation better in my first post.

ptaoussanis commented 8 years ago

Gotcha.

Sounds like what you really want is just to store an identifier for the channel. So maybe something like an (atom {<identifier> <channel>}), then you can freeze/thaw the identifiers (strings or keywords, etc.).
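
For illustration, a minimal sketch of that registry pattern (the names `channels`, `register!`, and `close-client!` are hypothetical, as is the buffer size):

```clojure
(require '[clojure.core.async :as async]
         '[taoensso.nippy :as nippy])

;; In-memory registry: identifiers -> live channels.
(defonce channels (atom {}))

(defn register!
  "Creates a channel for `client-id` and records it in the registry."
  [client-id]
  (let [ch (async/chan 16)] ; buffer size is an arbitrary assumption
    (swap! channels assoc client-id ch)
    ch))

(defn close-client!
  "Closes and forgets the channel for `client-id`, if any."
  [client-id]
  (when-some [ch (get @channels client-id)]
    (async/close! ch)
    (swap! channels dissoc client-id)))

;; Identifiers are plain values, so they freeze/thaw fine;
;; the channels themselves never leave memory.
(comment
  (nippy/thaw (nippy/freeze (vec (keys @channels)))))
```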

Does that help?

alolis commented 8 years ago

I will try it out and get back to you but it sounds promising.

alolis commented 8 years ago

Did you suggest freezing the atom earlier, or did I misunderstand? Because if I try to freeze the atom I get

ExceptionInfo Unfreezable type: class clojure.lang.Atom clojure.lang.Atom@19872a30  clojure.core/ex-info (core.clj:4403)

ptaoussanis commented 8 years ago

No, no - not suggesting you freeze the atom. I was suggesting you keep an atom of identifiers to channels (in memory), then serialize the identifiers if there's some reason to do that.

Just saw this now:

but the hash was becoming too big so i thought to move this to redis instead.

What do you mean by "too big" in this context? Keeping a couple million channel+identifier pairs shouldn't be a problem in most cases, I'm guessing?

Sorry, still think I'm not understanding the motivation to serialize / go to Redis for this.

Concretely: let's say you have 1M channels for 1M clients, yes? Why do you want to serialize a reference to the channel? Even if you could serialize the channel entirely, what would be the motivation for that instead of just keeping it in an atom?

Do you need to transfer the channels to another server? Survive a server reboot?

Sorry if I'm missing something obvious.

alolis commented 8 years ago

Hi Peter. You are right, a hash with 1M entries shouldn't be a problem. It turns out I had a bug somewhere else that made me think the hash was causing the issue, so I am trying the hash solution again with the bug fixed.

My original motivation was that I thought the hash wouldn't perform well with a lot of entries, so I thought I'd try moving this tracking to Redis.

You didn't miss anything :) The only one missing something was me (and this bug). Oh well :) Our conversation was definitely helpful.

ptaoussanis commented 8 years ago

No problem, happy if any comments here were helpful :-) Closing for now, but please feel free to reopen if you have any further questions.

Cheers :-)