whitfin / cachex

A powerful caching library for Elixir with support for transactions, fallbacks and expirations
https://hexdocs.pm/cachex/
MIT License
1.6k stars · 104 forks

Add additional operations such as List/Set operations #80

Closed · tlvenn closed this 8 years ago

tlvenn commented 8 years ago

Not that that is a bad thing by itself, but I'm wondering whether there are any plans to support List and Set operations in the future?

I believe it would open up many more use cases for Cachex, which would arguably go beyond just caching.

whitfin commented 8 years ago

@tlvenn do you mean similar to the commands Redis offers?

The issue here is that we back a cache with an ETS table, which only supports get/set by default. I can add things like list operations, but it would have to be known that they're going to be a transactional get/set under the hood. Unless we move away from ETS (which might be viable now that distribution is gone), there's no better way to implement that.

Are you interested in the features because of the sugar (i.e. convenience)? Or do you want O(1) perf against them, etc?
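To make the constraint concrete: ETS gives you atomic insert/lookup on whole values, but mutating a list stored under a key is inherently a read-modify-write, which is why Cachex would have to wrap it in a transaction to keep it atomic. A minimal plain-ETS sketch (table and key names here are made up for illustration):

```elixir
# ETS natively supports whole-value insert and lookup. Mutating a stored
# list means: read it out, change it, write it back. Between the lookup
# and the insert another process could interleave, hence the need for a
# transaction in Cachex.
table = :ets.new(:demo, [:set, :public])

:ets.insert(table, {"my_list", [1, 2, 3]})

# Non-atomic read-modify-write: prepend an element.
[{_key, list}] = :ets.lookup(table, "my_list")
:ets.insert(table, {"my_list", [0 | list]})

[{_key, updated}] = :ets.lookup(table, "my_list")
IO.inspect(updated)   # => [0, 1, 2, 3]
```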

tlvenn commented 8 years ago

It doesn't have to match exactly what Redis offers, but you get the idea. ETS tables are indeed problematic, and while the convenience alone would already be very nice, I have a feeling people will just assume it performs at the same complexity level as Redis, and that won't be pretty ;)

I guess I am just wondering what the roadmap is, and whether such a thing is outside the scope of Cachex.

whitfin commented 8 years ago

@tlvenn so the scope is just whatever is requested, as long as it doesn't damage what's already there ;) A Set op would still be pretty fast, probably ~10µs per operation (for inserts), so it's not horrible.

tlvenn commented 8 years ago

Good to hear @zackehh — then I honestly think it would be nice to have basic Set and List semantics.

whitfin commented 8 years ago

@tlvenn just so you're aware, you can use get_and_update/4 in the meantime to have those types of semantics. You'd have to handwrite the modifications made but you could easily do so, and it would still be sufficiently fast.
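A sketch of the kind of update function one could hand to get_and_update/4 to get "lpush"-style semantics. The cache call itself is shown only as a comment (it needs a running cache, and the exact argument/return shape should be checked against the Cachex docs for your version); the update function is plain Elixir:

```elixir
# Hypothetical "lpush" update function for Cachex.get_and_update/4.
# It receives the current value for the key (nil if missing) and
# returns the new value to store.
lpush = fn element ->
  fn
    nil -> [element]                          # missing key: start a new list
    list when is_list(list) -> [element | list]
  end
end

# Against a live cache this might look like (not run here):
#   Cachex.get_and_update(:my_cache, "my_list", lpush.(42))

IO.inspect(lpush.(1).(nil))      # => [1]
IO.inspect(lpush.(0).([1, 2]))   # => [0, 1, 2]
```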

tlvenn commented 8 years ago

Yeah, I was thinking it would be best to keep data manipulation as close to the data source itself as possible, from a process perspective.

whitfin commented 8 years ago

@tlvenn I'm thinking about creating this type of interface internally:

# Pop the head of the list, returning { head, rest }.
# Note: as written, this clause will raise on an empty list.
lpop = fn([ head | tail ]) ->
  { head, tail }
end

# Pop the last element, returning { last, rest }; an empty list yields nil.
rpop = fn
  (list) when length(list) > 0 ->
    { List.last(list), :lists.droplast(list) }
  (list) ->
    { nil, list }
end

Cachex.start_link(:test, [
  commands: [
    last: { :return, &List.last/1 },  # read-only command
    lpop: { :modify, lpop },          # :modify commands write the new value back
    rpop: { :modify, rpop }
  ]
])

{ :ok, true } = Cachex.set(:test, "my_list", [ 1, 2, 3, 4, 5 ])

5 = Cachex.invoke(:test, :last, "my_list")
1 = Cachex.invoke(:test, :lpop, "my_list")
2 = Cachex.invoke(:test, :lpop, "my_list")
5 = Cachex.invoke(:test, :rpop, "my_list")
4 = Cachex.invoke(:test, :rpop, "my_list")
3 = Cachex.invoke(:test, :last, "my_list")

[3] = Cachex.get!(:test, "my_list")

That way you can define custom functions on the cache (from outside). Internally we can just use this to surface some of the more common functions, perhaps those which can be optimized. Any thoughts?

Trying to make this extensible, rather than Cachex having to implement lots of different use cases.
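The pop functions in the sketch above are plain Elixir, so their behaviour can be checked without a cache at all:

```elixir
# The same lpop/rpop shapes from the proposal, exercised directly on lists.
lpop = fn [head | tail] -> {head, tail} end

rpop = fn
  list when length(list) > 0 -> {List.last(list), :lists.droplast(list)}
  list -> {nil, list}
end

{1, rest} = lpop.([1, 2, 3])   # => {1, [2, 3]}
{3, rest} = rpop.(rest)        # => {3, [2]}
{nil, []} = rpop.([])          # empty list: nothing to pop
IO.inspect(rest)               # => [2]
```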

alexgleason commented 2 years ago

Hi, I'm thinking about using Cachex to pre-generate a feed of posts (on a per-user basis) for a Twitter-like application.

The thing I'm concerned with is race conditions. If I call get_and_update/4 10 times per second on the same key (to add elements to the list), do you think it would be a problem?

whitfin commented 10 months ago

@alexgleason this got lost in my mailbox, so I'm waaaay too late to notice this, but for anyone coming back to this thread: no, calling it multiple times on the same key will run them sequentially and avoid races.
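For anyone who wants to convince themselves of that serialization property, Cachex isn't even needed to see the general idea: any single Elixir process serializes the messages it handles. An Agent (a stand-in here, not Cachex's actual mechanism) shows that concurrent read-modify-write calls funneled through one process never lose writes, which is the same guarantee the maintainer describes for repeated get_and_update/4 calls on one key:

```elixir
# 100 concurrent read-modify-write updates, all funneled through a single
# Agent process. Because the process handles them one at a time, no update
# clobbers another and all 100 elements survive.
{:ok, agent} = Agent.start_link(fn -> [] end)

1..100
|> Enum.map(fn i -> Task.async(fn -> Agent.update(agent, &[i | &1]) end) end)
|> Enum.each(&Task.await/1)

agent |> Agent.get(&length/1) |> IO.inspect()   # => 100
```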