mhgbrown / cached_resource

Caching for ActiveResource
MIT License

user made to wait for caching to process #26

Closed Daniel-ltw closed 3 months ago

Daniel-ltw commented 8 years ago

Currently, the user is made to wait for the caching mechanism to process its write and read before anything is returned.

cache_collection_synchronize(object, *arguments) if cached_resource.collection_synchronize
cache_write(key, object)
cache_read(key)

Could we wrap that in a new thread, so that the thread processes the caching and the retrieved object is returned as soon as the thread has been spawned?

This way, users would not need to wait for those three lines of code to be processed, which would reduce the response time by a great amount. This is especially true if collection_synchronize is enabled.
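
For illustration, a minimal sketch of that idea (hypothetical, not the gem's actual implementation; the surrounding method name and the fetch helper are placeholders, while the cache calls are the ones quoted above):

def find_with_background_caching(*arguments)
  key = cache_key(arguments)        # assumed helper that builds the cache key
  object = fetch_remote(*arguments) # placeholder for the actual remote request

  # Do the cache bookkeeping off the calling thread...
  Thread.new do
    cache_collection_synchronize(object, *arguments) if cached_resource.collection_synchronize
    cache_write(key, object)
  end

  # ...and hand the object back without waiting for the write to finish.
  object
end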

Daniel-ltw commented 8 years ago

Anyone else would like to contribute to this topic?

From what I have seen, the threading approach should work.

A new thread is spawned and used to write objects to the cache. One problem I can see is that dog piling would create multiple threads, each trying to do its own write, but :race_condition_ttl should handle that nicely.

I would like further input on this, and to hear whether I am actually missing anything here.
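
For context, :race_condition_ttl is the standard ActiveSupport cache option that softens dog piling: while one caller recomputes an expired entry, other callers briefly keep serving the stale value instead of piling on. An illustrative example (not this gem's internals):

require 'active_support/cache'

cache = ActiveSupport::Cache::MemoryStore.new

# While one caller regenerates the expired entry, other callers keep reading
# the slightly stale value for up to 10 extra seconds instead of piling on.
value = cache.fetch('popular_key', expires_in: 60, race_condition_ttl: 10) do
  'expensive value' # stands in for the slow remote call
end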

mhgbrown commented 8 years ago

Yeah, we should be careful with dog piling. The only thing I'm thinking about is how we create the threads. I haven't been doing a lot of Ruby recently, but is there some sort of thread pooling mechanism?

Daniel-ltw commented 8 years ago

The pull request above shows the code changes.

Currently, I am just concerned about what more needs to be done and what else could be done.

josh-h commented 6 years ago

Your join call on the thread forces the thread to complete before the find method can return, which means find still waits for the write to finish before returning.

Another approach, if you are okay adding a dependency on 'concurrent-ruby', would be to call cache_write asynchronously with 'async'. What I like about that approach is that each ActiveResource type gets its own thread for writing to the cache, which queues writes during dog piling as you were concerned about earlier, but still lets find return before the cache is written to disk.
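
A rough sketch of what that could look like, assuming concurrent-ruby's Concurrent::Async module (the class and method names here are hypothetical, not part of the gem):

require 'concurrent'
require 'active_support/cache'

# Including Concurrent::Async gives each instance its own serialized call
# queue, so a single writer per ActiveResource class queues writes in order
# while callers return immediately via `.async`.
class CacheWriter
  include Concurrent::Async

  def initialize(cache)
    super() # required by Concurrent::Async when defining a constructor
    @cache = cache
  end

  def write(key, value)
    @cache.write(key, value)
  end
end

writer = CacheWriter.new(ActiveSupport::Cache::MemoryStore.new)
writer.async.write('people/1', 'payload') # non-blocking; find can return right away
writer.await.write('people/2', 'payload') # blocking variant, handy in tests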

Daniel-ltw commented 6 years ago

@josh-h I like that idea, but I do not have the capacity to work on this anymore. I would be happy to see a pull request from you and would get that merged in.

I am not sure if we could actually get some tests written for this as well.

jlurena commented 7 months ago

> Another approach, if you are okay adding a dependency on 'concurrent-ruby', would be to call cache_write asynchronously with 'async'. What I like about that approach is that each ActiveResource type gets its own thread for writing to the cache, which queues writes during dog piling as you were concerned about earlier, but still lets find return before the cache is written to disk.

@mhgbrown this is a good idea. Any thoughts on continuing? Granted, this issue is almost 9 years old. We don't even have to add the dependency; we could simply add it as a configurable option and have consumers of this lib require the dependency themselves.

begin
  require 'concurrent' if cached_resource.concurrent_write
rescue LoadError
  cached_resource.logger.error('`concurrent_write` option is enabled, but `concurrent-ruby` is not an installed dependency')
  raise
end

We can also instrument an event when writes are finished, in case a consumer wants to know when a write completes, and also for testing.
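
For example, something along these lines with ActiveSupport::Notifications (the event name 'write.cached_resource' is made up for this sketch):

require 'active_support/notifications'

# A consumer or a spec can listen for completed writes:
ActiveSupport::Notifications.subscribe('write.cached_resource') do |_name, start, finish, _id, payload|
  puts "cached #{payload[:key]} in #{(finish - start).round(3)}s"
end

# Inside the (hypothetical) writer, the actual cache write gets wrapped so the
# event only fires once the write has really finished:
ActiveSupport::Notifications.instrument('write.cached_resource', key: 'people/1') do
  sleep 0.01 # stands in for the real cache_write call
end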

This will be a bit more complex than the above pull request, because in order to use concurrent-ruby the caching methods need to be instance methods. I think the following needs to take place:

  1. Create a shared module for caching logic
    • cache_key
    • name_key
    • etc.
  2. Create a Writer class
    • Logic for writing to cache will live in here
  3. Create a Reader class (for consistency)
    • Logic for reading from cache will live here

Both of these classes can be singletons, since they each do only one thing.
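
A rough outline of that structure might look like this (all names are hypothetical):

require 'singleton'

# Shared caching helpers such as key construction (hypothetical module name).
module CacheCommon
  def cache_key(name, arguments)
    "cached_resource/#{name.downcase}/#{arguments.join('/')}"
  end
end

# Owns all cache writes; a singleton because only one is ever needed.
class CacheWriter
  include Singleton
  include CacheCommon

  def write(cache, name, arguments, object)
    cache.write(cache_key(name, arguments), object)
  end
end

# Owns all cache reads, kept separate from the writer for consistency.
class CacheReader
  include Singleton
  include CacheCommon

  def read(cache, name, arguments)
    cache.read(cache_key(name, arguments))
  end
end

# Usage sketch: CacheWriter.instance.write(store, 'Person', [1], person)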

mhgbrown commented 7 months ago

I am fully down and like what you proposed here. Do you want to give it a shot?

jlurena commented 5 months ago

After ~6 years~ 9 years!! ... Lol here we go: https://github.com/mhgbrown/cached_resource/pull/61 @mhgbrown

I do need help with specs. @josh-h @Daniel-ltw if you guys could review or give suggestions on how to best write specs I'd appreciate it.

mhgbrown commented 5 months ago

Awesome, nice work @jlurena. I'm low on time for the next week and a bit, will give it a look when I can. re: specs, I'll also have to do some digging.

jlurena commented 5 months ago

@mhgbrown would it be possible for you to make me a maintainer?

mhgbrown commented 5 months ago

@jlurena you're in. lmk if your perms are in need of adjustment.

jlurena commented 4 months ago

I finished it. @mhgbrown pls CR https://github.com/mhgbrown/cached_resource/pull/61

mhgbrown commented 4 months ago

@jlurena thanks for your work. I've merged your PR, but there's a compatibility problem with Ruby 1.9 and concurrent-ruby:

    concurrent-ruby (>= 1.2.3, ~> 1.2) was resolved to 1.2.3, which depends on
      ruby (>= 2.3)

What do you think is the best course of action? Skip the tests and the installation of concurrent-ruby for Ruby < 2.3? Is Ruby < 2.3 still in wide enough use?
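
One way to skip it on old Rubies would be a version guard in the Gemfile (a sketch, not the project's actual Gemfile):

# Gemfile sketch: only pull in concurrent-ruby where it can be resolved.
if Gem::Version.new(RUBY_VERSION) >= Gem::Version.new('2.3')
  gem 'concurrent-ruby', '~> 1.2'
end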

jlurena commented 4 months ago

Honestly, we should stop supporting Ruby 1.9, lol. It's been dead for almost a decade.

mhgbrown commented 4 months ago

Ok, I'll get something going to remove the support.

jlurena commented 4 months ago

> Ok, I'll get something going to remove the support.

Yeah, likely anything under 2.3 should be removed. 2.3 has been EOL for 8 years and 1.9 for 12.

https://endoflife.date/ruby
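
Dropping the old versions could be as simple as a version constraint in the gemspec (a sketch of the relevant line, inside the existing Gem::Specification block):

# cached_resource.gemspec (sketch): refuse installation on EOL Rubies.
spec.required_ruby_version = '>= 2.3'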

mhgbrown commented 3 months ago

@jlurena can you check this out?

mhgbrown commented 3 months ago

released: https://rubygems.org/gems/cached_resource/versions/8.0.0