Closed: grivkees closed this 12 years ago.
You never git added thechronicle_modules/util/lib/cache.
Ha, of course not, I was in a rush before class. I'll do it when I'm done. Also I need to add a cache empty option.
Yes you do. util.cache.bust should change the random tag and it should be called whenever a layout changes or an article is edited (that's at least the dumbest way to preserve functionality).
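For concreteness, a sketch of where such a bust call could sit; the require paths and the api.articles.edit helper are hypothetical illustrations, not code from the repo — only util.cache.bust itself is the function being discussed:

    // Paths and the api.articles.edit helper are assumptions, for illustration only.
    var util = require('../../util');
    var api = require('../../api');

    // Hypothetical wrapper around an article edit that also busts the cache.
    function editArticleAndBust(articleId, changes, callback) {
        api.articles.edit(articleId, changes, function (err) {
            if (err) return callback(err);
            util.cache.bust();   // rotate the random tag so every previously cached key misses
            callback();
        });
    }

The same call would go wherever a layout change is saved.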
@jep37 added in the missing file and had it bust the cache on article edits and layout changes; tested and working.
Use Redis for the cache. Automatic key expiry is free (TTL), as is a limit on max memory usage (LRU eviction). Memory on each Heroku instance is scarce at 512MB, and very bad things will happen when it is exceeded; web server scalability is also memory-bound. It also avoids cache duplication across multiple web workers and allows deploys without wiping out the cache. I noticed the site is loading quite slowly; it may be a good idea to add an extra instance to see if that helps.
Joe and I also thought about getting a small 1.7GB EC2 instance for redis since it would be much cheaper. We have the luxury of caching everything for a very long time.
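A minimal sketch of what this buys, assuming the node "redis" client; the memory cap, eviction policy, TTL, and key name are illustrative, not settings from the actual app:

    var redis = require('redis');
    var client = redis.createClient();

    // Cap Redis memory and evict least-recently-used keys once the cap is hit.
    client.config('SET', 'maxmemory', '100mb');
    client.config('SET', 'maxmemory-policy', 'allkeys-lru');

    // Every write carries a TTL, so stale entries also expire on their own.
    var renderedHtml = '<html>...</html>';
    client.setex('rendered:frontpage', 3600, renderedHtml, function (err) {
        if (err) console.error('cache write failed', err);
    });

On a managed Redis add-on the CONFIG command may be disabled; on a self-hosted EC2 box the same two settings would normally live in redis.conf instead.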
agree with dean 100%, don't cache on the heroku servers, cache on redis. memory and consistency problems (from each server having its own cache) otherwise.
Recommendation:
- NGINX
- Disk backed Redis-cache a la EC2.
Persist to disk. Keep what's hot in Redis. Direct all else to the mothership.
Another thought... MongoDB is boss at this biznass: http://www.mongodb.org/display/DOCS/Caching
I agree. I'm imagining it working like this:

    var getSomething = util.cache(function (a, b, callback) {
    });

Somewhere else:

    getSomething(a, b, callback);

And util.cache takes the hash of the function body plus all arguments EXCEPT the callback, plus a random tag.

    util.cache = function (func) { return function () { ... } };
And do it with Redis, agreed.
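To make that shape concrete, here is a minimal sketch of a Redis-backed util.cache along those lines; the sha1 hashing, the one-hour TTL, and the node "redis"/"crypto" usage are illustrative assumptions, not the code that was merged:

    // Sketch only: key derivation, TTL, and error handling are assumptions.
    var crypto = require('crypto');
    var redis = require('redis');

    var client = redis.createClient();
    var randomTag = String(Date.now());   // rotated by bust() to invalidate everything at once

    exports.bust = function () {
        randomTag = String(Date.now()) + Math.random();
    };

    exports.cache = function (func) {
        return function () {
            var args = Array.prototype.slice.call(arguments);
            var callback = args.pop();   // assume the last argument is the node-style callback

            // Key = hash of the function body + every argument except the callback + the random tag.
            var key = 'cache:' + crypto.createHash('sha1')
                .update(func.toString())
                .update(JSON.stringify(args))
                .update(randomTag)
                .digest('hex');

            client.get(key, function (err, cached) {
                if (!err && cached) return callback(null, JSON.parse(cached));

                // Miss: run the real function, store its result with a TTL, then call back.
                func.apply(null, args.concat(function (err, result) {
                    if (err) return callback(err);
                    client.setex(key, 3600, JSON.stringify(result), function () {
                        callback(null, result);
                    });
                }));
            });
        };
    };

With this in place, calling bust() from the article-edit and layout-change paths (as described above) changes the tag, so every previously cached key simply stops matching and ages out via its TTL.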
Heroku has nginx built into its infrastructure. We should be using it, provided nobody removed the Expires header from the templates.
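As a point of reference, a hedged sketch of sending caching headers from Express middleware; the one-hour max-age and the middleware placement are assumptions, not what the templates currently do:

    var express = require('express');
    var app = express();

    app.use(function (req, res, next) {
        // Let any front-end cache (and browsers) reuse the response for an hour.
        res.setHeader('Cache-Control', 'public, max-age=3600');
        res.setHeader('Expires', new Date(Date.now() + 3600 * 1000).toUTCString());
        next();
    });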
I'll create a GUI in Visual Basic. Track the killer's IP address.
Or use git blame.
It works, and it's much faster. Give it a try.
It's a hack (like the rest of the site), and we need to be careful: it never evicts entries when the cache fills up, it only removes old versions, so we need to make sure we're using it sensibly and not running the machine out of memory.
@jep37