elben / mc-map

Austin Stone MC Map

Switch out Thin for Puma #30

Closed: jossim closed this issue 11 years ago

jossim commented 11 years ago

This was done in cce6f20b51a26d7428598079c944dad1f38c0859

I also set up a cache in nginx. You can tell whether something was served from the nginx cache by looking at the X-Cache-Status HTTP header. Anything in the assets folder is served directly by nginx and completely bypasses Rails.

jasontbradshaw commented 11 years ago

@jossim, we should also cache the API responses (anything that comes from /communities). I see that as our biggest bottleneck, since every hit there basically downloads the entire database.

jossim commented 11 years ago

@jasontbradshaw Currently, anything that's publicly cacheable should be picked up by nginx. If you set the Cache-Control header to public on the controller actions that should be cached, nginx will pick that up; see https://devcenter.heroku.com/articles/http-caching-ruby-rails#public-requests. Alternatively, I can tell nginx to ignore the Cache-Control header for certain paths.

I believe Rails sets Cache-Control to private by default.
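For example (just a sketch; the controller and action names here are assumptions, not the actual app code), flipping that to public for the communities endpoint could look like:

# Hypothetical app/controllers/communities_controller.rb
class CommunitiesController < ApplicationController
  def index
    @communities = Community.all

    # Allow shared caches like nginx to store this response for 30 minutes.
    expires_in 30.minutes, public: true

    render json: @communities
  end
end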

jasontbradshaw commented 11 years ago

What you're saying (the second thing) is more what I had in mind.

I basically wouldn't want those requests to hit Rails at all, only when they've expired in the cache would they need to fall through. I might be thinking more along the lines of something like Varnish than nginx though, not sure if nginx can do that.

jasontbradshaw commented 11 years ago

Really, I think we might be fine even uncached. The server isn't doing all that much work in general. If it's a problem after the first Sunday, we can revisit it then.

jossim commented 11 years ago

I've told nginx to ignore the Cache-Control header for anything under /communities. Currently it's set to cache 200 and 302 responses for 30 minutes. You may want to play around with the app a bit to make sure it still works the way you expect and isn't serving stale data.

location /communities {
    index  index.php index.html index.htm;
    # Cache 200 and 302 responses in the "proxy_cache" zone for 30 minutes,
    # starting from the first request.
    proxy_cache proxy_cache;
    proxy_cache_valid 200 302 30m;
    proxy_cache_min_uses 1;
    proxy_set_header Host $host;
    # Expose whether the response was served from the cache (HIT, MISS, ...).
    add_header X-Cache-Status $upstream_cache_status;
    # Ignore the app's Cache-Control header so "private" responses still get cached.
    proxy_ignore_headers "Cache-Control";
    proxy_pass http://mcmap_upstream;
}

You can see from the last header below (X-Cache-Status: HIT) that nginx is caching the response, even though Rails marks it private.

diamond:~ jdsimmons$ curl --head http://mcmap.austinstone.org/communities.json
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Fri, 30 Aug 2013 00:23:31 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
X-UA-Compatible: IE=Edge,chrome=1
ETag: "8143a0a84fd3a017c4d1504b347b2e2b"
Cache-Control: must-revalidate, private, max-age=0
X-Request-Id: 7ff606a8e6826f372ff48a20b884fddd
X-Runtime: 0.518018
X-Rack-Cache: miss
X-Cache-Status: HIT

jasontbradshaw commented 11 years ago

That sounds good to me! That's going to be the most commonly hit endpoint, and presumably its data isn't going to change that often overall.

The only other one is the /signup page, but I think we need cookies and such there, so I'm not sure we can/should cache it. @eshira would know better than I.

jossim commented 11 years ago

Hmm, actually some MCs might need to be taken off the map on Sunday if they get full. With HTTP caching we'd have to wait up to 30 minutes for them to disappear.

If action or fragment caching is used, then those entries could be expired when the community is updated.
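For example, a rough sketch of that kind of expiration (the cache key and callbacks are assumptions, just to illustrate the idea):

# Hypothetical sketch: bust the cached payload whenever a community changes.
class Community < ActiveRecord::Base
  after_save    { Rails.cache.delete("communities/index.json") }
  after_destroy { Rails.cache.delete("communities/index.json") }
end

# In the controller, the entry gets rebuilt on the next request:
def index
  json = Rails.cache.fetch("communities/index.json") { Community.all.to_json }
  render json: json
end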

I'm not sure how much updating is expected to happen on Sunday, @mrdthompson should have a better idea.

We might be able to use ETags to address this concern, though it may turn out not to be a concern at all.
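If we go the ETag route, a sketch using Rails' fresh_when (again an assumption about how the action would be written):

# Hypothetical sketch: conditional GET for the communities endpoint.
def index
  @communities = Community.all
  # The ETag changes when a community is added, removed, or updated,
  # so clients get 304 Not Modified only while nothing has changed.
  fresh_when etag: [@communities.count, @communities.maximum(:updated_at)]
end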

jasontbradshaw commented 11 years ago

@eshira would know best, hopefully I'll get to talk to him tonight in-person.

mrdthompson commented 11 years ago

Waiting 30 minutes could present a problem; I'd like to avoid that if possible. I don't foresee lots of MCs needing to be hidden immediately, but it's a definite possibility between services.


elben commented 11 years ago

I don't think header-based caching will help much, because the problem is the number of people that load the page, not the number of times a single person hits it.

elben commented 11 years ago

@jossim do you happen to have a memcached server ready to use?

jossim commented 11 years ago

It should be ready to go; just add the dalli gem and use Rails' normal caching facilities.
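A quick sketch of wiring that up (the memcached address is an assumption; use whatever the server actually is):

# Gemfile
gem 'dalli'

# config/environments/production.rb
config.cache_store = :dalli_store, 'localhost:11211'

# After that, the normal Rails cache API goes through memcached, e.g.:
Rails.cache.fetch("communities/index.json", expires_in: 10.minutes) do
  Community.all.to_json
end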

jossim commented 11 years ago

And I removed the forced caching behavior. Caching is still available if you send the right headers, though.

elben commented 11 years ago

Joseph, can you also explain the Puma setup? How many threads/processes/etc.? There's not much reason, for example, to do Rails-side caching if we're still bottlenecked on the number of processes.

jossim commented 11 years ago

I'm going to do a load test tomorrow; if you have something you'd like to test, let me know. I think a good target is to assume everyone at Downtown, West & StJ will be doing searches simultaneously. I don't know how many people that is, though; probably fewer than 25, I'd guess. Other people will probably try on their phones and such, but signal strength isn't very good at AHS, so they'd most likely have to go outside, at which point they'd probably just do something else.

Puma has 2 worker processes running, each with a maximum of 16 threads and a minimum of 1, so there are always at least 2 threads around, one per worker process. It's listening on a unix socket.
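For reference, a sketch of what that looks like in config/puma.rb (the socket path is a placeholder, not the real one):

# config/puma.rb (sketch)
workers 2       # two worker processes
threads 1, 16   # each worker keeps at least 1 thread and allows up to 16
bind 'unix:///path/to/app/tmp/puma.sock'  # nginx proxies to this socket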

Apparently Puma really shines if you use Rubinius or JRuby, but it still has gains over other servers on MRI.

elben commented 11 years ago

Does that mean we have a max of 32 concurrent connections on the puma side, sharing 2 processes in parallel? Or does it mean at the end of the day, the two processes are still serving only one connection at a time?


elben commented 11 years ago

Ah, I see. It's using MRI "threads", which are subject to the global interpreter lock. That's still, in theory, better than, say, Passenger with only two processes: a global lock beats no threads for a web app that isn't CPU-bound.

elben commented 11 years ago

@jossim how do you restart Puma after I do a git pull? I tried sudo pumactl -F config/puma.rb status, but I think I just brought the service down instead.

mrdthompson commented 11 years ago

You guys are scaring me. The map is gone. Please make it come back.

Your worried friend, D

jossim commented 11 years ago

I'm currently eating breakfast, but I'll bring it back up in a bit. It's all still there so no need to worry.

:)

jossim commented 11 years ago

@eshira to restart, you actually need to kill the master process and start it again. The process ID is kept in tmp/puma.pid, as well as in tmp/puma.state (which also lists the complete running Puma configuration).

To stop:  bundle exec pumactl -S tmp/puma.state stop
To start: bundle exec pumactl -F config/puma.rb start

We are up and running now.

elben commented 11 years ago

@jossim thanks for the info.