@DSAMPAT Heroku has no issues with git branch gems. I even deployed a test app to confirm. What are you seeing that makes you believe that? Is it failing to deploy? Failing to start? Paste any relevant logs here.
@kch I have managed to deploy. Not sure what the issue was. Could have been another gem causing the bundle issue as I am using Windows mingw-32.
I am getting this notification below, should I change anything or is this fine?
app/web.1:
Rack::Timeout.timeout=: class-level settings are deprecated. See README for examples on using the middleware initializer instead.
@DSAMPAT it's just a warning, doesn't really impact anything at the moment.
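For reference, the warning is asking you to move the timeout setting from the deprecated class-level Rack::Timeout.timeout= onto the middleware itself. A minimal sketch of the initializer style the warning points at, for a Rails app (the 15-second value is just an example):

    # config/initializers/rack_timeout.rb
    # Configure the timeout where the middleware is inserted instead of using
    # the deprecated class-level setter.
    Rails.application.config.middleware.insert_before(
      Rack::Runtime,
      Rack::Timeout,
      service_timeout: 15  # seconds
    )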
We are seeing this issue also. I will try with your debug branch.
Unfortunately we are on ruby 2.0... so the beta version is not going to work. Can you make a version based on 0.3.2?
@beatjoerg it should work fine with 2.0. Any reason you think it wouldn't?
@kch Because you use some methods in the code that are not available in ruby 2.0, like super_method. I tried to backport them, but I did not manage to fix everything in the timebox I had. You can see what I "fixed" so far here: https://github.com/beatjoerg/rack-timeout/commits/26e05ee40322eae527aa3fd38a8d933e3dec1819/lib/rack/timeout/legacy.rb
Oh, super_method is really bad, it's 2.2-only. Ok, I'll get rid of that.
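For context, super_method was added in Ruby 2.2: obj.method(:name).super_method returns the next implementation up the ancestor chain. A rough pre-2.2 substitute (an illustrative sketch, not the code that ended up in the gem) walks the ancestors by hand:

    # Sketch of a pre-2.2 stand-in for Method#super_method. Only handles
    # ordinary instance methods; singleton methods are ignored for brevity.
    def super_method_of(obj, name)
      name      = name.to_sym
      owner     = obj.method(name).owner
      ancestors = obj.class.ancestors
      idx       = ancestors.index(owner)
      return nil unless idx
      ancestors[(idx + 1)..-1].each do |mod|
        defines = mod.instance_methods(false).include?(name) ||
                  mod.private_instance_methods(false).include?(name)
        return mod.instance_method(name).bind(obj) if defines
      end
      nil
    end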
@beatjoerg 0.4.2 is out with 2.0 compatibility. I also rebased the debug branch here so it should be good to go now.
@samandmoore might work for you in jruby now too, give it a go.
@kch thanks. Still no luck for me running it because it's not ruby 1.9 compatible:
SyntaxError: /opt/rubies/jruby-1.7.24/lib/ruby/gems/shared/bundler/gems/rack-timeout-7a5914cc9734/lib/rack/timeout/core.rb:94: syntax error, unexpected tLABEL
def initialize(app, service_timeout:nil, wait_timeout:nil, wait_overtime:nil, service_past_wait:false)
@samandmoore ok so your jruby version also doesn't support named parameters. I'll see if I can make a version of the debug branch off 0.3.2…
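For anyone wondering what that entails: the 1.9-era parser rejects keyword arguments, so the signature from the error above has to become an options hash. A hedged sketch, with parameter names taken from the error output and the bodies purely illustrative:

    # 1.9-compatible initializer using an options hash instead of keyword arguments.
    def initialize(app, opts = {})
      @app               = app
      @service_timeout   = opts.fetch(:service_timeout,   nil)
      @wait_timeout      = opts.fetch(:wait_timeout,      nil)
      @wait_overtime     = opts.fetch(:wait_overtime,     nil)
      @service_past_wait = opts.fetch(:service_past_wait, false)
    end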
@samandmoore give it a go https://github.com/heroku/rack-timeout/tree/vanishing-env-info-debug-0.3.2
thanks, will give it a shot.
Is anyone actually running this debug branch in a reasonably trafficked app? I'd've expected to have some useful log lines by now.
How about this: I'll mail some heroku swag to the first person who delivers log lines that enlighten me. Helps if you like purple stuff!
I'm working on it. Just in the midst of another big change. Will likely have results for you tomorrow :)
@kch Just deployed to production... Waiting for the errors...
@kch Unfortunately last time we tried this branch we almost took down our production. We have created a bypass for the error but it still happens at quite a regular frequency. ~200 per hour.
@mbasset I wonder if only injecting the hijack on a sample of requests (10% or so) would make it more bearable for you. Lmk if it sounds like a good idea and I'll add that as an option.
@kch That sounds good in theory but even having it on 10% of our traffic could cause our servers to become unstable and die, albeit more slowly. We may be able to try this in our staging environment as we recently created a script to simulate some production load. However need to find time to try this out.
@mbasset well I'd make the frequency configurable. Also I believe I rewrote the debug code to be a lot nicer since you last tried it. Not sure.
@mbasset so I did it. You can set the env var RACK_TIMEOUT_HIJACK_FREQUENCY to any number between 0 and 1.0, and that'll determine the frequency.
I've also made changes so I'm hijacking even fewer methods, trying to stay more out of the way.
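Roughly speaking, a knob like that gates the debug instrumentation per request. A minimal sketch of the idea (the actual branch code may differ):

    # Hijack only a sampled fraction of requests, controlled by an env var in
    # the 0.0..1.0 range (1.0 = every request, 0.0 = none).
    HIJACK_FREQUENCY = Float(ENV.fetch("RACK_TIMEOUT_HIJACK_FREQUENCY", "1.0"))

    def hijack_this_request?
      rand < HIJACK_FREQUENCY
    end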
Cool I'll see when we get a chance to try this version out and see what happens.
Just FYI, I still have not had a chance to look into this yet. Ideally someone else will have time to try it.
@kch Hoping to try out the branch next week. However, I see it only emits warnings. In our production environment (which is the only environment this seems to happen in), how would you suggest getting the output? We are currently using an nginx/unicorn setup.
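One simple way to capture that output under unicorn is to make sure the workers' stderr lands somewhere persistent, assuming the debug branch writes its warnings to stderr like the stock gem's default logger. A sketch; the paths are placeholders:

    # config/unicorn.rb
    # Send worker stdout/stderr to files so rack-timeout warnings can be
    # inspected in production.
    stderr_path "/var/log/unicorn/stderr.log"
    stdout_path "/var/log/unicorn/stdout.log"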
I've been working on updating the 0.3.2 branch to look like the newer one. I tried to run it under a load test (because we only see these issues when under reasonable load) and I haven't been able to reproduce it in development mode. However, I was able to reproduce it in a capybara feature spec. Unfortunately, that spec didn't have the logging in place. I am hopefully going to try it in production in the next week or so.
@kch We have been trying your latest branch out in an environment that normally causes ~100 of these errors an hour with rails 4.2.6 and it does not appear to be happening anymore. I wonder if the changes that were made to timing in that branch have solved the issue. Possibly also fixed with rails update. Will continue to monitor for the next while to see.
@kch @mbasset: Same here, we do not see the error anymore when running the debug branch. (We also use rails 4.2.6.)
@kch I have a version of the debug branch in production, and I'm seeing occasional log statements like this:
[DM4PNP]2016-05-24 07:15:53 -0400 severity=WARN, source=rack-timeout-debug info-vanish id=4773927afb3ba69e09eed01d8ae1b6fd
Unfortunately, there are no other logs associated with that id value, so I've got no call stack for you. That makes me think that the method that is unsetting the env info is one of the methods we're not redefining with the logging behavior. Thoughts?
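"Redefining with the logging behavior" here means wrapping the suspect methods so each call logs before delegating to the original. A toy illustration of the pattern (not the actual debug-branch code), watching deletions of rack-timeout keys from a Rack env hash:

    # Prepend a module so every delete of a rack-timeout key is logged with a
    # call stack before the real delete runs. Illustrative only; the real
    # branch wraps different methods and tags each line with a request id.
    module LogEnvDeletes
      def delete(key)
        if key.to_s.start_with?("rack-timeout.")
          warn "source=rack-timeout-debug delete key=#{key.inspect}\n#{caller.join("\n")}"
        end
        super
      end
    end

    # env.singleton_class.prepend(LogEnvDeletes)  # applied per request to the env hash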
I'm sorry for bumping this for everyone - did anyone have any success in resolving this?
Rails 4.2.6 and the latest 0.4.2 version have fixed it entirely for us.
Hi, I am suffering from the same issue with rack-timeout 0.3.2, ruby 1.9.3-p551, and Rails 3.2.22.5. Unfortunately I am stuck with what I have (can't upgrade ruby or Rails). It seems that ruby 1.9.x only works with 0.3.x, so I can't use 0.4.x. I will try downgrading to 0.2.x and keep you posted.
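For reference, pinning to an older series is just a Gemfile constraint; a sketch, pick whichever 0.2.x release works for you:

    # Gemfile: stay on the 0.2.x series of rack-timeout
    gem "rack-timeout", "~> 0.2.4"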
It appears this has been resolved in some way or otherwise gone away, so closing pending new reports.
I saw essentially the same error in v0.5.1 (slightly different line numbers) and with Rails v6.0.3 and gitlab-puma v4.3.3.gitlab.2. It was preceded by
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/rack-timeout-0.5.1/lib/rack/timeout/support/scheduler.rb:73 run> terminated with exception (report_on_exception is true):
/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/rack-timeout-0.5.1/lib/rack/timeout/core.rb:168:in `_set_state!': undefined method `state=' for nil:NilClass (NoMethodError)
The only (remote) clue I have to hopefully help reproduce the issue is that the Unicode characters Ç and Ã appear in a commit message that's not being displayed where it should be. I'm grasping at straws here, but is string parsing involved at any point?
Have been having this issue since upgrade to GitLab 13.3.2-ee (13.2.x did not happen). This is with 0.5.2, which I realise is not the latest version of this package. The two instances with EE Premium licence have this problem and a service restart "fixes" it until it occurs again. Another instance without a licence uploaded appears to not have this problem, but it could just be RNG. I will raise an issue with GitLab support regarding this.
On another host the traceback is very different, so it is almost certain it is a problem in Puma or some component of GitLab and not this project, apologies for the noise. It also happened with GitLab 13.2.x fyi.
Yesterday we upgraded rack-timeout from 0.2.4 to 0.3.1 and started seeing this error in production (we're using Heroku):
Rack middlewares on production:
Tried reproducing locally but I can't seem to find the right conditions to do so.