Thanks for catching this! I think the solution will have to be to save/restore
the old datastore's connection around every yield in the transaction() call in
context.py. What do you think?
Original comment by guido@google.com
on 21 Sep 2012 at 5:58
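To illustrate the save/restore idea, here is a minimal sketch (not the actual context.py change); it assumes the SDK's private datastore._GetConnection/_SetConnection helpers, and the helper name is illustrative:

```python
from google.appengine.api import datastore

def _with_transactional_connection(tconn, func):
    # Hypothetical helper: install the transactional connection only while
    # func runs, then put the caller's connection back, so code that resumes
    # after a yield sees its own (old) datastore connection again.
    old_conn = datastore._GetConnection()   # save the caller's connection
    datastore._SetConnection(tconn)         # switch to the transactional one
    try:
        return func()                       # in ndb, a yield would sit here
    finally:
        datastore._SetConnection(old_conn)  # always restore the old connection
```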
Yes, I think that ought to do it. Looking forward to the fix in 1.7.3. Thanks
Guido!
Original comment by pi...@knowlabs.com
on 21 Sep 2012 at 6:04
Looking into this. (But issue 210 has priority.)
Original comment by guido@google.com
on 25 Sep 2012 at 12:31
This issue was closed by revision 49857afed4df.
Original comment by guido@google.com
on 25 Sep 2012 at 5:49
My app still encounters the same error when multiple async transactions are
run in parallel (Python 2.7 runtime, GAE SDK 1.7.3). The error happens when
60 async transactions are run in parallel, but it does not occur every time.
Is there a limit on how many async transactions can be run in parallel?
The error was:
suspended generator _get_tasklet(context.py:266) raised BadRequestError(The referenced transaction has expired or is no longer valid.)
W 2012-12-13 10:00:52.967 suspended generator get(context.py:667) raised
BadRequestError(The referenced transaction has expired or is no longer valid.)
W 2012-12-13 10:00:52.967 suspended generator get(context.py:667) raised
BadRequestError(The referenced transaction has expired or is no longer valid.)
Original comment by zhongt...@gmail.com
on 14 Dec 2012 at 2:08
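For reference, a rough sketch of the kind of workload described above (many independent transactions started asynchronously and waited on together); the Counter model and bump callback are illustrative, not taken from the report:

```python
from google.appengine.ext import ndb

class Counter(ndb.Model):
    count = ndb.IntegerProperty(default=0)

def bump(key):
    # Transactional callback: create-or-increment a single entity.
    ent = key.get() or Counter(key=key)
    ent.count += 1
    ent.put()

keys = [ndb.Key('Counter', 'c%d' % i) for i in range(60)]
futures = [ndb.transaction_async(lambda k=k: bump(k)) for k in keys]
ndb.Future.wait_all(futures)  # some of these can fail with the BadRequestError above
```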
We've occasionally seen this error as well under high transactional load. I
don't know whether it has the same root cause, though -- I haven't been able
to narrow down the circumstances in which it occurs. Anecdotally, in one case
removing logging statements from the code made the error stop occurring, so
there may well be some interference left between ndb and old-style datastore
code.
Original comment by pi...@knowlabs.com
on 14 Dec 2012 at 2:24
We (repcore-prod) are also seeing this "The referenced transaction has expired
or is no longer valid" error fairly frequently in our logs (post-1.7.4).
Is there something we can do to alleviate it? Should we reduce the number of
RPCs we are issuing in parallel?
Original comment by jcoll...@vendasta.com
on 20 Dec 2012 at 2:51
We've learned that datastore transactions have a limited lifetime: you get 15
seconds for free, then the transaction times out after 15 consecutive seconds
of idleness (so the minimum lifetime is 30 seconds), and it is always killed
after 60 seconds. So while there's no nominal upper bound on the number of
parallel transactions, in practice CPU contention and ndb's unfair scheduling
algorithm will prevent you from going very high. I posted a monkeypatch on
Stack Overflow for recovering from expired transactions, which can help a bit:
http://stackoverflow.com/a/14268276/1952074
Original comment by pi...@knowlabs.com
on 10 Jan 2013 at 10:20
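For anyone who just wants a quick workaround, here is a simpler application-level sketch (not the monkeypatch from the linked answer): retry the whole transaction when the datastore reports it as expired. The wrapper name and retry count are illustrative:

```python
from google.appengine.api import datastore_errors
from google.appengine.ext import ndb

def transaction_with_expiry_retry(callback, attempts=3):
    # Retry the whole transaction if the datastore says it expired;
    # any other BadRequestError is re-raised immediately.
    for attempt in range(attempts):
        try:
            return ndb.transaction(callback)
        except datastore_errors.BadRequestError as e:
            if ('transaction has expired' not in str(e)
                    or attempt == attempts - 1):
                raise
            # Expired mid-flight: start a fresh transaction and try again.
```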
Original issue reported on code.google.com by
pi...@knowlabs.com
on 21 Sep 2012 at 5:47