Right now in the hub, when the datastore times out during a content fetch, we
rely on the task queue's automatic retries. Those retries come with exponential
back-off, which is bad for throughput. Instead, we should reschedule the task
(possibly with the same ETA) on the retry queue ourselves.
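A minimal sketch of that idea: catch the Timeout and explicitly re-enqueue the task with its original ETA, instead of letting the exception propagate and trigger the queue's back-off. `FakeQueue`, `handle_task`, and `payload` are hypothetical stand-ins for illustration, not the hub's actual code; in the real app the re-enqueue would go through the App Engine task queue API against a dedicated retry queue.

```python
class Timeout(Exception):
    """Stand-in for the datastore Timeout seen in the traceback below."""


class FakeQueue:
    """Hypothetical in-memory stand-in for a named task queue."""

    def __init__(self):
        self.tasks = []

    def add(self, payload, eta=None):
        # Record the task along with its scheduled ETA.
        self.tasks.append((payload, eta))


retry_queue = FakeQueue()


def handle_task(payload, eta, parse_feed):
    """Run parse_feed; on a datastore Timeout, re-enqueue the task on the
    retry queue with its original ETA instead of re-raising, so the task
    queue's exponential back-off never kicks in."""
    try:
        parse_feed(payload)
    except Timeout:
        retry_queue.add(payload, eta=eta)
```

The key design point is that the handler swallows the Timeout after rescheduling, so the original queue sees a success and applies no back-off.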
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 509, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/pubsubhubbub/feed-ids.336557603768777282/main.py", line 319, in decorated
    return func(myself, *args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/feed-ids.336557603768777282/main.py", line 2294, in post
    if parse_feed(feed_record, headers, content):
  File "/base/data/home/apps/pubsubhubbub/feed-ids.336557603768777282/main.py", line 2211, in parse_feed
    db.run_in_transaction(txn)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1819, in RunInTransaction
    DEFAULT_TRANSACTION_RETRIES, function, *args, **kwargs)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1948, in RunInTransactionCustomRetries
    raise _ToDatastoreError(err)
Timeout
Original issue reported on code.google.com by bslatkin on 24 Sep 2009 at 4:57