ankurpiyush26 / pubsubhubbub

Automatically exported from code.google.com/p/pubsubhubbub

Reduce contention on EventToDeliver subscription iteration #34

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
See this exception:

too much contention on these datastore entities. please try again.
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 503, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 241, in decorated
    return func(myself, *args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 1855, in post
    work.update(more_subscribers, failed_callbacks)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 1161, in update
    self.put()
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 696, in put
    return datastore.Put(self._entity)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 166, in Put
    raise _ToDatastoreError(err)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 2055, in _ToDatastoreError
    raise errors[err.application_error](err.error_detail)
TransactionFailedError: too much contention on these datastore entities. please try again.

Basically we re-put the EventToDeliver entity after each chunk of N
subscribers. When we're done with that chunk, we enqueue another task to
handle the next N that need to be contacted. The trouble is that the entity
group can't sustain transactions at such a high rate.
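
For context, the failing write pattern looks roughly like this (a simplified
sketch, not the project's actual code; the real logic lives in
EventToDeliver.update() in main.py, and the model field here is a stand-in):

    from google.appengine.ext import db

    class EventToDeliver(db.Model):
        # Simplified stand-in for the real model in main.py.
        last_callback = db.TextProperty(default='')

        def update(self, more_subscribers, failed_callbacks):
            # Real bookkeeping elided; what matters is the unconditional
            # put(). Each delivery task calls this once per chunk and then
            # immediately enqueues the next task, so transactional writes
            # hit this one entity group faster than the datastore allows.
            self.put()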

Simple solution:
* Have the continuation task always use a countdown of 1 second to
rate-limit this behavior.
* Increase the EVENT_SUBSCRIBER_CHUNK_SIZE constant so each iteration does
more work, raising per-iteration latency and reducing the number of
iterations needed (see the sketch after this list).
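
Both changes together might look like this sketch (the handler path, params,
and chunk-size values are assumptions, not the project's actual ones; in the
2009 SDK the import path was google.appengine.api.labs.taskqueue):

    from google.appengine.api import taskqueue

    # Bigger chunks mean more work per iteration, so fewer transactional
    # puts per second against the EventToDeliver entity group.
    EVENT_SUBSCRIBER_CHUNK_SIZE = 50  # previous value assumed to be smaller

    def enqueue_continuation(event_key, offset):
        # countdown=1 guarantees at least one second between iterations,
        # rate-limiting writes to the same entity group.
        taskqueue.add(
            url='/work/event_delivery',  # hypothetical handler path
            params={'event_key': str(event_key), 'offset': offset},
            countdown=1)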

Long-term solution:
* Have one task sequence that iterates through all subscribers and another
that actually does delivery (sketched below).
* This will isolate broken callbacks into their own transactional pools.
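
A sketch of that split, with hypothetical handler paths (each delivery task
retries independently, so one dead callback no longer blocks iteration):

    from google.appengine.api import taskqueue

    def fan_out_chunk(event_key, callbacks, next_cursor):
        # Task sequence 1: iterate subscribers and fan out work. This chain
        # is the only writer of the EventToDeliver entity, so its entity
        # group sees at most one put per chunk.
        for callback in callbacks:
            # Task sequence 2: one task per callback. A broken callback
            # retries in isolation instead of stalling the iteration.
            taskqueue.add(url='/work/deliver_one',  # hypothetical path
                          params={'event_key': str(event_key),
                                  'callback': callback})
        if next_cursor:
            taskqueue.add(url='/work/iterate',  # hypothetical path
                          params={'event_key': str(event_key),
                                  'cursor': next_cursor})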

Original issue reported on code.google.com by bslatkin on 17 Jul 2009 at 11:32

GoogleCodeExporter commented 9 years ago

Original comment by bslatkin on 21 Sep 2009 at 8:08

GoogleCodeExporter commented 9 years ago
In the past 6+ months this has not been an issue again. The temporary fix of
contacting 50+ subscribers per delivery iteration basically guarantees at
least 1 second of latency per iteration, so the iteration contention is gone.

Original comment by bslatkin on 26 Feb 2010 at 11:02