When I run Bicho against sourceforge.net (an example from the README), I get this traceback:
Checking URL: http://sourceforge.net
Running Bicho with delay of 15 seconds
Traceback (most recent call last):
  File "./bicho", line 8, in <module>
    retval = Bicho.main.main()
  File "/home/sumanah/test/Bicho/Bicho/main.py", line 56, in main
    backend.run()
  File "/home/sumanah/test/Bicho/Bicho/backends/allura.py", line 381, in run
    ticketTotal = json.loads(f.read())
  File "/usr/lib/python2.7/json/__init__.py", line 328, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 383, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
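The last frame shows json.loads at allura.py line 381 being handed a response body that isn't JSON, most likely the HTML of an error page. As a sketch (the function name and wording are mine, not Bicho's), wrapping the parse would make the real payload visible in the error message:

```python
import json

def parse_ticket_count(raw):
    """Decode Allura's JSON response, surfacing non-JSON bodies clearly.

    When the server returns an error page instead of JSON, re-raise with
    the first bytes of the payload so the log shows what actually came back.
    """
    try:
        return json.loads(raw)
    except ValueError as err:  # json's decode error subclasses ValueError
        raise ValueError("Allura endpoint did not return JSON "
                         "(first bytes: %r): %s" % (raw[:80], err))
```

With that in place, the log would show the start of the 500 error page instead of the opaque "No JSON object could be decoded".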
I think I can trace this to lines 375-378 and 399-404 of allura.py; they're adding pagination-related strings to self.url_issues, but when you visit the resulting URL in a browser (e.g. http://sourceforge.net/rest/p/allura/tickets/search/?limit=1&q=mod_date_dt%3A[1900-01-01T00%3A00%3A00Z%20TO%20%202013-11-07T08%3A29%3A01.406453Z] ), you get a 500 error.

Has the actual code of the Allura application on sourceforge.net changed? Or perhaps they've just added so many tickets that pagination takes longer, and they kill the hard-to-compute query by spitting out a 500 error?
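For reference, the failing query can be rebuilt by hand to test the endpoint directly; the base URL and date range below are taken from the example above, everything else is my own scaffolding, not Bicho's code:

```python
from urllib.parse import quote  # Python 3 stdlib; Bicho itself targets 2.7

# Rebuild the search URL that allura.py appears to be assembling, so it
# can be pasted into a browser or curl to check whether it still 500s.
base = "http://sourceforge.net/rest/p/allura/tickets/search/"
date_range = "mod_date_dt:[1900-01-01T00:00:00Z TO 2013-11-07T08:29:01.406453Z]"
url = base + "?limit=1&q=" + quote(date_range, safe="")
print(url)
```

Note that quote() percent-encodes the brackets as well as the colons and spaces; whether Allura treats that form identically to the literal-bracket URL above is worth checking separately.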