Can you give an indication of what times out? Is it the database command that's timing out or the HTTP transaction? My guess is that it's the latter, but still, it would be good to be sure.
Original comment by azizatif on 6 Sep 2007 at 5:57
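To clarify which timers are in play here: ADO.NET has a per-command timeout, while ASP.NET has a per-request execution timeout. The sketch below is illustrative only; the connection string, query, and table name are placeholders, not ELMAH's own code:

```csharp
// Illustrative only: the two kinds of timeout being discussed.
using System;
using System.Data.SqlClient;

class TimeoutKinds
{
    static void Main()
    {
        // 1) Database command timeout: ADO.NET gives up on a single command
        //    after CommandTimeout seconds (default 30) and throws SqlException.
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=Elmah;Integrated Security=SSPI"))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM ELMAH_Error", connection))
        {
            command.CommandTimeout = 30; // seconds
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }

        // 2) HTTP request timeout: ASP.NET aborts the whole request after
        //    <httpRuntime executionTimeout="110" /> seconds (web.config), so a
        //    handler can time out even when each individual query is fast.
    }
}
```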
I'm pretty sure it's an HTTP timeout because the command runs quickly in SQL Query Analyzer.
Original comment by haac...@gmail.com on 6 Sep 2007 at 7:12
Yes, I gave it a look. The code that retrieves the errors and spans them across different channel items by day looks CPU- and disk-intensive. A fast hack is to change the pageSize variable in the Render method of the handler to a bigger value, so that it retrieves more results at a time. This performs fine with 15,000 records generated automatically yesterday (read: they are all shown in the digest). But note that it's unlikely you'll get so many exceptions in a few days, or at least I hope not. ;)
Original comment by simone.b...@gmail.com on 6 Sep 2007 at 8:20
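The paging loop being described works roughly like the sketch below. This is a simplification rather than the handler's actual code; it assumes ErrorLog.GetErrors(pageIndex, pageSize, list) fills the list with one page of entries and returns the total error count:

```csharp
// Rough sketch of the paging described above, not the handler's actual code.
// Every GetErrors call is a database roundtrip, so a small pageSize means
// many roundtrips while the digest is being built.
using System.Collections;
using Elmah;

static class DigestPagingSketch
{
    static void CollectErrors(ErrorLog log, IList collected)
    {
        const int pageSize = 30; // the "fast hack": raise this to fetch more per roundtrip
        int pageIndex = 0;
        int total;

        do
        {
            ArrayList page = new ArrayList();
            total = log.GetErrors(pageIndex, pageSize, page); // one roundtrip per page
            foreach (object entry in page)
                collected.Add(entry);
            pageIndex++;
        }
        while (pageIndex * pageSize < total);
    }
}
```

With pageSize raised to a few hundred, the same number of entries requires far fewer roundtrips, which is why the hack helps.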
Simone, if you are using SQLite (which is hosted in-process) and have enough RAM that the database file is mostly cached from recent usage, then yes, generating 15,000 entries could give the illusion of the whole task being entirely CPU-bound. In reality, though, the exact mileage will vary. If you're using SQL Server over a network then there'll be synchronous I/O involved and CPU usage will go down. And as you said, 15,000 entries in a day means you need to see and address the problem in another light. :) Perhaps this does bring out another problem, though: should the digest feed have a harder upper limit on total entries across days? Meanwhile, I hope Haacked can shed some more light on why it's the HTTP request that's timing out. I haven't been able to reproduce it on my end so far with my data pattern.
Original comment by azizatif on 6 Sep 2007 at 4:17
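A hard upper limit along those lines could be as simple as stopping the paging loop once a fixed number of entries has been collected. The sketch below is hypothetical; maxDigestEntries is not an existing ELMAH setting:

```csharp
// Hypothetical sketch of a hard cap on total digest entries across days.
// MaxDigestEntries is illustrative and not an existing ELMAH setting.
using System.Collections;
using Elmah;

static class DigestCapSketch
{
    const int MaxDigestEntries = 1000;

    static void CollectCappedErrors(ErrorLog log, IList collected)
    {
        const int pageSize = 100;
        int pageIndex = 0;
        int total;

        do
        {
            ArrayList page = new ArrayList();
            total = log.GetErrors(pageIndex, pageSize, page);

            foreach (object entry in page)
            {
                if (collected.Count >= MaxDigestEntries)
                    return; // hard stop once the cap is reached
                collected.Add(entry);
            }

            pageIndex++;
        }
        while (pageIndex * pageSize < total);
    }
}
```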
Does the fix for Issue #32 help in this case? It should be a lot faster now. I can imagine that the timeout was occurring in the handler simply because it was paging through a very large log, making roundtrips to the database as it went. Haacked, could you check if this is now resolved in your case? Of course, if there are hundreds of errors being logged per day then it could still take a while to produce a digest feed that spans 15 days of history. Perhaps a hard upper limit needs to be set there, but I doubt it would really help, because a site with lots of errors (probably caused by spiders, crawlers and form spam) will always be getting a clipped feed. Frankly, the right solution there is to add filtering (http://code.google.com/p/elmah/wiki/ErrorFiltering) so as to decrease the noise.
Original comment by azizatif on 12 Sep 2007 at 10:11
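For reference, the filtering approach described on that wiki page amounts to dismissing noisy errors before they are logged, for example from Global.asax. The sketch below assumes the logging module is registered under the name "ErrorLog" and that Elmah.ErrorFilterModule is added in web.config; the 404 test is just an example of the kind of crawler noise worth dropping:

```csharp
// Global.asax.cs sketch: dismiss noisy errors (here, 404s raised by spiders
// and crawlers) before ELMAH logs them, keeping the log and the digest feed
// small. Assumes ErrorFilterModule is registered in web.config.
using System.Web;
using Elmah;

public class Global : HttpApplication
{
    protected void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
    {
        var httpException = e.Exception.GetBaseException() as HttpException;

        if (httpException != null && httpException.GetHttpCode() == 404)
            e.Dismiss(); // keep "file not found" noise out of the log and digest
    }
}
```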
Original comment by azizatif on 13 Nov 2008 at 5:58
Original issue reported on code.google.com by haac...@gmail.com on 5 Sep 2007 at 6:20