Closed by AndyWardle 9 years ago
It is fine to have the output out of sink (ha!) in my opinion. However, having a new file created on each app pool recycle will probably cause issues for DV, as they are expecting one file per day.
If you think the messages will be serialised correctly, then the code looks fine. :shipit:
What do you mean by "serialised correctly"? The ordering, or just that they are written to the file intact?
I mean that the streams don't flush at the same time, so the log events from the two writers get mangled together.
To elaborate: I think the events are linearised (is that a word?) in the sink, so the sink owns its resource and assumes exclusive access.
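To illustrate what "linearised in the sink" means, here is a minimal sketch (not Serilog's actual implementation; the `FileSink` and `emit` names are illustrative assumptions): the sink takes a lock per event, so a whole line hits the stream before any other event can start.

```python
import threading

# Hypothetical sketch of "the sink owns its resource and assumes
# exclusive access": a single sink serialises events onto one stream,
# so callers never interleave partial writes.
class FileSink:
    def __init__(self, stream):
        self._stream = stream          # the resource this sink owns
        self._lock = threading.Lock()  # linearises concurrent emits

    def emit(self, timestamp, level, message):
        line = f"[{timestamp}] {level}: {message}\n"
        with self._lock:               # one complete event at a time
            self._stream.write(line)
            self._stream.flush()
```

The guarantee only holds while there is exactly one sink per file; two sink instances opened on the same file each hold their own lock, which is exactly the mangling scenario discussed here.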
Yeah, I understand you now. It is an issue:
[19/11/2012 12:31:00 +00:00] Information: "__32432"[19[19/11/2012 12:35:00 +00:00] Information: "__32436"[19[19/11/2012 12:39:00 +00:00] Information: "__32440"[19[19/11/2012 12:43:00 +00:00] Information: "__32444"[19[19/11/2012 12:47:00 +00:00] Information: "__32448"[19[19/11/2012 12:51:00 +00:00] Information: "__32452"[19[19/11/2012 12:55:00 +00:00] Information: "__32456"[19[19/11/2012 12:59:00 +00:00] Information: "__32460"[19[19/11/2012 13:03:00 +00:00] Information: "__32464"[19[19/11/2012 13:07:00 +00:00] Information: "__32468"[19[19/11/2012 13:11:00 +00:00] Information: "__32472"[19[19/11/2012 13:15:00 +00:00] Information: "__32476"[19[19/11/2012 13:19:00 +00:00] Information: "__32480"[19[19/11/2012 13:23:00 +00:00] Information: "__32484"[19[19/11/2012 13:27:00 +00:00] Information: "__32488"[19[19/11/2012 13:31:00 +00:00] Information: "__32492"[19[19/11/2012 13:35:00 +00:00] Information: "__32496"[19[19/11/2012 13:39:00 +00:00] Information: "__32500"[19[19/11/2012 13:43:00 +00:00] Information: "__32504"[19[19/11/2012 13:47:00 +00:00] Information: "__32508"[19[19/11/2012 13:51:00 +00:00] Information: "__32512"[19[19/11/2012 13:55:00 +00:00] Information: "__32516"[19[19/11/2012 13:59:00 +00:00] Information: "__32520"[19[19/11/2012 14:03:00 +00:00] Information: "__32524"[19[19/11/2012 14:07:00 +00:00] Information: "__32528"[19[19/11/2012 14:11:00 +00:00] Information: "__32532"[19[19/11/2012 14:15:00 +00:00] Information: "__32536"[19[19/11/2012 14:19:00 +00:00] Information: "__32540"[19[19/11/2012 14:23:00 +00:00] Information: "__32544"[19[19/11/2012 14:27:00 +00:00] Information: "__32548"[19[19/11/2012 14:31:00 +00:00] Information: "__32552"[19[19/11/2012 14:35:00 +00:00] Information: "__32556"[19[19/11/2012 14:39:00 +00:00] Information: "__32560"[19[19/11/2012 14:43:00 +00:00] Information: "__32564"[19[19/11/2012 14:47:00 +00:00] Information: "__32568"[19[19/11/2012 14:51:00 +00:00] Information: "__32572"[19[19/11/2012 14:55:00 +00:00] Information: 
"__32576"[19[19/11/2012 14:59:00 +00:00] Information: "__32580"[19[19/11/2012 15:03:00 +00:00] Information: "__32584"[19[19/11/2012 15:07:00 +00:00] Information: "__32588"[19[19/11/2012 15:15:00 +00:00] Information: "__32596"
:rabbit:
:shipit:
I think we need to add tests for concurrent access (at least two concurrent writers) emitting log messages at high frequency. My assumption is that the output will then be mangled.
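A sketch of such a test, assuming a shared stream and per-event locking (the `write_events` helper and `EVENTS` count are illustrative, not from this codebase): two threads each write events in several parts, holding a lock for the duration of one event, and the test asserts that every resulting line is a complete, well-formed event.

```python
import io
import threading

shared = io.StringIO()
lock = threading.Lock()
EVENTS = 1000  # "high frequency": many events per writer

def write_events(writer_id):
    for n in range(EVENTS):
        # Each event is written in several parts, so without the lock
        # the parts from the two writers could interleave.
        parts = ["[19/11/2012] ", "Information: ", f'"__{writer_id}_{n}"\n']
        with lock:  # hold the lock for the whole event
            for p in parts:
                shared.write(p)

threads = [threading.Thread(target=write_events, args=(i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

lines = shared.getvalue().splitlines()
# With per-event locking, every line is one intact event.
assert len(lines) == 2 * EVENTS
assert all(line.startswith("[19/11/2012] Information: ") for line in lines)
```

Running the same test with the `with lock:` removed should reproduce mangled output like the excerpt above, which would confirm the assumption.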