Oh! I wasn't even aware of the "flowId" property. Nifty.
Original comment by jeff.br...@gmail.com
on 17 May 2009 at 11:07
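[For context on what the flowId attribute looks like on the wire: it is an optional attribute that can be attached to any TeamCity service message so the server can attribute interleaved output to the right logical flow. A minimal sketch follows; the flow and test names are made up, and this is not the actual extension code.]

using System;

static class FlowIdExample
{
    // Escape characters that are special in TeamCity service message values.
    static string Escape(string s)
    {
        return s.Replace("|", "||").Replace("'", "|'")
                .Replace("\n", "|n").Replace("\r", "|r")
                .Replace("[", "|[").Replace("]", "|]");
    }

    static void Main()
    {
        // Messages carrying the same flowId are attributed to the same logical
        // flow even when messages from other flows are interleaved between them.
        string flowId = "worker-1";                    // hypothetical flow name
        string test = "SolrNet.Tests.DocumentBoost";   // hypothetical test name

        Console.WriteLine("##teamcity[testStarted name='{0}' flowId='{1}']",
                          Escape(test), Escape(flowId));
        Console.WriteLine("##teamcity[testFinished name='{0}' flowId='{1}']",
                          Escape(test), Escape(flowId));
    }
}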
Applied with a few modifications.
Thanks!
I'm still not sure whether the flowId will work correctly in all circumstances.
For example, what will happen to our ability to report test suites?
Original comment by jeff.br...@gmail.com
on 17 May 2009 at 11:02
Now that you mention this, I see it doesn't recognize the test assembly.
3.0.6.776 without [Parallelizable]:
http://teamcity.codebetter.com/viewLog.html?buildId=2486&buildTypeId=bt58&tab=testsInfo&guest=1
3.0.6 with my patch:
http://teamcity.codebetter.com/viewLog.html?buildId=2495&buildTypeId=bt58&tab=testsInfo&guest=1
Plus, one of the tests reports bogus timing (7s when it should be ~540ms)
I'll try trunk with the applied patch.
Original comment by mauricio...@gmail.com
on 18 May 2009 at 12:48
The latest v3.0.6 and v3.0.7 builds are likely to give the same output.
I fear that flowId won't solve our problems. It is intended to be used to report
results from completely independent processes, whereas in our case we have
branching processes that nest.
Original comment by jeff.br...@gmail.com
on 18 May 2009 at 1:43
Here is what I am going to do.
I will modify the TeamCityExtension to produce a serial ordering of the test results
in the output instead of using flowId for that purpose. What this means is that
sometimes we will delay presenting test information in order to preserve the correct
nesting structure.
Original comment by jeff.br...@gmail.com
on 18 May 2009 at 2:03
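[For readers following along, the serial-ordering idea can be pictured as buffering service messages for each nested branch and only writing them to the build log once that branch has finished, so the suite/test nesting stays intact. Below is a rough sketch under that assumption; it is not the actual TeamCityExtension code, and SerializingWriter and its methods are made-up names.]

using System;
using System.Collections.Generic;

// Illustrative buffer: messages for a nested branch are queued and only
// flushed to the build log after the branch completes, preserving the
// testSuiteStarted/testSuiteFinished nesting that TeamCity expects.
class SerializingWriter
{
    private readonly Dictionary<string, List<string>> pending =
        new Dictionary<string, List<string>>();

    public void Write(string branchId, string serviceMessage)
    {
        if (!pending.TryGetValue(branchId, out var queue))
            pending[branchId] = queue = new List<string>();
        queue.Add(serviceMessage);
    }

    // Called when a branch (e.g. a fixture) has finished running.
    public void FlushBranch(string branchId)
    {
        if (pending.TryGetValue(branchId, out var queue))
        {
            foreach (var message in queue)
                Console.WriteLine(message);   // emit in original, nested order
            pending.Remove(branchId);
        }
    }
}

[The trade-off is the one described above: test information from a branch that is still running is held back, so it appears in the log later than it actually ran.]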
OK, but IMHO the message ordering should be handled by TeamCity. Otherwise the
test timings can't be trusted.
I'll check with JetBrains about the usage of flowId; there isn't much
documentation about it.
Original comment by mauricio...@gmail.com
on 18 May 2009 at 4:37
We explicitly tell TeamCity what the test timings were by including a duration
attribute.
In any case, the flowId mechanism seems to be intended to tell apart results from
parallel, non-interacting processes, whereas we have more interesting interactions
going on here. For example, fixtures are represented as test suites in the output,
but if we give the fixtures and their tests different flowIds then TeamCity will not
know they are related.
I have just applied a fix in revision 1795 (trunk) and 1796 (v3.0.6).
Original comment by jeff.br...@gmail.com
on 18 May 2009 at 4:58
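[To make the duration point concrete: TeamCity takes the reported test time from the duration attribute (in milliseconds) on testFinished, not from when the messages happen to arrive, so delayed emission need not distort the timings. A small illustrative sketch follows; the test name is made up and escaping is omitted for brevity.]

using System;
using System.Diagnostics;

static class DurationExample
{
    static void Main()
    {
        string name = "SolrNet.Tests.DocumentBoost";   // hypothetical test name
        Console.WriteLine("##teamcity[testStarted name='" + name + "']");

        var stopwatch = Stopwatch.StartNew();
        // ... run the test body here ...
        stopwatch.Stop();

        // The duration attribute (milliseconds) is what shows up as the test time.
        Console.WriteLine("##teamcity[testFinished name='" + name +
                          "' duration='" + stopwatch.ElapsedMilliseconds + "']");
    }
}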
Right, forgot about the duration attribute. I'll give it a try... thanks!
Original comment by mauricio...@gmail.com
on 18 May 2009 at 1:22
How does it work?
Original comment by jeff.br...@gmail.com
on 18 Jul 2009 at 5:02
I have just upgraded to 3.1.238 and it works beautifully :-)
There still seems to be a minor problem with the timings, though. Compare this
test report http://bit.ly/Uv0wg using 3.1.238 with the previous one
http://bit.ly/WQNoc using 3.0.6.787. Even the 3.0.6 timing for the DocumentBoost
test (6s) is very suspicious; it only takes 0.8s or less when run with ReSharper.
Original comment by mauricio...@gmail.com
on 20 Jul 2009 at 4:56
I just ran the test with Icarus and it reports 8s, so it's not the TeamCity
extension's fault.
Not sure why... most of the tests are using Rhino Mocks, which uses Castle
DynamicProxy; maybe something in there is blocking?
Original comment by mauricio...@gmail.com
on 20 Jul 2009 at 5:04
Here's a crazy suggestion.
Try wrapping your test code up in a new thread. Right now MbUnit tends to create
very deep stacks, which causes problems for some tools like the debugger. I'd be
curious to see if Rhino.Mocks performance is also affected. (Shortening stacks is
out of scope for v3.1 but it is one of my goals for v3.2.)
Original comment by jeff.br...@gmail.com
on 20 Jul 2009 at 8:19
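[If it helps to make the suggestion concrete, here is a minimal sketch of wrapping a test body in a fresh thread using plain System.Threading rather than any Gallio-specific helper; ThreadWrapper and RunOnNewThread are made-up names.]

using System;
using System.Threading;

static class ThreadWrapper
{
    // Runs the given action on a fresh thread so it starts from a shallow call
    // stack instead of the deep one built up by the test framework, then
    // rethrows any failure on the calling thread so the test still fails.
    public static void RunOnNewThread(Action testBody)
    {
        Exception failure = null;
        var thread = new Thread(() =>
        {
            try { testBody(); }
            catch (Exception ex) { failure = ex; }
        });
        thread.Start();
        thread.Join();
        if (failure != null)
            throw new Exception("Test body failed on worker thread", failure);
    }
}

[A test could then call something like ThreadWrapper.RunOnNewThread(() => RunDocumentBoostScenario()) (the scenario method is hypothetical) to see whether the Rhino Mocks timing changes when it starts from a shallow stack.]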
Didn't seem to make much difference...
teamcity test log: http://bit.ly/QYy8x
changeset: http://code.google.com/p/solrnet/source/detail?r=424
On Icarus it still says 8s when running all tests. When running that test alone
it's only 0.5s or so.
Let me know if you want me to test something else.
Anyway this isn't the original issue anymore... maybe this should be moved to a
new, lower priority issue?
Also, how can we be sure that this is really a Gallio issue and not something
in Rhino.Mocks or my tests or my code?
Original comment by mauricio...@gmail.com
on 22 Jul 2009 at 2:33
As you suggested, I'm going to mark this issue fixed.
It is sometimes difficult to isolate the specific cause of performance issues like
this because there are many and varied subtle interactions involved. I'm glad it's
not another stack-trace-depth-related issue, but that would have at least given us a
tidy hypothesis to pursue. Feel free to open another issue to follow up on this
separately in v3.2. I hope the performance is acceptable for the time being.
Incidentally, you might take a look at the Gallio.Framework.Tasks static class. It
provides helpers for tests that spawn threads and processes. ;-)
Original comment by jeff.br...@gmail.com
on 22 Jul 2009 at 5:16
Original issue reported on code.google.com by mauricio...@gmail.com
on 17 May 2009 at 4:58