Open petems opened 3 months ago
This is exactly what is happening for our test suites, which run synchronously once we deploy to Salesforce environments. We are actively looking for tools that will help us reduce tech debt and introduce quality solutions across multiple Salesforce instances. We saw DataDog as a provider that might help us achieve that, but as we go along and discover more bugs and solutions that are not ready for our use case, we are strongly considering dropping DataDog in favor of a smaller provider better suited for SF.
Here is what we receive from SF CLI on deployment:
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="force.apex" timestamp="2024-08-30T18:12:57.698Z" hostname="https://someinstance--qa.sandbox.my.salesforce.com" tests="1603" failures="0" errors="0" time="3178.06">
    <properties>
      <property name="commandTime" value="0.00 s"/>
      <property name="failRate" value="0.00%"/>
      <property name="failing" value="0"/>
      <property name="hostname" value="https://someinstance--qa.sandbox.my.salesforce.com"/>
      <property name="orgId" value="00DXXX"/>
      <property name="passRate" value="100%"/>
      <property name="passing" value="1603"/>
      <property name="skipped" value="0"/>
      <property name="testExecutionTime" value="3178.06 s"/>
      <property name="testStartTime" value="Fri Aug 30 2024 6:12:57 PM"/>
      <property name="testTotalTime" value="3178.06 s"/>
      <property name="testsRan" value="1603"/>
      <property name="username" value="xxx.cicd@abc.com"/>
    </properties>
    <testcase name="callingServiceReturnsCorrelationId" classname="ServiceImpTest" time="15.92"/>
    <testcase name="callingServiceReturnsAccountId" classname="AnotherServiceImpTest" time="1.13"/>
  </testsuite>
</testsuites>
The DataDog PR report looks like this:
This is confusing for us as well as for management. Of course you can build a dashboard, but just-in-time access to simple deployment data in the PR would make our lives with DataDog much more accountable.
@petems what are the chances of prioritizing this in the coming month or two?
hey! Sorry for the slow response. We're talking about this internally. Once we have a response we'll let you know.
No problem at all. I am aware of the pitfalls of product roadmap planning. Here are a few valuable points regarding the Salesforce ecosystem and DevOps awareness/limits:
1. with a -sequential flag, DataDog would cover the whole landscape of the Salesforce ecosystem
2. ISVs - usually they use more engineering patterns to recreate their orgs, so here you would see point 4.
3. partners and customers - the majority of the ecosystem - those that just bought a Salesforce license for internal use, or the people helping them to maintain it
4. those will use the Metadata API for deployments, which produces incorrect JUnit XML

Hi @petems @szymon-halik! We've released a change to how we interpret JUnitXML reports to take into account the time field reported by <testsuite> and <testsuites>. The behavior is the following:
- the <testsuite> time field is used for the Datadog test suite duration
- the <testsuites> time field is used for the Datadog test module and test session duration
Thank you for suggesting this, and I hope this makes your experience with JUnitXML reports in Datadog better!
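For illustration (hypothetical suite name and times, not taken from any report in this thread): with a report like the one below, the suite would now get its 12.50 s duration from the <testsuite> time attribute, and the module and session would get 13.00 s from the <testsuites> time attribute, instead of being derived from the longest <testcase>.
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical example: names and times are illustrative -->
<testsuites time="13.00">
  <testsuite name="example.suite" tests="2" failures="0" errors="0" time="12.50">
    <testcase name="firstTest" classname="ExampleTest" time="7.25"/>
    <testcase name="secondTest" classname="ExampleTest" time="5.25"/>
  </testsuite>
</testsuites>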
@ManuelPalenzuelaDD @juan-fernandez that works like a charm! Thank you for delivering that
Bug description
Currently, there is no way to get an accurate "Wall Time" or full test-suite time from any JUnit test upload at the full suite level. It uses the longest testcase time and assumes all testcases are run in parallel.
As far as I know, there is no way to list testcases in JUnit XML as sequential, so using the longest test time will always be inaccurate.
This is mentioned in a caveat in the docs as "Total Test Time is Different Than Expected"
However, a better default behaviour (IMHO) would be to use the given testsuite time for the suite duration. It wouldn't fix the issues in the flame graph or show the tests in the right sequential order, but it would make things like the duration graphs more accurate, which helps with features like the GitHub PR Commenter bot showing regressions/improvements in overall suite time.
Describe what you expected
When I upload a JUnit report, I expect the suite time to match the testsuite time. Instead, the suite time is whatever the longest test (testcase) time is.
Steps to reproduce the issue
Take any JUnit XML test report; here's an example generated with PyTest:
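(The report below is a hypothetical stand-in: test names are invented, and the times are chosen so that the suite time is 5.465 s and the slowest individual testcase is 2.441 s.)
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical pytest-style report: names and times are illustrative -->
<testsuites>
  <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="4" time="5.465" timestamp="2024-08-30T18:12:57" hostname="ci-runner">
    <testcase classname="tests.test_example" name="test_slowest" time="2.441"/>
    <testcase classname="tests.test_example" name="test_medium" time="1.712"/>
    <testcase classname="tests.test_example" name="test_fast" time="0.912"/>
    <testcase classname="tests.test_example" name="test_fastest" time="0.400"/>
  </testsuite>
</testsuites>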
Expected: I should see the test suite as having the length 5.465.
Actual: The suite time is listed as 2.441, the slowest individual test time.
Workaround: You can make a dashboard widget that takes a SUM of the test times to produce the accurate overall test suite time.
Additional context
No response
Command
junit