Closed: amtrack closed this issue 5 months ago
Thank you for filing this issue. We appreciate your feedback and will review the issue as soon as possible. Remember, however, that GitHub isn't a mechanism for receiving support under any agreement or SLA. If you require immediate assistance, contact Salesforce Customer Support.
moved this since it's in the vscode team's apex library
@amtrack we are aware of the issue around volume and additional time to process code coverage. I see that you have already tried bumping max heap space to no avail. I also see that the machine being used for CI/CD is not very robust.
Given the limited memory on the CI/CD machine, our recommendation would be to break up the test runs into smaller groups. The CLI supports running Apex Test Suites, as well as taking a list of Apex test classes to run.
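To illustrate the batching approach, here is a minimal Python sketch that splits a list of test class names into fixed-size groups and builds one `sf apex run test` invocation per group. The class names, batch size, and flag values are illustrative only; adjust them to your project.

```python
# Sketch: split a large list of Apex test classes into smaller batches so
# each `sf apex run test` invocation stays within the CI machine's memory.
# Class names and batch size below are made up for illustration.

def batch_test_commands(test_classes, batch_size=50):
    """Yield one CLI command string per batch of test classes."""
    for i in range(0, len(test_classes), batch_size):
        chunk = test_classes[i:i + batch_size]
        yield ("sf apex run test --code-coverage --wait 60 "
               "--class-names " + ",".join(chunk))

classes = [f"Test{i}" for i in range(120)]
for cmd in batch_test_commands(classes, batch_size=50):
    print(cmd)
```

Each batch produces its own coverage result, so the results need to be merged afterwards if an overall number is required.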
Thank you for the example project that challenges both the time and space dimensions of this issue. We also use similar configurations for our testing. The largest of those produces almost a million code coverage records and requires a heap space of 32GB.
We are looking into alternatives to help minimize the memory consumption.
@peternhale Thanks for your detailed response! Good to know that you are already exploring this topic.
I'm wondering if you at Salesforce could eventually make the ApexCodeCoverage
object available for the Bulk API / Bulk API 2.0. That could probably solve both challenges of the time and space dimension (at least on the client).
Apparently this is possible for the MetadataComponentDependency Tooling API object:
Using Bulk API 2.0, you can query the MetadataComponentDependency Tooling API object and retrieve up to 100,000 records in a single query.
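For reference, a Bulk API 2.0 query is submitted by creating a query job. The sketch below only builds the JSON request body for such a job; the endpoint path version, authentication, and result polling are omitted, and whether a Tooling API object like MetadataComponentDependency is actually accepted is exactly the open question here.

```python
import json

# Sketch of a Bulk API 2.0 query-job request body. Posting this to
# /services/data/vXX.0/jobs/query creates the job; auth headers and the
# concrete API version are omitted. Whether Tooling objects such as
# MetadataComponentDependency are accepted is the open question above.
payload = {
    "operation": "query",
    "query": (
        "SELECT MetadataComponentName, RefMetadataComponentName "
        "FROM MetadataComponentDependency"
    ),
}

body = json.dumps(payload)
print(body)
```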
@amtrack It would be nice to have the ApexCodeCoverage object available in Bulk API 2.0. I don't believe either of the bulk endpoints supports Tooling API objects.
As for improving fetch time: the Coverage field is a blob of sorts that stores two arrays, covered and uncovered lines. When it is included in a query, fetch times go up dramatically due to the field's complexity.
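To illustrate what a client does with that blob, here is a hedged Python sketch of a Coverage value and how its two line arrays become a percentage. The record is fabricated; the coveredLines/uncoveredLines key names follow the Tooling API's JSON shape as I understand it.

```python
# Sketch: the ApexCodeCoverage.Coverage field holds two arrays of line
# numbers. The record below is fabricated for illustration; real records
# come from a Tooling API query on ApexCodeCoverage.
record = {
    "ApexClassOrTrigger": {"Name": "AccountService"},
    "Coverage": {"coveredLines": [1, 2, 3, 5], "uncoveredLines": [4, 6]},
}

def coverage_percent(coverage):
    """Percentage of lines covered, given the two line-number arrays."""
    covered = len(coverage["coveredLines"])
    uncovered = len(coverage["uncoveredLines"])
    total = covered + uncovered
    return 100.0 * covered / total if total else 100.0

print(round(coverage_percent(record["Coverage"]), 1))  # prints 66.7
```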
I don't believe either bulk of the endpoints support tooling objects.
I honestly believe the same but the docs say the Bulk API 2.0 does support the MetadataComponentDependency
Tooling API object.
I wanted to test it myself but quickly failed:
$ sf data query -q "SELECT Id FROM MetadataComponentDependency" --use-tooling-api --bulk
Error (2): The following errors occurred:
--bulk=true cannot also be provided when using --use-tooling-api
--use-tooling-api=true cannot also be provided when using --bulk
See more help with --help
Looking at the code in the plugin-data
repo, I found that it isn't as easy as just removing the flag constraint.
I gave up for now, but I'm wondering why the docs would make such a claim.
The errors you are seeing are from the CLI query command, rather than from the Bulk API. As of now, bulk and tooling don't mix.
@amtrack have you made any progress in getting the tests w/coverage to run on your CI/CD machine?
@peternhale thanks for asking! Unfortunately 8GB RAM was not enough and scaling up the paid cloud infrastructure even more only because of this is not an option for us.
After trying lots of things and reading https://developer.salesforce.com/blogs/developer-relations/2012/11/how-code-coverage-works we're probably going to switch to Aggregated Code Coverage.
This works fine in terms of speed and memory.
I think there is a small bug in sf apex test run
when Aggregated Code Coverage is enabled: the coverage per class isn't queried or printed.
I'll create an issue when I find time.
But I was able to query ApexCodeCoverageAggregate
for the few hundred classes with a simple and fast Tooling API query.
And that seems to give us the info we need.
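For anyone following the same route, here is a small Python sketch of computing per-class percentages from ApexCodeCoverageAggregate query results. The sample records are made up; NumLinesCovered and NumLinesUncovered are the fields the object exposes, as far as I can tell.

```python
# Sketch: per-class coverage from ApexCodeCoverageAggregate records, as
# returned by a Tooling API query. The records below are fabricated.
records = [
    {"ApexClassOrTrigger": {"Name": "AccountService"},
     "NumLinesCovered": 90, "NumLinesUncovered": 10},
    {"ApexClassOrTrigger": {"Name": "OrderService"},
     "NumLinesCovered": 30, "NumLinesUncovered": 70},
]

def percent(rec):
    """Coverage percentage for one aggregate record."""
    total = rec["NumLinesCovered"] + rec["NumLinesUncovered"]
    return 100.0 * rec["NumLinesCovered"] / total if total else 100.0

for rec in records:
    print(rec["ApexClassOrTrigger"]["Name"], round(percent(rec), 1))
```

Since each aggregate record is one small row per class, this avoids pulling the heavy Coverage blob entirely.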
@amtrack Thanks for getting back to me. We do have work on backlog to try to better handle large test runs. Please file the issue you mentioned when you have the details gathered.
I would like to close this issue if it is ok with you.
@peternhale OK, I'm looking forward to testing it again when you have better handling of large test runs implemented.
@peternhale I've filed an issue https://github.com/forcedotcom/salesforcedx-vscode/issues/5599 pointing to a Git repository reproducing the issue.
Summary
Retrieving the code coverage for a large codebase is very slow and uses a lot of RAM.
In our case, the unit tests only take 20 minutes to run, but retrieving the code coverage takes 30 minutes and even fails with a "JavaScript heap out of memory" error in some environments.

Steps To Reproduce
Create a Scratch Org and deploy some generated Apex Classes (see this repository).
Run the unit tests:
Expected result
After the unit tests have finished, I expect it will not take more than 5 minutes to retrieve the code coverage.
Actual result
It either takes very long (30 minutes) to retrieve the code coverage or even fails with a "JavaScript heap out of memory" error in some environments.

System Information
Additional information
This is probably a consequence of fixing an issue with incomplete code coverage reports, which was released in
@salesforce/cli
v2.22.7. We didn't notice the incomplete coverage reports, but I can now reproduce this with an old version.

Debug logs from a Mac when retrieving the code coverage succeeds but takes 30 minutes:

"JavaScript heap out of memory" error on a CI machine with 8GB RAM and heap size increased from the 2GB default to 4GB using NODE_OPTIONS=--max_old_space_size=4096.