Baunsgaard closed this issue 4 years ago.
Details: https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions
Each job in a workflow can run for up to 6 hours of execution time. Each workflow run is limited to 72 hours. The number of concurrent jobs you can run in your account depends on your GitHub plan.
**EDIT:** If you fork the project, your fork will use your own GitHub plan's resources. For instance, my fork has access to 20 concurrent 2-core machines. With the current scheduling it takes ~40 min to run all tests.
Link to see how it looks once running: https://github.com/Baunsgaard/systemds/actions
| GitHub plan | Total concurrent jobs | Maximum concurrent macOS jobs |
| --- | --- | --- |
| Free | 20 | 5 |
| Pro | 40 | 5 |
| Team | 60 | 5 |
| Enterprise | 180 | 50 |
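The limits above can also be stated explicitly in the workflow file. A minimal sketch, not taken from this PR (the job name, matrix entries, and the `-Dtest-suite` flag are hypothetical; `timeout-minutes` and `strategy.max-parallel` are standard workflow-syntax keys):

```yaml
# Hypothetical sketch: bounding job runtime and matrix concurrency.
jobs:
  component-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 120        # fail fast, well below the 6-hour hard limit
    strategy:
      max-parallel: 20          # stay within the Free plan's 20 concurrent jobs
      matrix:
        suite: [functions, tensor, applications]    # hypothetical test split
    steps:
      - uses: actions/checkout@v2
      - run: ./src/test/scripts/installDependencies.R
      - run: mvn test -Dtest-suite=${{ matrix.suite }}   # hypothetical flag
```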
I just had a first look into the PR and into your workflow test reports. Can the unfinished tests be ignored/omitted in the test runs? If an unimplemented test makes the overall result a failure upon pushing a commit, that is a misleading result. I also just ran the tensor sum test locally and it went fine. So why can't the script in the workflow test read matrix A, and how can we debug that?
I made the tensor tests fail because they were not finished.
The tests as they currently stand in master pass, yes. BUT they only verify that the TensorBlocks are constructed, not what their content is. This means, for instance, that if you construct a TensorBlock intending it to have some content, the test can still pass even if the tensor actually contains all 0's.
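To illustrate the difference, here is a self-contained sketch in plain Java (not the actual SystemDS TensorBlock API; `makeBlock` is a hypothetical stand-in): a construction-only check passes even when the intended content was never written, while a content check catches the all-zero block.

```java
// Hypothetical sketch: why construction-only tests can hide all-zero content.
public class TensorContentCheck {
    // stand-in for constructing a dense block with intended fill value
    static double[] makeBlock(int n, double fill) {
        double[] b = new double[n];
        // bug: 'fill' is never written, so the block silently stays all zeros
        return b;
    }

    public static void main(String[] args) {
        double[] b = makeBlock(4, 7.0);
        // construction-only check: passes despite the wrong content
        boolean constructed = (b != null && b.length == 4);
        // content check: fails, revealing the all-zero block
        boolean contentOk = true;
        for (double v : b) contentOk = contentOk && (v == 7.0);
        System.out.println("constructed=" + constructed + " contentOk=" + contentOk);
    }
}
```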
Alright, I remember now that we talked about this. Yes, you are right that these tests should fail. For the release, though, it would be good to have all tests pass (imho).
LGTM - thanks for this effort @Baunsgaard. These workflows will be very useful for the entire team.
I only resolved the merge conflicts, fixed a typo in the tasks file, and reverted the handling of the tensor tests. Without the prints (or writes), all operations of these tests are dead-code eliminated. For now, I made them pass again, as they at least test that the related compilation and runtime paths do not throw any exceptions. These tests were introduced when no tensor readers/writers existed. I guess it's fine to accept some imperfections while we're incrementally adding the necessary functionality.
Also, I'm merging this in, fully aware that this will break the build - but at least it will force us to fix the remaining issues in a timely manner.
./src/test/scripts/installDependencies.R
FAILING TESTS on Workflows: