LukasHedegaard closed this issue 3 years ago.
Checking the GitHub Actions history for two branches not yet merged into master at that point, we see the following results:
May 12th: (screenshot of the Actions run times)
May 18th: (screenshot of the Actions run times)
Was there a direct commit to master (not tested via GitHub Actions through a PR) that could have drastically slowed down the tests? Alternatively, a slowdown of the OpenDR FTP server could also be the root cause, since many methods rely on downloading test datasets from it.
Could commit f0104ab012612350b0f852283aac08e3fdfc1c06 from May 14th be the issue, @passalis? The associated PR #73 did not show issues with test time, but it is the only commit to master between May 12th and May 18th, as far as I can see.
Is it possible to add timers to the tests and log their running times, so that we can check which tests should be optimized?
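One lightweight way to get such per-test timings could be a small `unittest.TestCase` base class that measures each test method. This is only a minimal sketch, not part of the current suite; the class names below are illustrative.

```python
import time
import unittest


class TimedTestCase(unittest.TestCase):
    """Illustrative base class that prints the wall-clock time of each test."""

    def setUp(self):
        self._t0 = time.perf_counter()

    def tearDown(self):
        elapsed = time.perf_counter() - self._t0
        print(f"{self.id()}: {elapsed:.2f}s")


class TestExampleTool(TimedTestCase):
    """Hypothetical test case; real tests would subclass TimedTestCase the same way."""

    def test_something(self):
        self.assertTrue(True)  # placeholder for the real test body


if __name__ == "__main__":
    unittest.main()
```

Alternatively, if the suite is (or can be) run through pytest, `pytest --durations=0` prints per-test times without any code changes.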
> Could commit f0104ab from May 14th be the issue, @passalis? The associated PR #73 did not show issues with test time, but it is the only commit to master between May 12th and May 18th, as far as I can see.
This commit didn't change anything in the tests, and I am fairly sure my tests are minimal and run fast enough compared to the time needed for all the tests. I don't think the overall test runtime is slowed down by this PR/commit.
I performed a simple benchmark of the tests on my PC (with a GPU):
| Test | Time |
|---|---|
| Total time for all tests | 5m36.790s |
| activity_recognition | 3m27.859s |
| object_detection_3d | 1m4.311s (these tests do not actually run; `__init__.py` seems to be missing) |
| object_tracking_2d | 43.588s |
| face_recognition | 26.824s |
| pose_estimation | 13.269s |
| object_tracking_3d | 6.413s |
| speech_recognition | 1.264s |
I think all tests that take more than one minute should be revised to consume less time. Please keep in mind that the test time is considerably larger on GitHub, since we are probably using CPU-only machines with a small number of cores (maybe we are allocated just one core).
Also, the tests for object_detection_3d do not seem to run (we are missing an `__init__.py` file in https://github.com/tasostefas/opendr_internal/tree/master/tests/sources/tools/perception/object_detection_3d/voxel_object_detection_3d). Therefore, the actual total runtime after fixing this would be even larger.
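For reference, a minimal sketch of the fix: unittest's default test discovery typically skips directories that are not packages, so adding an empty `__init__.py` should be enough to make these tests run. The path below is the one from the linked repository directory.

```python
# Create the missing (empty) __init__.py so that test discovery picks up
# the voxel_object_detection_3d tests.
from pathlib import Path

test_dir = Path(
    "tests/sources/tools/perception/object_detection_3d/voxel_object_detection_3d"
)
test_dir.mkdir(parents=True, exist_ok=True)
(test_dir / "__init__.py").touch()  # an empty file is sufficient
```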
Looks good, @passalis. Did your tests include the download time for various pretrained models and datasets?
Yes, download time is included. Of course, this does not exclude the possibility of a slow link between the GitHub servers and our server. Such slowdowns are indeed known to happen from time to time and usually resolve on their own. However, given that the allocated machine might be in a different server farm each time, it might be difficult to reproduce the issue if it happens only sporadically.
Therefore, my first suggestion would be to try to shorten the duration of the tests, if possible. I think most of the time is spent on training tests, so perhaps a simple solution is to make the training tests very lightweight. If this does not work, we can then check other options.
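As a minimal sketch of what such a lightweight training test could look like: train for a single iteration on a tiny batch and only check that `fit` completes. The learner class and its parameter names below are illustrative, since the exact API differs between tools.

```python
import unittest


class DummyLearner:
    """Stand-in for an OpenDR learner; a real test would import the actual class."""

    def __init__(self, iters=1, batch_size=1):
        self.iters = iters
        self.batch_size = batch_size
        self.trained = False

    def fit(self, dataset):
        # A real learner would run `self.iters` optimisation steps here.
        self.trained = True


class TestTrainingSmoke(unittest.TestCase):
    def test_fit_completes(self):
        # Keep the training part of the test tiny: one iteration, one sample.
        learner = DummyLearner(iters=1, batch_size=1)
        learner.fit(dataset=[0])
        self.assertTrue(learner.trained)


if __name__ == "__main__":
    unittest.main()
```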
Fixed in PR #81
There has been a significant slowdown of the tests over the past few weeks. Thanks to @iliiliiliili and @negarhdr for noticing this. Can this be alleviated, or is it a result of the additional method integrations?