Hi Julian,

Could you please review the code and let me know if there's anything you would like me to change?
Summary of the changes:
Added the script `utils/measure_performance.py` --> it captures a function's execution time, memory usage, and CPU usage by recording these metrics before and after the specified function is called.
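The script itself isn't shown in this summary; here is a minimal sketch of what such a helper might look like using only the standard library (`time.perf_counter` for wall time, `time.process_time` for CPU time, `tracemalloc` for memory). The function name, the `last_metrics` attribute, and the unit choices are assumptions for illustration, not the actual implementation in the PR:

```python
import time
import tracemalloc
from functools import wraps

def measure_performance(func):
    """Wrap *func* and record execution time, peak memory, and CPU time.

    The metrics of the most recent call are attached to the wrapper as
    ``wrapper.last_metrics`` so a caller (e.g. a comparison script) can
    collect them after each run.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        cpu_start = time.process_time()    # CPU time before the call
        wall_start = time.perf_counter()   # wall-clock time before the call
        try:
            return func(*args, **kwargs)
        finally:
            wall_end = time.perf_counter()
            cpu_end = time.process_time()
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            wrapper.last_metrics = {
                "execution_time_s": wall_end - wall_start,
                "peak_memory_mb": peak / 1_000_000,
                "cpu_time_s": cpu_end - cpu_start,
            }
    return wrapper

@measure_performance
def busy_work(n):
    return sum(i * i for i in range(n))

busy_work(100_000)
print(busy_work.last_metrics)
```

A decorator keeps the measurement logic out of the functions being compared, so the same wrapper can be applied to both candidates unchanged.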
Added the script `tests/performance_comparison.py` --> I decided to create a separate script for the performance comparison of the two functions `detect_test_runners` and `detect_test_runners2` rather than merging it into `src/github_repo_request_local.py`. The metrics (execution time, memory usage, CPU usage, etc.) are saved in DataFrame 1 and DataFrame 2, named after the corresponding functions.
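As a rough sketch of how such a comparison script could gather one DataFrame per function: the stand-in functions, inputs, and column name below are assumptions for illustration; the real `detect_test_runners` / `detect_test_runners2` live in `github_repo_request_local.py` and are not reproduced here.

```python
import time
import pandas as pd

# Placeholder stand-ins for the two functions under comparison.
def detect_test_runners(files):
    return [f for f in files if "test" in f]

def detect_test_runners2(files):
    return [f for f in files if f.startswith("test_")]

def collect_metrics(func, inputs):
    """Run *func* on each input and return the per-run metrics as a DataFrame."""
    rows = []
    for item in inputs:
        start = time.perf_counter()
        func(item)
        rows.append({"execution_time_s": time.perf_counter() - start})
    return pd.DataFrame(rows)

inputs = [["test_a.py", "b.py"], ["c.py"], ["test_d.py", "test_e.py"]]
df1 = collect_metrics(detect_test_runners, inputs)   # "DataFrame 1"
df2 = collect_metrics(detect_test_runners2, inputs)  # "DataFrame 2"
```

Keeping one DataFrame per function makes the side-by-side statistics below straightforward to compute.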
I ran some basic analysis on the results of this comparison:
Execution time:
Mean: The average execution time for DataFrame 1 is slightly higher at about 0.094 seconds compared to DataFrame 2, which averages about 0.081 seconds. This indicates that the function associated with DataFrame 2 might be more efficient in terms of execution speed.
Max: The maximum execution times are quite close, with DataFrame 1 at about 3.65 seconds and DataFrame 2 at about 3.54 seconds, suggesting that both functions can occasionally experience similar delays.
Memory usage (MB):
Both datasets show a wide range in memory usage, from a significant decrease to a notable increase, indicating variable memory performance across different runs.
Mean: DataFrame 1 has a slightly negative average memory delta, suggesting net deallocation or more efficient memory use, while DataFrame 2 shows a slightly positive mean, indicating a small net allocation on average.
CPU usage:
DataFrame 2 shows slightly higher idle time on average, suggesting the corresponding function may be slightly more efficient in terms of CPU usage.
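Mean/max figures like those above fall directly out of pandas' aggregation methods. A small illustrative example (the column name and the sample values are placeholders, not the measured data):

```python
import pandas as pd

# Toy per-run timings standing in for the real measurements.
df1 = pd.DataFrame({"execution_time_s": [0.09, 0.08, 3.65]})
df2 = pd.DataFrame({"execution_time_s": [0.08, 0.07, 3.54]})

for name, df in [("detect_test_runners", df1), ("detect_test_runners2", df2)]:
    stats = df["execution_time_s"].agg(["mean", "max"])
    print(f"{name}: mean={stats['mean']:.3f}s max={stats['max']:.2f}s")
```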
Transferred the `test_runners` dictionary and related functions to `github_repo_request_local.py`
Introduced `Node` and `TestCategory` enums to standardise node names.
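The enum definitions aren't reproduced in this summary; a minimal sketch of how such enums standardise node names follows. The member names and values here are illustrative assumptions, not the actual members in the PR:

```python
from enum import Enum

class Node(Enum):
    """Standardised node names (illustrative members only)."""
    ROOT = "root"
    TEST_DIR = "test_dir"

class TestCategory(Enum):
    """Illustrative test-runner categories."""
    PYTEST = "pytest"
    UNITTEST = "unittest"

# Using an enum member instead of a bare string means a typo in a node
# name fails loudly (AttributeError) rather than silently mismatching.
print(Node.TEST_DIR.value)  # "test_dir"
```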
Thanks
closes #69 closes #58