Closed · 2lambda123 closed this 8 months ago
Unable to locate .performanceTestingBot config file
It seems you are using me, but OPENAI_API_KEY is not set in Variables/Secrets for this repo. You can follow the README for more information.
Processing PR updates...
Thanks @2lambda123 for opening this PR!
For COLLABORATORS only:
- To add labels, comment on the issue: `/label add label1,label2,label3`
- To remove labels, comment on the issue: `/label remove label1,label2,label3`
👋 Figuring out if a PR is useful is hard; hopefully this will help.
- @2lambda123 has been on GitHub since 2019 and in that time has had 1869 public PRs merged
- Don't you recognize them? They've been here before 🎉
- Here's a good example of their work: HeyGenClone
- From looking at their profile, they seem to be good with Shell and Java.
- Their most recent accepted public PR is: https://github.com/2lambda123/pytorch-serve/pull/11
Description has been updated!
PR Details of @2lambda123 in pytorch-serve:

| OPEN | CLOSED | TOTAL |
|---|---|---|
| 11 | 1 | 12 |
This change simplifies error handling in the continuous integration workflows by removing the steps that automatically opened issues when tests failed during scheduled runs. It affects both the CPU and GPU regression test workflows, leaving issue tracking to developers or other mechanisms.
| Files | Change Summary |
|---|---|
| .github/workflows/regression_tests_cpu.yml | Removed automatic issue creation on failure |
| .github/workflows/regression_tests_gpu.yml | Removed automatic issue creation on failure |
🐇✨
In the realm of code where rabbits dare to hop,
Changes were made, unnecessary steps, chop, chop!
No more issues from the sky, automatically drop.
Now, we leap and bound, without a stop,
Celebrating the workflows that just won't flop.
🚀🌟
Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?
This pull request reverts a previous commit that added a feature to automatically open a GitHub issue when a regression test fails during a scheduled run. The feature was implemented in both the CPU and GPU regression test workflows. With it removed, failed regression tests will no longer create issues automatically, so manual monitoring or an alternative mechanism will be needed to track and address regressions.
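For reference, automatic issue creation on workflow failure is commonly implemented with `actions/github-script`. The step below is a hypothetical sketch under that assumption, not the exact step that was removed from the pytorch-serve workflows.

```yaml
# Hypothetical sketch of the kind of step that was removed (not the exact
# pytorch-serve implementation). The job would need `permissions: issues: write`.
- name: Open issue on scheduled-run failure
  if: failure() && github.event_name == 'schedule'
  uses: actions/github-script@v7
  with:
    script: |
      // File a tracking issue that links to the failed workflow run.
      await github.rest.issues.create({
        owner: context.repo.owner,
        repo: context.repo.repo,
        title: `Regression tests failed in ${context.workflow}`,
        body: `Failed run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
      });
```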
If the automatic issue creation was causing problems (e.g., spamming with too many issues, creating issues for flaky tests, etc.), it might be beneficial to refine the conditions under which an issue is created. For example, issues could be created only after a test has failed consecutively several times, or only for certain critical tests. Alternatively, a different notification system could be implemented, such as sending alerts to a dedicated Slack channel or email list, to reduce noise on the GitHub issues page.
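As one possible alternative, failure notifications could be routed to a Slack channel instead of the issue tracker. The step below is a minimal sketch assuming an incoming-webhook URL stored in a hypothetical `SLACK_WEBHOOK_URL` repository secret; neither the step nor the secret is part of this PR.

```yaml
# Hypothetical alternative: post scheduled-run failures to Slack instead of
# opening GitHub issues. SLACK_WEBHOOK_URL is an assumed repository secret.
- name: Notify Slack on scheduled-run failure
  if: failure() && github.event_name == 'schedule'
  run: |
    curl -sf -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"Scheduled regression tests failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"
```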
Check out the playback for this Pull Request here.
Description
In this pull request, changes have been made to the GitHub Actions workflows for regression tests on CPU and GPU environments. The section responsible for opening an issue on failure has been removed from both workflows.
Summary by CodeRabbit