**Closed** (realkosty closed this 1 week ago)
@realkosty, hmm, does this mean the `codecov.yml` check is going to fail on every single PR?

Does this 'false negative', which I can see us starting to ignore on every run, risk obscuring actual future problems in the Codecov test run? 🤔

That said, I understand not wanting to maintain fake unit tests... let's talk more about this.
> @realkosty, hmm, does this mean the `codecov.yml` check is going to fail on every single PR?
Yes, after the first time any existing test starts failing.
> Does this 'false negative', which I can see us starting to ignore on every run, risk obscuring actual future problems occurring in the Codecov test run? 🤔
Yes. The way it's implemented, if the test run command `npm test -- --coverage` fails for ANY reason, the deployment still succeeds. We could mitigate that by adding a check after this step that the coverage report was actually generated.
@cstavitsky Maybe this issue won't come up very often and isn't worth our time? Ideally we probably want tests for code that doesn't get changed a lot.

What's the benefit of having our Codecov demo use the same repo as React instead of some random sample repo, btw?
> What's the benefit of having our codecov demo use the same repo as React instead of some random sample repo btw?
IIRC the vision was to incorporate other Codecov features into the PR and possibly have the PR serve as an entry point into the Sentry demo as well (part of the platform vision). Although I'm not sure how much overlap there has been between a Codecov POC and a Sentry POC for the same customer. We should discuss with @ndmanvar.
Discussed offline; we decided we will just change all assertions to `assert(true)` instead of making this change. IMO we should not have to maintain fake unit tests unless this is important for the Codecov demo.
Context: https://github.com/sentry-demos/empower/pull/577