Closed: galczo5 closed this issue 3 years ago.
Hey! Thanks for the question. What did you think those constraints would look like? To me it sounds more like what you would use the `goal` function for (the docs are a little out of date for my current dev version, but should be right for v4).
So you might have something like:
```ts
export default {
  'check something': {
    test: () => doTheTest(),
    constraint: smaller,
    goal: (result: number) => result > 0 && result < 10
  }
};
```
Not exactly. In our case we can't update the results (the current status) on every commit, so we want to be able to update them periodically, say every week or every few days, in our CI.
IMO it should look like:
```ts
export default {
  'check something': {
    test: () => doTheTest(),
    constraint: smallerThan(10),
    goal: 0
  }
};
```
Hmm, I'm still not sure that I understand. What would `smallerThan` look like? What determines better vs. worse vs. same?
Yup, this is what I wanted to add. In this case there is no better or worse; we would like to check for 'good' and 'bad'.

For example, we want to run a time-consuming test. We can't expect our devs to run it on every commit, so we want to run it in our CI process and check the numbers: if they are lower than or equal to the budget (good), CI passes; if they are higher (bad), CI fails.

At short intervals, say weekly, we want another CI job to shrink the 'budgets', changing `smallerThan(10)` to `smallerThan(5)` and so on.
So to sum up: we are making incremental changes, but we want to set the budgets manually. I believe these constraints are relatively small and easy to implement, so I could create a pull request for them, if you want these constraints in the `@betterer/constraints` package, of course.
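To make the idea concrete, the budget-style constraint being described might be sketched like this. The `ConstraintResult` type and its string values are illustrative assumptions borrowed from the "better vs. worse vs. same" wording above, not Betterer's actual API:

```typescript
// Hypothetical sketch of a budget-style constraint factory.
// Assumption: a constraint classifies a result with a simple string;
// the real Betterer constraint signature may differ.
type ConstraintResult = 'better' | 'worse' | 'same';

function smallerThan(budget: number) {
  // The previous result is deliberately ignored: only the
  // manually-set budget decides whether CI passes or fails.
  return (result: number): ConstraintResult =>
    result < budget ? 'better' : 'worse';
}

// Shrinking the budget later is just a matter of swapping the argument:
const thisWeek = smallerThan(10);
const nextWeek = smallerThan(5);
```

The key design point is that the factory closes over a fixed budget, so tightening it is a one-line config change rather than an update to a results file.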
Okay, I think I kind of understand, but if you just want to run a test in CI and pass or fail, then I don't think that Betterer is the right tool for that. Let me try to explain why:
Betterer captures the current state of your codebase in the `.betterer.results` file, and the results file has to be the source of truth. If the results file isn't updated in every pull request, then it no longer represents your codebase, and using it as a comparison point doesn't make sense. Betterer only makes sense when a test is run locally. If the test is only run in CI, then your CI builder would need to be responsible for updating the `.betterer.results` file and committing the changes, and it's a pretty bad idea to have your CI builder create a commit.

What is the test doing that makes it so prohibitively slow? For that matter, what do you expect a contributor to do if it fails in CI? Won't they have to run the test locally to see why it failed?
I have some thoughts as to why budgets don't make sense even if you are able to commit to the results file with every change, but let's try to figure out that first part first! I'm happy to help and see if we can get it working. Otherwise I suggest using another tool to run the test and updating the expectation manually, so you don't have to worry about the results file or any constraint functions.
Ok, I think I understand it now, and I have an idea of how to introduce it into our codebase right away. Thanks!
Tests are slow because we have a lot of heavy linting and so on. We are trying to make them faster, but that will take some time.
I'll be back if I find any problems introducing betterer. Thanks! 👍
The V5 release should come soon, which should make any lint rules very fast (they run in parallel and only check changed files).
Wait, so there will be a snapshot for the whole project, and Betterer will increase/decrease results only for changed files? Do I understand that right?
Yes, exactly. There's a basic version of it in V4 (if you pass the `--cache` flag when you run Betterer), but it'll be much more reliable in V5.
Hi, I would like to add a few more constraints, like `biggerThan`, `smallerThan`, `between`, `equal`, and `notEqual`. Is it ok with you to add these to `@betterer/constraints`?
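For reference, the whole proposed family could be sketched as below, under the same assumption used earlier in the thread that a constraint classifies a result as good or bad with a simple string. These are illustrative sketches, not the real `@betterer/constraints` API:

```typescript
// Illustrative sketches of the proposed budget-style constraints.
// Assumption: each factory returns a function that classifies a result
// as 'better' (good) or 'worse' (bad) against fixed, manually-set bounds.
type ConstraintResult = 'better' | 'worse';

const biggerThan = (budget: number) =>
  (result: number): ConstraintResult =>
    result > budget ? 'better' : 'worse';

const smallerThan = (budget: number) =>
  (result: number): ConstraintResult =>
    result < budget ? 'better' : 'worse';

const between = (min: number, max: number) =>
  (result: number): ConstraintResult =>
    result >= min && result <= max ? 'better' : 'worse';

const equal = (expected: number) =>
  (result: number): ConstraintResult =>
    result === expected ? 'better' : 'worse';

const notEqual = (expected: number) =>
  (result: number): ConstraintResult =>
    result !== expected ? 'better' : 'worse';
```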