Open irapha opened 7 years ago
smol note: bazel would let us do that.
@joshuamorton if you really want bazel I'm ok with it but like you'd have to make the PR yourself
At this point call it a slightly more serious "refactor to use lisp".
I'm ready to stop any and all lisp PRs.
Ok so actually we'd probably want to use something like this: https://circleci.com/docs/2.0/workflows/, but I'm not sure how it plays with DoCIF.
@joshuamorton this is actually a fairly complicated issue.
Originally, based on early descriptions of circleci 2.0, it seemed like I could keep the same workflow DoCIF provides (building a baseimage for caching so build performance is much improved), so I wanted to build a 2.0 backend for DoCIF. However, circleci 2.0 ended up being a lot less powerful than I originally anticipated, which made that impossible to accomplish on 2.0.
Very late in the 2.0 beta, circleci introduced workflows, which solved half the issues I had with 2.0 but created many new ones. In my opinion, 2.0 is a limited but polished product, while workflows is more of an alpha pushed out to release early. Despite this, I do think workflows is interesting, and I use it on a few of my projects.
For example, workflows do not support forked pull requests at all, and they break email notifications (repeatedly sending failure and pass emails), which rules out most of my personal use cases.
For your case, which currently does not use any of the baseimage caching features DoCIF provides, your best bet is probably to switch to circleci 2.0 without workflows. If you would like baseimage caching and easy local builds, you're pretty much forced to stick with 1.0 and DoCIF. Workflows might be interesting someday, but it's pretty broken right now.
I don't think that running only the tests related to the current change is a goal you need at your current stage. Even for extremely large companies it's important to run all tests on all commits to make sure no complex interactions between dependencies are missed. Most of the time selective testing is done to improve developer experience by reducing CI time, but since most of your CI time is spent installing dependencies (not actually running tests), your CI time can be vastly improved by building a custom baseimage (to be manually updated) via 2.0, or by using DoCIF's caching feature (which auto-updates images).
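For reference, a minimal sketch of what the custom-baseimage approach could look like on circleci 2.0. The image name and test command here are placeholders (assumptions for illustration, not the project's real values); the idea is that the image with dependencies preinstalled is built and pushed manually whenever dependencies change, so CI skips the install step entirely:

```yaml
# Hypothetical circleci 2.0 config using a manually maintained baseimage.
version: 2
jobs:
  build:
    docker:
      # Placeholder image name: a custom baseimage with all deps preinstalled,
      # rebuilt and pushed by hand whenever dependencies change.
      - image: yourorg/buzzmobile-base:latest
    steps:
      - checkout
      # Deps are already baked into the image, so tests start immediately.
      - run: python -m pytest
```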
Even for extremely large companies it's important to run all tests on all commits to make sure no complex interactions between dependencies are missed.
As someone working at a Very Large Company, this isn't exactly true: running all tests on all commits would significantly slow development. But the greater point stands; this is a very premature optimization.
As someone working at a Very Large Company, this isn't exactly true.
I guarantee that every competent large company is running all tests on all commits at some point in their build process. While this might be hidden from developers, or developers might turn it off for local testing, or it might run only before merging into source control to speed things up, there are always cases the build system simply cannot predict. As a trivial example: if test A runs code that deletes dependencies for an unrelated project that happens to be colocated on the same host, it will break that project despite the breakage being painfully easy to catch via its tests.
If you are talking about google (since they seem to be the largest software company right now), their testing solution is not one to strive for (imo at least), since most of their products are painfully buggy and regress often. For example, hangouts recently added a new forced popup on startup for me, which, when closed, freezes all of hangouts, making it completely unusable. The official homepage (google.com) currently has a bug with the javascript spell check notifier where it flickers on/off at random intervals and places. Both of these bugs (and many, many others I've encountered) could be attributed to not running all tests on all commits, since they are complex interactions between different services (hangouts plugin/hangouts js; google search backend/google spellcheck/google search js), yet they are painfully obvious to an external observer or a trivial end-to-end test (in a non-dependent project). There are much better tested examples of software projects out there to learn from (which have public test infrastructure too)!
At any rate, copying someone else's <X> only places limits on what you <X>. Build a solution that works well for you (regardless of how it works for others) and you'll be better off (with a much more stable and reliable buzzmobile) in the long run :smile:
I guarantee that every competent large company is running all tests on all commits at some point in their build process.
I can guarantee that this becomes computationally infeasible past a certain scale, but that's mostly beside the point here, like I said. If you want to have a longer conversation about testing methodology and infrastructure (at google), feel free to PM me.
I believe DoCIF can easily give us the list of files changed in the current PR, and from that we can find the unit tests affected by the change.
This might get complicated if we decide to add integration tests.
Still, I'm leaving this issue open to encourage investigation on doing this, but it's very low priority
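If anyone picks this up: a minimal sketch of the file-to-test mapping, assuming tests are colocated with code and named `test_<module>.py` (that naming convention is an assumption for illustration, not something DoCIF or this repo guarantees):

```python
import os

def affected_tests(changed_files):
    """Map a list of changed file paths to the colocated unit tests to run.

    Assumes the hypothetical convention that pkg/mod.py is covered by
    pkg/test_mod.py; non-Python files are ignored.
    """
    tests = []
    for path in changed_files:
        directory, name = os.path.split(path)
        if name.startswith("test_"):
            # A changed test file always reruns itself.
            tests.append(path)
        elif name.endswith(".py"):
            tests.append(os.path.join(directory, "test_" + name))
    return tests

# Example: the changed-file list would come from DoCIF (or git diff).
print(affected_tests(["buzzmobile/vision/lane.py", "buzzmobile/tests/test_nav.py"]))
```

The selected list could then be handed to the test runner, with the full suite still running before merge as discussed above.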