@joshk - I have a somewhat related question. Have you considered introducing a new tagging system that would trigger only part of the build pipeline? The use case I have in mind is very simple: run the unit tests, and if they pass, run the docs build. But for pure docs PRs there's no need for the first step, so a `[docs only]` or `[skip test1]` in the commit message would jump straight to that step in the build process. In this example, groups of jobs would be tagged either as `test1` or `docs`.
Hi @bsipocz
Hmmmm, that is an interesting idea. Not at the moment, but it might be something for us to consider later. I think this might be a bit of an edge use case, although we might be surprised by what people need and want.
That sounds interesting! The following feature would be very useful to me:
My build has a number of jobs (7 total). I want to run all of them on my release branch. However, I only want to run a subset of them on my feature branches to speed up the Travis run and to save resources. According to support (thanks @joepvd !) this is not possible right now but might be in the future.
Would that be useful for other people, too?
@larsxschneider I love this idea, and definitely think it is a valid use case!
...it was about time! :sweat_smile:
All kidding aside, in my opinion this is THE missing feature of Travis which will also tempt all Jenkins lovers to give Travis another try.
I would strongly suggest having a look at the great job that the GitLab folks have done with pipelines and environments (no, I'm not part of the GitLab team).
Hi, it's looking good so far! In the example, the unit tests are bash scripts. For us, though, the unit tests are in multiple services, each with their own GH repo, and they currently trigger CI builds. The issue we have with this is that it doesn't report CI failures back to the GH issue that triggered it. I'm thinking about replacing the CI step with a pipeline repo, but I still don't see how to get around this issue.
So let's say I set it up like this:

- `service-1` has unit tests
- `service-2` has unit tests
- `integration` has integration tests
- `deploy` has deploy scripts
- `pipeline` uses this feature to test `service-1`, then `service-2`, then `integration`, then run the scripts to deploy

Then when someone submits a PR in `service-1`, that PR should cause Travis to run `pipeline`'s build instead of its own. But the interface from the PR should feel the same, meaning it should report failures back to the PR that triggered it. Metaphorically, I'm thinking about it like a file system soft link, or a C++ reference, where `service-1`'s `.travis.yml` has some configuration to say "I don't have my own CI; instead, go run `pipeline`'s with a parameter telling it to build my repo against this commit".
I'm expecting that this is how almost everyone is going to want to use it. Multiple repos that act as event triggers for the pipeline, and the pipeline should report back to them with its result. Eg even if you're not deploying, once you split your project into multiple repos, to use the pipeline to coordinate across those repos, they'll see their unit tests as just the first stage in their pipeline repo.
Also, shout out to y'all for working on this, I'm a huge Travis fan, and was worried I'd have to find a different CI or write a lot of wrapper code in order to get this kind of feature. Also, thx to @BanzaiMan for pointing me at it ❤️
Nice one! Is it possible to somehow "name" the jobs so that the job's intent is also revealed in the UI?
Hi everyone!
Build Stages are now in Public Beta: https://blog.travis-ci.com/2017-05-11-introducing-build-stages
Looking forward to hearing what you all think!
This looks really nice! The one thing I'd love though is conditional stages.
The same `on:` structure as `deploy` would work fine. In our case, I'd like to have a deploy stage that runs for tagged commits (using a specific regex tag format), but I don't want the stage to appear at all on builds otherwise, since none of them should be deploying. I think something like this also solves quite a few of the use cases above (docs-only builds, unit/integration test stages depending on the branch, etc.).
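For reference, the existing `on:` condition on `deploy` that this suggestion borrows from might look like the following sketch (the provider, script name, and tag regex are illustrative):

```yaml
deploy:
  provider: script
  script: ./release.sh
  on:
    # only deploy builds triggered by a git tag...
    tags: true
    # ...and only tags that look like a release, e.g. v1.2.3
    condition: "$TRAVIS_TAG =~ ^v[0-9]+\\.[0-9]+\\.[0-9]+$"
```

The request above would lift this kind of condition from the `deploy` phase up to the stage itself, so that non-matching builds never schedule the stage at all.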
First: Wow! This looks really really cool.
With that said, I think I found a bug? Maybe?
My build stages aren't respected correctly if I specify a ruby version at the top level (config, build log), only if I specify it inside the job itself (config, build log).
That is to say,
```yaml
language: ruby
rvm: '2.4'
cache:
  bundler: true
jobs:
  include:
    - stage: prepare cache
      script: true
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show
```
gives me four "Test" jobs and one "Prepare Cache" job, in that order, while inlining the `rvm` key as below gives me the proper one "Prepare Cache" job and three "Test" jobs.
```yaml
language: ruby
cache:
  bundler: true
jobs:
  include:
    - stage: prepare cache
      script: true
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
```
I would have expected them to be equivalent?
Deployment is a central piece of every Continuous Delivery pipeline. Some organizations or projects do not want to go with the Continuous Deployment model because it doesn't fit their workflow. That means they'd rather decide when to deploy on demand instead of deploying with every change. Are you planning to support the definition of a stage that can be triggered manually through the UI?
Python docker test/build/deploy fails for unknown reasons when converted to build stages. Should a separate issue be created?
When debugged and each step run in the tmate shell, everything works as expected.
Thanks for the feedback, everyone! We are collecting all your input, and we will conduct another planning phase after a certain amount of time, and evaluate your ideas, concerns, and feature requests. So, your input is very valuable to us.
@pimterry This makes sense. The `on:` condition logic is currently evaluated only after the job has already been scheduled for execution, and it only applies to the `deploy` phase that is part of the job. We'd want to make this a first-class condition on the job itself. You're right, this would make sense in other scenarios, too. I'll add it to our list.
@hawkrives I see how this is confusing, and it looks as if both configs should be equivalent. The reason they're not is that `rvm` is a "matrix expansion key" (see our docs here), and it will generate one job per value (in your case just one). The jobs defined in `jobs.include` will be added to that set of jobs. This makes a lot more sense in other scenarios, e.g. when you have a huge matrix and then want to run a single deploy job after it: https://docs.travis-ci.com/user/build-stages/matrix-expansion/. Evaluating this further is on our list, as we've gotten the same feedback from others, and we'll look into how to make this less confusing.
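A minimal sketch of the matrix-expansion pattern described here, modeled on the linked docs page (the Ruby versions and deploy script are illustrative):

```yaml
language: ruby
# each rvm value expands into its own job in the default "test" stage
rvm:
  - '2.3'
  - '2.4'
script: bundle exec rake
jobs:
  include:
    # this job is appended after the expanded matrix, in a later stage
    - stage: deploy
      script: ./deploy.sh
```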
@bmuschko Yes, that is one of the additions on our list. In fact, it was mentioned in the original, very first whiteboarding session, and it has had an impact on the specific config format that we have chosen for build stages.
@soaxelbrooke Yes, it would make sense to either open a separate issue, or email support@travis-ci.org with details.
Again, thank you all!
Hi guys! Great feature!
I've just started playing with it and have hit an issue with the build matrix: I have several Python versions in my build matrix, so it generates multiple `test`-stage jobs. Adding another stage without explicitly setting the `python` key generates a single job with all Python version values collapsed into a single value.
Here's the build: https://travis-ci.org/aio-libs/aioredis/builds/231530766
@popravich Hello. For individual issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new, or send email to support@travis-ci.com. Thanks.
I'd love it if the `test` job/stage was not always the first one, and if the main `script` was not included when it is `skip`.
@jeffbyrnes If you do not want `test` to be the first stage, please override the stage names.

> the main script was not included if it is `skip`

Do you mean that you don't want to see the message at all?
I'm seeing the same behavior as @hawkrives: there is no way to declare stages that execute before the build matrix. Any top-level keys that trigger any sort of build matrix (`rvm`, `env`, `node_js`, etc.), even if it's a single-job matrix, cause the `test` stage to be declared first, so it always executes first. Any `test` jobs declared within `jobs.include` are merely appended to the build matrix jobs.

The only solution I've found is to avoid the build matrix, manually enumerating each job of the matrix within my `jobs.include` section. This is fine -- I have total control -- but it means for big matrices I might write a script to generate my `.travis.yml`. The documentation could also describe this solution for newcomers.
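The workaround described here, with every matrix job enumerated by hand, might look like this sketch (the node versions and scripts are illustrative):

```yaml
language: node_js
jobs:
  include:
    # runs first, because no top-level expansion key creates a "test" stage
    - stage: prepare
      node_js: '7'
      script: ./warm-cache.sh
    # the "matrix" is spelled out manually instead of a top-level node_js list
    - stage: test
      node_js: '6'
      script: npm test
    - stage: test
      node_js: '7'
      script: npm test
```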
Is there a way to share the build cache between jobs with different environments? For example, can I populate the yarn cache in a stage using `node_js: 7`, and use that cache in both of my "test" jobs, both `node_js: 7` and `node_js: 6`?
imho the syntax is rather complex and hard to grasp compared to gitlab-ci. i already had a serious headache trying to understand how to use the matrix, and to make it even worse, stages and matrix can also be combined! the travis syntax forces all my scripts to get nested several levels deep in indentation.
for example, let's take the deploy-github-releases example: `deploy:` was at root level there. perhaps some syntax addon to write section names at root level instead of typing in the actual script?
```yaml
jobs:
  include:
    - script: &.test1
    - script: &.test2
    - stage: GitHub Release
      script: echo "Deploying to npm ..."
      deploy: &.deploy

.test1: |
  echo "Running unit tests (1)"
.test2: |
  echo "Running unit tests (2)"
.deploy:
  provider: releases
  api_key: $GITHUB_OAUTH_TOKEN
  skip_cleanup: true
  on:
    tags: true
```
ps: the [deploy-github-releases] example lacks the `echo` keyword in its script examples
@cspotcode Could you elaborate on how you would like to mix the build matrix and the stages, where you might want to execute some of it before the matrix?
As for the cache question, what is "an environment" when you say:

> Is there a way to share build cache between jobs with different environments?

The answer to your question, I believe, is "no", because `node_js: 6` and `node_js: 7` jobs will have different cache names, as explained in https://docs.travis-ci.com/user/caching/#Caches-and-build-matrices. They could contain binary-incompatible files and may not work in general. If you want to share things between them, external storage (such as S3 or GCS) would have to be configured.
@BanzaiMan
To mix the matrix with stages, I am imagining a situation like this example: https://docs.travis-ci.com/user/build-stages/share-docker-image/
However, in that example, all jobs in the "test" stage are declared explicitly within `jobs.include`. Suppose a developer wanted to use the build matrix to declare those "test" jobs but wanted the "build docker image" stage to execute first. Will that be possible, or will we be required to avoid the build matrix like in the linked example?
I now see that this is the same as what @jeffbyrnes asked about here: https://github.com/travis-ci/beta-features/issues/11#issuecomment-301110820
"An environment" means all of the things that make the cache names different: node version, ruby version, environment variables, OS, etc. I agree that sharing cache between node 6 and 7 may not work in general, which is why the default behavior is to have different caches. I'm asking if there is a way to override that behavior in situations where a developer knows that sharing cache will safely afford them a performance benefit without causing problems.
EDIT fixed a typo
@flovilmart As mentioned before, for particular use case issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new. Thanks.
Hi,
Is there a list of YAML keys I have to remove from the config (and move to `jobs.include`), so that travis would detect it as pipeline-enabled? I had to move them one by one until only notifications and cache were left at the root level of nesting.
@webknjaz I am not sure such a list should exist. This feature is meant to be compatible with the existing configuration, and if you had to do extra work, then there might be a bug. In https://github.com/travis-ci/travis-ci/issues/7754#issuecomment-301210607, I identified `matrix.fast_finish` as a potential culprit. Did you have this? If not, where can we see how you worked through the troubles?
Is there any way I can make certain parts of the matrix be in one stage, and others in another? Kind of like how `allow_failures` can be used with env vars to target multiple disparate jobs.
If I am seeing this correctly each stage uses its own "workspace" meaning a new clone of the same repository. Under certain conditions you'll want to reuse the workspace (and the produced outputs) of an earlier stage and continue work based on that result.
Example:
Is this going to be a supported feature? It's an essential feature for modeling many build pipelines out there.
> you'll want to reuse the workspace (and the produced outputs)

I believe you are looking for the Build Stages: Share files via S3 example or this earlier comment in this thread.
@BanzaiMan Yea, I think moving `matrix.fast_finish` has fixed it: https://github.com/GDG-Ukraine/gdg.org.ua/compare/5af2ffc...49f1b02 (JFYI)
Is there a way to ignore failures of a specific job in a stage? I tried
```yaml
# this is the second job in a stage
- env:
    - TOOLS_TO_BUILD="clippy"
  script: ./.travis/build-containers.sh
allow_failures:
  - TOOLS_TO_BUILD="clippy"
```

But that seemed to have no effect.
@bmuschko See also https://github.com/travis-ci/travis-ci/issues/7753.
seems allow_failures is not possible, as enabling the `matrix` keyword disables the whole `stages` concept.
```yaml
jobs:
  include:
    - php: "5.5"
    - php: "5.6"
    - php: "7.0"
    - php: "7.1"
    - php: "hhvm"
    - php: "nightly"

matrix:
  allow_failures:
    - php: hhvm
```
@jeffbyrnes With regards to the `test` stage being the first one, we have a change on our list that will make this possible. Essentially, you'll be able to specify a list of stages and, in doing so, modify their order. I'm not 100% sure I understand your second comment. The main `script` will not be run if it is `skip`. Do you mean you'd rather not have any message output at all?

@cspotcode That's true. At the moment, one cannot run a stage before the jobs that are expanded out of matrix expansion keys (such as `rvm`, `env`, etc.). This will be possible with the change I have mentioned before. For large matrices that have a lot of repetition it can make sense to use YAML aliases, see https://docs.travis-ci.com/user/build-stages/using-yaml-aliases/. About your question regarding the yarn cache, I'm not 100% sure. I believe the runtime version is always included in the cache key in our cache integration. You could, of course, always manage this manually, e.g. see https://docs.travis-ci.com/user/build-stages/share-files-s3/.
@glensc You're right that the YAML syntax has an additional level of indentation (`jobs.include` vs `jobs`), which we intend to get rid of in the future. Other than that, our syntax is very similar to the one GitLab supports, except that, of course, we also support matrix expansion, which can be confusing when combined with stages. You can always just list all jobs in `jobs.include`, though, and disregard the matrix expansion. We are considering allowing a `stage` key on the root `deploy` section, but we're not decided on that yet. If you need any help getting your `.travis.yml` file right, please always feel free to email us at support@travis-ci.org.
@webknjaz At the moment there's no official, public list of matrix expansion keys, although they're listed on the documentation for various languages. I've made a gist for you here: https://gist.github.com/svenfuchs/66c8b627dca35561ee1f0912d54dfd0d.
@ljharb At the moment there's no such way, and I'm not sure it would be beneficial. Looking at the confusion people sometimes go through, I'm afraid this might add to it. Do you have an example use case, though? I'd like to understand what you are trying to do.
@bmuschko You're right that every job runs on a fresh/clean VM using a new clone of the repository. In order to share build artifacts (for example a binary compiled in an earlier stage) you can do so using our artifacts feature (see https://docs.travis-ci.com/user/uploading-artifacts/), or manage the process manually, e.g. https://docs.travis-ci.com/user/build-stages/share-files-s3/. We do intend to improve on this in the future.
@shepmaster, @glensc `allow_failures` should continue to work as before. I am surprised our docs on stages do not mention `allow_failures`, though. I thought we had added that, and I'll make sure we fix it. It should mention `jobs.allow_failures`, not `matrix.allow_failures`. `jobs` is an alias key for `matrix`. So you want to specify `jobs.include` and `jobs.allow_failures`. Does that help? Here's an example build: https://travis-ci.org/backspace/travixperiments-redux/builds/226522374 and here's the respective `.travis.yml` file: https://github.com/backspace/travixperiments-redux/blob/primary/.travis.yml
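A minimal sketch of that `jobs.allow_failures` shape (the stage names, env value, and scripts are illustrative):

```yaml
jobs:
  include:
    - stage: test
      script: ./run-tests.sh
    - stage: test
      env: TOOLS_TO_BUILD="clippy"
      script: ./.travis/build-containers.sh
  allow_failures:
    # matched against job attributes, like the old matrix.allow_failures
    - env: TOOLS_TO_BUILD="clippy"
```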
@svenfuchs

> `jobs` is an alias key for `matrix`. So you want to specify `jobs.include` and `jobs.allow_failures`.

I think this explains https://github.com/travis-ci/travis-ci/issues/7754. If there is an existing top-level `matrix` key (as in the issue), depending on where `jobs` appears in `.travis.yml`, one or the other will be ignored (according to the YAML specification).
So there will be some modification required:
```yaml
jobs:
  fast_finish: true
  include:
    - stage: 1
      script:
        - echo yes
  allow_failures:
    ⋮
```
> allow_failures should continue to work as before

@svenfuchs yes, I see how it can work now, but I have to admit that it's pretty inelegant. I have to copy-paste my entire stage definition simply to mark it as allowed to fail. If any part of my stage definition changes, I have to make sure to duplicate those changes under the `allow_failures` key. The association seems backwards to me.
I'll probably scrounge around for some YAML magic to allow me to do something akin to `&reference` so I don't have to duplicate everything.
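A YAML anchor/alias pair can indeed avoid the duplication; a sketch, under the assumption that `allow_failures` accepts the full job mapping for matching (the job body is illustrative):

```yaml
jobs:
  include:
    # "&clippy-job" defines an anchor for this whole job mapping
    - &clippy-job
      stage: test
      env: TOOLS_TO_BUILD="clippy"
      script: ./.travis/build-containers.sh
  allow_failures:
    # "*clippy-job" repeats it verbatim, so both entries stay in sync
    - *clippy-job
```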
@svenfuchs ok, great! Specifying the order is definitely a really great thing to have. And sorry for the ambiguity; yes, I mean that I would prefer, if I'm skipping the built-in `script`, that it not appear at all.
@jeffbyrnes, let us go even further: if `install` is not specified, let it not run the default installation. We'd probably need a feature switch like "no-commands-auto-discovering" or something like this.
@svenfuchs the idea is that i want to be able to create a matrix, and then arbitrarily select which items in the matrix (which might not be a complete row or column, but might also include complete rows and/or columns) go in which stages.
Hi.
Great feature, the `stage` concept!
I have a question (hope it's not repeated): if a job ends successfully, and we create different stages, let's say for deploys, do those stages share files or scripts?
Use case: build & test an Android app, then a stage to UI-test the app, and then release many flavors, each one in a stage, and also release to GitHub.
@alorma Storage does not persist between stages.
This took a few false starts, but I got it working for a multi-test stage then single deploy stage usecase: https://github.com/peterjc/pico_galaxy/commit/4506825607ec5848e7663f2c6363fb1407f74498 https://travis-ci.org/peterjc/pico_galaxy/builds/232879541
And for multiple quick preliminaries prior to multiple slow tests: https://github.com/biopython/biopython/commit/4154a48c1531582d332bb1b8d4050838bdc6969b https://travis-ci.org/biopython/biopython/builds/232880583
The key point for me was appreciating that my test stage's `before_install`, `install`, `script` (etc.) defined at top level would by default also be applied to my new stages. I had to define (dummy) entries for those as well. I think this is exactly what @jeffbyrnes and @keradus were finding too.
In solving this the "View Configuration" tag next to the "Job Log" shown for a build was really helpful.
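That dummy-entry workaround might look like this sketch (the phase scripts are illustrative; `skip` is used to override the inherited phases):

```yaml
# top-level phases are inherited by every job unless overridden
install: ./ci/install-deps.sh
script: ./ci/run-tests.sh
jobs:
  include:
    - stage: test
      # inherits install and script from the top level
    - stage: deploy
      # override the inherited phases so they do not run here
      install: skip
      script: ./ci/deploy.sh
```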
@alorma you can emulate this with the `cache` directive: just specify the directory in the config. If you need to, you can put some files into that dir in the `before_cache` phase and optionally copy them from the cache dir to the corresponding locations somewhere in the `before_install` phase.
If you have multiple jobs in a stage, be mindful that caches may get corrupted, depending on the cache names, and may be overwritten by the last job that uploads the cache.
The best way is probably setting up a syncable resource such as S3 or a remote `scp` server. We have an example of S3 in the docs.
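A sketch of the cache-based approach described above (the cached directory and phase commands are illustrative):

```yaml
cache:
  directories:
    # shared between jobs whose cache names match (language, env, os, ...)
    - $HOME/.m2
before_cache:
  # stash anything later jobs need into the cached directory
  - cp build/opencv-artifacts.jar $HOME/.m2/
before_install:
  # the cached directory is restored at the start of each later job
  - ls $HOME/.m2
```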
With the matrix one could specify a big matrix of langs x envs; then via jobs it's possible to add extra jobs to the jobs generated from the matrix. Also, it's possible to completely skip the matrix and create jobs manually. But it still creates matrix-based jobs and assigns stages to them.
Is it possible to take a stage-first approach, but treat some stages as small matrices? I would like to first decide what kinds of stages I have, and then assign build envs to them.
Like the following example (it's not working, but I think it represents my idea pretty well):
```yaml
sudo: false
jobs:
  include:
    - stage: FOO
      php:
        - 5.4
        - 5.5
        - 5.6
        - 5.6
      env:
        - DB=mysql
        - DB=sqlite
      script: foo
    - stage: BAR
      php:
        - 7.0
        - 7.1
      script: bar
    - stage: BAZ
      php: hhvm-nightly
      script: baz
```
Second thing: let's say I specify everything via the good old matrix, and then inject one new stage via jobs. It goes in as the second stage, while the first stage is `Test` from the regular matrix. How could I inject a stage before that default `Test` stage without dropping the matrix and creating stages only via jobs?
Hi.
Using this travis.yml file: https://github.com/SchibstedSpain/Leku/blob/travis_stages/.travis.yml#L48
I get two test stages: https://travis-ci.org/SchibstedSpain/Leku/builds/233135760
Why? :S
@alorma did you get persistence to work with the cache across other jobs? I have a similar issue that I was hoping stages would fix, especially as there's an example about warming up the cache for other jobs. I have one build job to build OpenCV and put jars in ~/.m2, and once that's built, about 5 other jobs can kick off that require that dir. The .m2 cache doesn't seem to appear in the other stages.
S3 is probably not viable for me, as any PRs wouldn't get the credentials.
Just started trying @vb216 , i will see if jobs work :D
@alorma as per my observations, setup outside of the jobs section produces the test stage, and any `jobs.include` entries produce additional stages.
Is it possible to make dependencies, like apt packages or Android components, build-stage specific? E.g. I might need a dozen such dependencies for my test stage, but not for deployment.
@webknjaz i tried moving all "before" steps to a first job, and still two jobs for test :s
@alorma Because you are currently using a matrix expansion key at the root level: `jdk`.
If you move the `jdk` value into `jobs.include`, then it just makes one job.
I know this is somewhat confusing, and I hope there will be some nice docs or behavior for that.
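The suggested move might look like this sketch (the jdk value and script are illustrative):

```yaml
# instead of a root-level expansion key like:
#   jdk: oraclejdk8
jobs:
  include:
    - stage: test
      # inside the job, so it selects the JDK without expanding a matrix
      jdk: oraclejdk8
      script: ./gradlew check
```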
This is related to https://github.com/travis-ci/beta-features/issues/11#issuecomment-300848017
From simple deployment pipelines, to complex testing groups, the world is your CI and CD oyster with Build Stages.
Build Stages allows you and your team to compose groups of Jobs which are only started once the previous Stage has finished.
You can mix Linux and Mac VMs together, or split them into different Stages. Since each Stage is configurable, there are endless Build pipeline possibilities!
This feature will be available for general beta testing soon... watch this space!
We love to hear feedback, it's the best way for us to improve and shape Travis CI. Please leave all thoughts/comments/ideas related to this feature here.
Happy Testing!