svenfuchs closed this issue 6 years ago
Heads up, the comment on #11 points to itself instead of to here.
Is there any chance you could implement a more fine-grained job dependency graph feature than just "wait until the previous stage is done"? E.g. in my case I have a bunch of different compiler/OS configurations, and "populate cache" and "build" stages that each have a bunch of jobs corresponding to every configuration. In principle, a "build" job could start as soon as the corresponding "populate cache" job for that configuration was done, but right now the "build" stage doesn't begin until all of the jobs in the "populate cache" stage have finished.
See https://github.com/haskell/cabal/issues/4556 for more details.
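A hypothetical sketch of what such a per-job dependency declaration could look like; note that `name` and `depends_on` here are invented keys for illustration only, not anything Travis actually supports:

```yaml
jobs:
  include:
    - stage: populate cache
      name: cache-gcc          # hypothetical job label
      env: CC=gcc
      script: ./populate-cache.sh
    - stage: build
      depends_on: cache-gcc    # hypothetical key: start as soon as cache-gcc
      env: CC=gcc              # finishes, without waiting for the whole stage
      script: ./build.sh
```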
Is it possible to skip cache update for a particular stage?
I have an acceptance test job which should leverage the cache from the previous stage. However, I don't care about the new artifacts it creates/installs (which would be cached in the default configuration), and I don't want the cache updated at the end of that particular stage (while the cache should still be fetched at the beginning of that job to make the build faster).
Looks like there's a small bug. If I have multiple stages (e.g. 4) and I add conditions to run only 1 stage, then the main job will be stuck in the "created" state.
Hm, that's also true for multiple jobs (e.g. if conditions remove 2 jobs and 2 remain).
@ujovlado I think that is https://github.com/travis-ci/travis-ci/issues/8415. We will look into it shortly.
@BanzaiMan thanks!
@ujovlado No worries. That is one long standing source of confusion, which is tracked in https://github.com/travis-ci/travis-ci/issues/1066.
> [1] A strict travis-yml parser has been shipped.
Does this mean we'd have a CLI tool for config validation? Maybe embed it into the existing travis CLI utility?
Somewhat related to:

> Allow specifying `allow_failure: true` per job on `jobs.include`. (Probably yes, but not before [1].)
Feature: Set the build status when all remaining jobs and stages are allowed to fail. https://github.com/travis-ci/travis-ci/issues/8425 also asks this in the case where the last stage consists of jobs which are allowed to fail.
Guys, plz take a look at this build: https://travis-ci.org/aio-libs/multidict/jobs/275530708#L848
There should be a deploy step in this job, but it's missing according to the logs. I'm going to guess that some of your changes might've affected build step resolution.
Any ideas? Should I file a separate issue?
One thing that's frustrating when working on a build with stages, compared to `travis-after-all`, is that it seems to take much longer for a stage to start after the previous stage completes. I'm not sure if this is what's happening, but it's like the next stage's jobs get tossed onto the end of the queue. It would be nice if subsequent stages could run more quickly after the earlier matrixed jobs have already started. I can clarify if that isn't clear.
@abbeycode Each job is independently queued (in other words, only when the execution gets to that job, is it inserted into the queue). This is due to how the queue is processed; if it is in the queue, it is assumed to be runnable. If we put the last job on the queue, it may be ready to run before it is supposed to.
@webknjaz We are not just "guys", so please refrain from using that. Thanks. :-)
As for your deployment not triggering, it is reported as https://github.com/travis-ci/travis-ci/issues/8337. The current workaround is to not `skip`, but use an actual command (`true`, or `echo skip`, or whatever).
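Concretely, that workaround looks something like the sketch below; the deploy provider and script path are placeholders, not from the original report:

```yaml
jobs:
  include:
    - stage: deploy
      # `script: skip` can prevent the deploy step from running (issue 8337),
      # so use a real no-op command instead:
      script: true            # or `echo skip`
      deploy:
        provider: script      # placeholder provider
        script: ./deploy.sh   # placeholder path
```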
Hey guys, it has been really nice to test the stages feature. I would like to suggest running stages concurrently; it would be fantastic!
@cristopher-rodrigues We are not just "guys", so please refrain from using that. Thanks. :-)
I would like to suggest running the stages concurrently, it would be fantastic!
Could you elaborate on this? If the jobs in these stages can be run concurrently, they can be in the same stage and not lose any meaning, can't they? Or did I miss some broader context?
It would be nice to have a label option when running multiple stages with the same name. As a workaround we have put a comment in an environment variable (which is displayed).
@hmottestad This request is listed in @svenfuchs's list above.
@BanzaiMan Sorry for that, my bad.
Here are some use cases:

I execute unit tests and e2e tests in different stages, for example; they can be executed in parallel to speed things up a little bit.

I run different journeys of e2e tests and they do not depend on each other. My e2e tests are complex and time consuming; if we were able to execute them asynchronously, it would be way faster.

Something like this:
```yaml
...
jobs:
  include:
    - stage: Tests Unit
      async: true
      script: ...
    - stage: Tests E2E Journey Foo
      async: true
      script: ...
    - stage: Tests E2E Journey Bar
      script: ...
      async: true
```
@cristopher-rodrigues If you give those three the same stage, they'll execute in parallel, won't they? Do they need to be in their own stages?
@hawkrives No, I wish they were different and independent stages. If they fail, I can treat them in a particular way.
@cristopher-rodrigues Thank you for the clarification.
I seem to have a duplicate stage for node 6 in the vimflowy travis build: https://travis-ci.org/WuTheFWasThat/vimflowy/builds/276374578 yet I've only got

```yaml
node_js:
  - '6'
  - '6.1'
```

in my .travis.yml
@willprice The jobs defined in `jobs.include` are picking up the first of the `node_js` array you are using to define a matrix expansion. They are different jobs; do compare https://travis-ci.org/WuTheFWasThat/vimflowy/jobs/276374579 and https://travis-ci.org/WuTheFWasThat/vimflowy/jobs/276374581.
@BanzaiMan Ah yes, I can see the difference now. Will dive more into the docs tomorrow. I'm aiming to get the first stage to run across all environments defined in `node_js`.
@BanzaiMan Some feedback on working with the build stages:
I finally understand how the `node_js` entry adds implicit jobs to the `test` stage; this is quite confusing and took several read-throughs of the docs to fully understand.
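To illustrate that point with a minimal sketch (values are placeholders): each entry in a top-level `node_js` array becomes an implicit job in the default `test` stage, in addition to whatever `jobs.include` defines:

```yaml
language: node_js
node_js:
  - '6'      # implicit job in the default "test" stage
  - '6.1'    # another implicit job in the "test" stage
jobs:
  include:
    - stage: deploy   # explicit extra job in its own stage
      script: ...
```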
The builds take a very long time from commit to start, is there a lot of contention for machines supporting build stages?
I'm not sure if I'm doing it wrong or I found a bug.
I have a deploy stage (https://github.com/date-fns/date-fns/blob/master/.travis.yml#L29-L33):
```yaml
jobs:
  include:
    - stage: deploy
      if: branch = master AND tag IS present
      script: ./scripts/release/release.sh
```
The stage is supposed to run when the branch is `master` and it's a tag build (`branch = master AND tag IS present`). I ran a tag build on `master` but the stage was skipped: https://travis-ci.org/date-fns/date-fns/builds/276882550. I created the tag using the GitHub Releases interface. It did work before (although I had a problem where `deploy` ran 3 times, so I hoped that stages would solve it): https://github.com/date-fns/date-fns/commit/e5ec2d59576645abe93e893689d21aa755ace3f6#diff-354f30a63fb0907d4ad57269548329e3L25
@kossnocorp
Funny thing is that `$TRAVIS_BRANCH == $TRAVIS_TAG` for tagged commits: https://docs.travis-ci.com/user/environment-variables/#Default-Environment-Variables

They probably rely on these env vars, which causes the expression to evaluate to `false`.

I guess you could work around the issue using the `branches.only` and/or `deploy.on.branch` config keys:
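For example, a sketch of that workaround using a script deploy provider; the provider choice, tag pattern, and script path are assumptions for illustration:

```yaml
# Only build master and version tags at all
branches:
  only:
    - master
    - /^v\d+\.\d+\.\d+$/

deploy:
  provider: script
  script: ./scripts/release/release.sh
  on:
    tags: true    # deploy only for tag builds
```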
@webknjaz I see, thanks! In my case I can just get rid of the `branch = master` condition.
@svenfuchs Thank you for this nice new feature. In my case it works like a charm.
https://travis-ci.org/acsone/alfodoo https://github.com/acsone/alfodoo/blob/10.0/.travis.yml
I agree with @hmottestad for the need of a label option and see that this feature is listed in your list above.
:+1:
@svenfuchs We no longer have access to the stage view...
Same for me. The stages view broke today. It appears for a second on refresh and then disappears, in both FF and Chrome.
@svenfuchs, @BanzaiMan - Thank you for adding this feature, it looks super useful for our use case.
However I've noticed one potential issue. My first try on adding an order of stages is here: https://github.com/bsipocz/astropy/blob/f756545a90347191e71605f87cb498ac8b157cce/.travis.yml#L54
The logic would be to have a default stage "tests", but only run it after we make sure the most basic checks pass; then, if everything passed on "tests" too, we would also run a few more jobs (e.g. hold the osx tests for the case when everything else has already passed).

However it seems that using the default stage doesn't take the ordering into account, and it is scheduled last: https://travis-ci.org/bsipocz/astropy/builds/277572768
@bsipocz, I guess it's singular: `test`, not `tests`.
https://travis-ci.org/muvarov/odp/builds/277479969?utm_source=github_status&utm_medium=notification Matrix is still not visible.
@webknjaz - not sure what you mean; `"tests"` is the name of one of the stages I defined, I could have called it anything else.
@muvarov you've got a single stage: the default matrix stage is called `test` and you add a bunch of jobs to that very same stage, so I guess there's no reason to title all of the jobs with a single stage.
@bsipocz not really, you define stages by adding jobs to that stage name. And the `stages:` list is just for ordering; it does not define stages.
btw, I see you have two duplicate `stages:` lists. I'd say get rid of one of them.
@webknjaz thanks, after renaming it works!
There is one global stage above, called "tests" (not to be mixed up with `stages`, which does the ordering). I was assuming that setting the global `stage` works just as it does for the other global settings, which may not be a good assumption (but then that would be valuable feedback for the group working on this feature).
@muvarov It should be fixed now. Sorry for the issues, and thanks for pinging us.
@drogus yes, thanks. I see that old builds now show correctly.
@bsipocz I believe the top-level `stage` key is causing some confusion, though it is behaving exactly the way you defined. I will explain.

Notice that the top-level `stage` key defines the default stage for all the jobs in the configuration. (This is not explicitly discussed in the current documentation; this should be fixed.)
The current value is an array:

```yaml
stage:
  - tests
```
The `stage` key is assumed to be a scalar, but nothing further is currently done (which I suppose can be considered a bug). This means that the default stage name this key defines is `["tests"]`, which is inherited by the jobs defined in `jobs.include` which do not have their own `stage` value defined.
I believe that you are expecting the second job in `jobs.include` to implicitly define the stage name to be `master_only`, which was previously defined in the first job, but because of the top-level `stage`, this second job gets the stage name `["tests"]`.
So, all in all, if you remove the top-level `stage` key, I believe you would get what you want. Please try it, and let us know how it works for you.
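For reference, a sketch of a layout without the top-level `stage` key, using the stage names from this discussion (the conditions and scripts are placeholders):

```yaml
jobs:
  include:
    - stage: tests
      script: ./run-tests.sh
    - stage: master_only
      script: ./extra-checks.sh
    - script: ./more-checks.sh   # no `stage:` key, so it joins master_only
```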
Hi there, I've been using build stages for the past few months; here's my feedback:

For example, my build stages include a "Deploy" one. However, I would prefer to have this stage only run when I have a Git tag (aka GitHub Release) or when on the `master` branch. That would speed up my builds and simplify some of the scripts (I do the tag/branch checks in the scripts with `$TRAVIS_BRANCH` and `$TRAVIS_TAG`).
@superzadeh you can do this: just add a new `if: tag IS present` or `if: branch = master`
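Put together, that suggestion would look something like this (the script path is a placeholder):

```yaml
jobs:
  include:
    - stage: Deploy
      # run this stage only for tag builds or pushes to master
      if: tag IS present OR branch = master
      script: ./deploy.sh
```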
It looks like the build status turns into "canceled" instead of "failed". See: https://travis-ci.org/cherrypy/magicbus/builds/278959885
Do you mean that you were expecting the status to be "failed" when the failed jobs in the "Test" stage canceled the jobs in "Test under os x (last chance to fail before deploy available)"?
Exactly
`jobs.fast_finish: true` doesn't cause a green build here: https://travis-ci.org/cherrypy/magicbus/builds/278982813

I've set `allow_failures` for the whole of OS X, and the only stage which hasn't completed yet contains only OS X jobs, but the status is still `started`.
We have shipped iteration 2 for the Build Stages feature.
Along with a lot of bug fixes and small improvements this includes:
Find out more about this on our blog, and documentation here and here.
We would love to hear your feedback. Please leave all thoughts, comments, ideas related to this feature here.
Happy Testing!
FAQ
What feature requests have you received for improving Build Stages so far?
We are adding this list of feature requests (with a rough indication of how likely we are going to prioritize it) so you don't have to ask about these again :)
- Allow specifying `script` etc. on the new `stages` section, so they can be shared across all jobs in one stage. (Yes, but not before [1])
- Allow specifying `allow_failure: true` per job on `jobs.include`. (Probably yes, but not before [1].)
- Allow specifying the `stage` key on the `deploy` section. Turn this into a stage and a job. (Not quite sure, not before [1].)
- Allow `skip: all`, so one does not have to overwrite `before_install`, `install`, and `script`. (Sounds like a good idea? Not before [1])
- Allow matrix expansion on `jobs.include.env: [FOO=foo, BAR=bar]`. (Not quite sure)
- A `.travis.yml` editor/web tool. (Yes, based on the specification produced in [1])
- … ([2])
- On the `running` tab, consider grouping jobs per build (and possibly stage). (Interesting thought. We are working on improving this UI, and might consider this in a later iteration.)

Other improvements:

[1] A strict travis-yml parser has been shipped.

[2] The GitHub commit status API has the known limitation that new updates are rejected after the 1000th update. They are working on improving this, and providing a way for us to post more updates. Until this is unblocked we are unlikely to make any changes to our commit status updates.