Closed nmenon closed 5 years ago
@murpdj72 assigned to you
@nmenon What defconfig(s) do we want to build? Any specific requirements? Do you want to use TI config fragments?
@murpdj72 a) no TI config fragments. b) x64, am64, armv7 as a start.
we should add COMPILE_TEST to the list as well..
Any specific toolchains? I was going to use the gcc 8.3 compiler. I assume you mean https://cateee.net/lkddb/web-lkddb/COMPILE_TEST.html
gcc 8 might be safe for now. gcc 9 would be better, but I'm not sure of its stability yet.
yes on compile_test.
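To make the requested targets concrete, here is a hedged sketch of the three builds with COMPILE_TEST switched on. It is a dry run (commands are only echoed, so drop the `run` wrapper to execute it inside a kernel tree); the cross-toolchain prefixes and defconfig choices are assumptions, not something settled in this thread.

```shell
#!/bin/sh
# Dry-run sketch of the three requested builds with COMPILE_TEST on.
# run() only prints each command; remove the echo to build for real.
run() { echo "$@"; }

build_target() {
    arch=$1; cross=$2; config=$3
    run make ARCH="$arch" CROSS_COMPILE="$cross" "$config"
    # Turn on COMPILE_TEST so otherwise-unselectable drivers compile.
    run ./scripts/config --enable COMPILE_TEST
    run make ARCH="$arch" CROSS_COMPILE="$cross" olddefconfig
    run make ARCH="$arch" CROSS_COMPILE="$cross" -j"$(nproc)"
}

build_target x86_64 ""                   defconfig           # "x64"
build_target arm64  aarch64-linux-gnu-   defconfig           # am64 family
build_target arm    arm-linux-gnueabihf- multi_v7_defconfig  # armv7
```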
Can we get sparse and checkpatch runs? Not sure how to limit those to only the components of interest, though. I've only done plain compile testing with travis.
@murpdj72 oh yeah -> when tomba mentioned it, I recollected https://github.com/nmenon/kernel_patch_verify -> will need to make sure bisectability etc. are handled. checkpatch, coccicheck, sparse, smatch etc. are low-cost automation problems to get rid of..
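For reference, a sketch of how those low-cost checks are typically invoked from a kernel tree. Again a dry run that only prints the commands; the patch filename is purely illustrative, and smatch is assumed to be installed separately.

```shell
#!/bin/sh
# Dry-run sketch of the cheap static checks named above.
# run() echoes each command; remove it to execute in a kernel tree.
run() { echo "$@"; }

# checkpatch on one patch of a series (filename is illustrative)
run ./scripts/checkpatch.pl --strict 0001-example.patch

# sparse: C=1 checks files being recompiled, C=2 rechecks everything
run make C=1 drivers/gpu/drm/bridge/

# smatch slots into the same hook (assumes smatch is installed)
run make C=1 CHECK="smatch -p=kernel" drivers/gpu/drm/bridge/

# coccicheck restricted to one subtree to keep runtime bounded
run make coccicheck MODE=report M=drivers/gpu/drm/bridge
```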
OK. Let's first get the basic builds working, then we can add the other checks. But if I can add all of this in the initial yaml file, I will.
So doing some testing, I am finding we have some limitations. We are given a 50-minute window after VM boot, and this includes any installs that need to happen. So far I cannot perform two simple builds in 50 minutes (see below).
I am also finding that the servers are very limited in processing power. The best I have seen is 2 processors, which means the number of build threads is very low. I have tried the other available distributions, but they perform the same.
I am looking for ways to parallelize the builds, as well as better servers to commission within Travis. I am also looking at matrixing the builds. This is the travis.yml I am currently working with: https://github.com/murpdj72/linux-kernel-led/blob/travis_work/.travis.yml
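A minimal sketch of the matrixing idea, assuming legacy travis-ci.org syntax: each `env` entry becomes its own job, so the configs build in parallel VMs instead of back to back inside one 50-minute window. The toolchain prefixes and defconfig names here are illustrative, not taken from the actual file.

```yaml
language: c
env:
  - ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- DEFCONFIG=defconfig
  - ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- DEFCONFIG=multi_v7_defconfig
script:
  - make ARCH=$ARCH CROSS_COMPILE=$CROSS_COMPILE $DEFCONFIG
  - make ARCH=$ARCH CROSS_COMPILE=$CROSS_COMPILE -j$(nproc)
```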
@murpdj72 awesome :) - better than none. Any chance we can put different profiles in as separate ones? Or do you see this as a dead end entirely?
I put a travis_baseline branch based on v5.2 tag that you could send a pull request to. you could refer to this issue to link the PR to the issue.
@nmenon Well, travis-ci is triggered on a branch update. I need to play around with matrixing and see if there is a way to speed things up or trigger back-to-back builds. Otherwise we might need to trim our requirements to building arm64 once with COMPILE_TEST and running the kernel_patch_verify script. All the other heavy lifting can be done by our Jenkins server leveraging AWS, but I am not seeing much speed improvement with AWS either; they are just not IT-managed servers.
Let's only build the parts we're working on: the DP driver, the PHY, and whatever else comes up.
@tomba What directories specifically would we build?
For the DP:
drivers/gpu/drm/bridge/
drivers/phy/ti/
drivers/phy/cadence/
We will need something like this for USB as well.
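Kbuild can restrict a build to exactly those subtrees: passing a directory path with a trailing slash as the make target builds only that directory (against an already-configured tree). A dry-run sketch, with an assumed arm64 cross prefix:

```shell
#!/bin/sh
# Dry run: print the per-directory build commands instead of running
# them. A trailing slash tells kbuild to build everything under the
# directory; the tree must already have a .config.
run() { echo "$@"; }

for dir in drivers/gpu/drm/bridge/ drivers/phy/ti/ drivers/phy/cadence/; do
    run make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- "$dir"
done
```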
What is the longevity of this project? How likely is it that we will need to reuse this mechanism? I am trying to determine how much effort to put into this.
I am thinking of scripting the actual build work the way U-Boot's buildman does, with a recipe file that contains what we need, then cloning that repo and linking the script into the kernel code.
YAML is not really a scripting language, so we should probably move this to a bash script for ease of updating.
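A minimal sketch of that recipe idea; everything here (file format, function name) is hypothetical, not an existing tool. One "arch cross_compile defconfig" triple per line, consumed by a small POSIX shell loop, again echoing the commands rather than executing them.

```shell
#!/bin/sh
# Hypothetical buildman-style recipe reader. Recipe lines look like:
#   arm64 aarch64-linux-gnu- defconfig
# Blank lines and '#' comments are skipped; commands are only echoed.
run() { echo "$@"; }

build_from_recipe() {
    while read -r arch cross config; do
        case $arch in ''|'#'*) continue ;; esac
        run make ARCH="$arch" CROSS_COMPILE="$cross" "$config"
        run make ARCH="$arch" CROSS_COMPILE="$cross" -j"$(nproc)"
    done < "$1"
}
```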
Hoping we have a year or more of work. But the key is that we are currently getting blocked on minor issues that slow down the upstreaming, which means that if the automation is effective, the lifetime of the project should be shorter. However, the investment made here can be carried over to other, similar projects elsewhere in the Linux community (not just us).
Closing since this is done. See PR #5
Create a travis-ci.org compliant yaml file that can be used for travis testing.