vanti opened this pull request 5 years ago (status: open)
Updated to ignore files whose device type does not have a corresponding job template defined.
I would need to look into this in more detail, but some high-level comments based on just the description:
First of all, as a generic comment: I'm a big fan of automation, have practiced it for a long time in different contexts, and so... I know its limits. In some cases, things are better done manually than with fragile/inflexible automation.
> When new test images are added to the directory, jobs are automatically submitted
Sounds cool, but you need to run a command to add a new test image to the directory, right? Then, instead of running `cp zephyr.bin ...` and having a job submitted automatically after that, you could just as well run something like `submit-to-lava.sh zephyr.bin`, which would both copy the binary where needed and post a job. And most importantly, it could easily accept additional params.
> using the template corresponding to the device type.
I'm afraid that's too inflexible. A LAVA job description depends not just on the device type, but more on the actual binary to be tested. A standard Zephyr test vs. a MicroPython binary vs. a Zephyr.js binary vs. a networking test vs. a GoogleIoT test all could/would run in different ways. That's why `submit-to-lava.sh -t job_template_name zephyr.bin` would be the superior solution.
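A minimal sketch of what such a wrapper could look like. The option letter, the fallback template name, and the `echo` standing in for the actual copy-and-submit step are all assumptions for illustration, not code from this PR:

```shell
#!/bin/sh
# Hypothetical sketch of the proposed submit-to-lava.sh wrapper.
# A real version would copy the binary into the watched directory and
# invoke the LAVA submission tool; an echo stands in for that here.
submit_to_lava() {
    template="default"      # assumed fallback when -t is not given
    OPTIND=1                # reset so the function can be called repeatedly
    while getopts "t:" opt; do
        case "$opt" in
            t) template="$OPTARG" ;;
            *) return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    binary="$1"
    if [ -z "$binary" ]; then
        echo "usage: submit_to_lava [-t template] binary" >&2
        return 1
    fi
    echo "submitting $binary with template $template"
}
```

With this shape, `submit_to_lava zephyr.bin` keeps the common case as simple as a copy, while `-t` covers the MicroPython/networking/GoogleIoT variants mentioned above.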
> ./monitor-images.sh
We have a default identity being set up, so we can just as well use that by default (while allowing it to be overridden).
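That default-with-override behaviour is a one-liner in shell; note that `default` as the identity name is a guess, not something taken from the PR:

```shell
#!/bin/sh
# Use the caller-supplied LAVA identity if given, otherwise fall back
# to the default one ("default" as the name is an assumption).
lava_identity() {
    echo "${1:-default}"
}
```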
> The script is automatically run when running 'make all'.
Based on all the above, I would probably prefer a different solution for submitting jobs, so if we go forward with this "automatic monitoring daemon" approach, I'd suggest making it a separate target, so that people who won't use it don't have to run it (because it may lead to additional confusion and/or errors).
> submit-to-lava.sh -t job_template_name zephyr.bin
This is, btw, very similar to how we handle this matter in our Jenkins jobs (indeed, I'm probably biased towards that solution given my knowledge of and familiarity with it).
(I actually wanted to sit down with Vincent at the sprint and show him this Jenkins-based LAVA setup to make sure he's aware of it, but we had a pretty packed schedule there, so I forgot about it later.)
Thanks @pfalcon for the comments. To put this in some context: I asked @galak what would be missing for automated testing of Zephyr to happen, besides what we have been working on (i.e. adding boot methods and device types), and he said we need a way to automatically submit jobs once an executable is placed into the test-images folder. So I took a first stab at putting something together that works for what I have today, TI CC32xx and OpenOCD, to help get the ball rolling. It will likely take several iterations for us to converge on something that works for the wide variety of tests we have.
> Sounds cool, but you need to run a command to add a new test image to the directory, right? Then, instead of running `cp zephyr.bin ...` and having a job submitted automatically after that, you could just as well run something like `submit-to-lava.sh zephyr.bin`, which would both copy the binary where needed and post a job. And most importantly, it could easily accept additional params.
If something similar to `submit-to-lava.sh` already exists, I am all for reusing its functionality or combining it with what I have here. Maybe you can submit a PR?
> I'm afraid that's too inflexible. A LAVA job description depends not just on the device type, but more on the actual binary to be tested. A standard Zephyr test vs. a MicroPython binary vs. a Zephyr.js binary vs. a networking test vs. a GoogleIoT test all could/would run in different ways. That's why `submit-to-lava.sh -t job_template_name zephyr.bin` would be the superior solution.
I wanted to see if we could keep it simple by having one template per device type. If, as you said, we expect the job definition to vary significantly between binaries, then we have no choice but to write a job definition for each test. That is extra work for the test writer, especially in cases where 99% of the tests could have shared the same job definition. It is also unclear to me who would be responsible for writing these job definitions. Would everyone who contributes tests to Zephyr learn LAVA and provide their own LAVA job definition? What would you think of an approach where we make the '-t' flag optional and use a default template when it is not specified?
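That compromise, explicit template when given, per-device-type default otherwise, could look roughly like this. The `job-templates/lava_<device-type>.job` layout follows the PR description; the function and override names are mine:

```shell
#!/bin/sh
# Pick the job template for a binary: an explicit override (the
# optional -t value) wins, otherwise fall back to the per-device-type
# template described in this PR.
resolve_template() {
    # usage: resolve_template <device-type> [override-name]
    device_type="$1"
    override="$2"
    if [ -n "$override" ]; then
        echo "job-templates/${override}.job"
    else
        echo "job-templates/lava_${device_type}.job"
    fi
}
```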
> Based on all the above, I would probably prefer a different solution for submitting jobs, so if we go forward with this "automatic monitoring daemon" approach, I'd suggest making it a separate target, so that people who won't use it don't have to run it (because it may lead to additional confusion and/or errors).
Currently, if no template has been added to the job-templates directory for a given device type, no job is auto-submitted and everything works as before. The way I see it, we should agree on a mechanism that everyone would use and improve on it incrementally. Adding build targets that only some of us use is actually more confusing, imho.
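The "ignore if no template" behaviour described here boils down to a file-existence check. The directory layout is from the PR description; the function name is a hypothetical for illustration:

```shell
#!/bin/sh
# Return success only when a job template exists for the device type,
# so the monitor can silently skip images without a matching template.
has_job_template() {
    device_type="$1"
    [ -f "job-templates/lava_${device_type}.job" ]
}
```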
When new test images are added to the directory, jobs are automatically submitted to LAVA using the template corresponding to the device type. The templates are stored under 'job-templates/lava_<device-type>.job'.
Usage: ./monitor-images.sh <your LAVA identity>
The script is automatically run when running 'make all'.
Signed-off-by: Vincent Wan vincent.wan@linaro.org