Open drbild opened 4 years ago
The xaprw001_dev artifact needs to be uploaded to S3 in order to be able to install & test.
Is that something that you can do?
If I understood your checklist item correctly, you wanted me to upload the artifact to the xaptum-captive-firmware-dev
bucket after the tests have passed.
What I meant by my comment is that I needed to do that first to be able to install it on the device.
That checklist item needs to be the first item.
I see, that makes sense. Can you please reorder the checklist?
This does pose a problem for testing production releases (which, in general, I think should be the primary focus of our testing for future releases. Agree?).
We can't upload those before testing, because a customer might install them. But we can't install them easily for testing without uploading.
I can think of several ways to address this, but am curious what your thoughts are.
Since we are currently using a manual upload to S3, we could upload the image to S3 without adding it to the captive server; the customer won't see it. However, that is probably not a good idea for auditing, and testing the device on the production ENF, even on our own /64, is probably not a good idea either.
Agreed on not a good idea. We shouldn't be manually uploading to S3 anyway; it should all be done through the captive API.
We could put a production router card on the captive-dev network. We're testing the build, not the delivery mechanism in production. Once we complete testing, we can upload the artifact to the production S3 bucket and add it to ENFCLI.
Interesting.
The challenge with this approach had been that the build variant (dev vs. prod) hardcoded the network /64 (captive vs. captive-dev) in the mender configuration files and certificates. But now that the mender config is removed, there may not be any hardcoded references to the network left in the firmware itself.
It would certainly be ideal to be able to run either build on either network; the only intended difference between the dev and prod builds is the additional debug tools and serial console.
Can you double check in the xaprc and xaptum-buildroot/buildroot-external-xaptum repos that we no longer hardcode any reference to the particular network?
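A quick way to audit this is a recursive grep over both checkouts. A sketch (the string `captive-dev` below is a placeholder; substitute the actual captive and captive-dev /64 prefix strings that appear in the configs):

```shell
# In the real checkouts you would run something like:
#   grep -rn 'captive-dev' xaprc buildroot-external-xaptum
# Self-contained demo: a file with a hardcoded reference gets flagged.
tmp=$(mktemp -d)
mkdir -p "$tmp/xaprc"
echo 'SERVER=captive-dev.example' > "$tmp/xaprc/config"
grep -rl 'captive-dev' "$tmp"    # lists the offending file
rm -rf "$tmp"
```

An empty result from the real invocation (exit status 1 from grep) would confirm nothing is hardcoded.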
> I can think of several ways to address this

Another option is to extend the captive server object model to support "released" and "unreleased" firmware versions.
When a firmware is first uploaded it is "unreleased" and not visible to customers, but is visible to users in our test networks. After testing, we mark the firmware as "released" to make it visible to customers.
This requires changes to the captive API and permissioning model, so I definitely (at least for now) prefer your approach of running the prod firmware on the captive-dev network.
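The proposed lifecycle is small. A minimal sketch, assuming a per-firmware status field that gates customer visibility (the function and status names here are illustrative, not the actual captive API):

```shell
# Hypothetical model: each firmware record carries a status; only
# "released" firmware is visible to customers.
visible_to_customers() { [ "$1" = released ]; }

status=unreleased                      # set at upload time
visible_to_customers "$status" && echo visible || echo hidden   # prints "hidden"

status=released                        # flipped after tests pass
visible_to_customers "$status" && echo visible || echo hidden   # prints "visible"
```

Test networks would ignore the flag and see both states, which is what lets us install and test before release.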
> Can you double check in the xaprc and xaptum-buildroot/buildroot-external-xaptum repos that we no longer hardcode any reference to the particular network?
The cert for validating the signature on the mender artifact is still included (and needed) in the build. That limits a single router card to just dev builds or just prod builds, but that should be ok. The captive-dev API server will need to serve both variants, and the tester must ensure they upgrade a particular card to the right variant (or the upgrade will fail).
It's time to release version 1.1.1. Let's use this process:
@dberliner
`git tag -a -m "release 1.2.0" v1.2.0 25a335ef && git push --tags`
@glfejer
@drbild If all tests passed
@glfejer If all tests passed
@glfejer