strongly-typed opened this issue 7 years ago

After introducing a new release cycle (#160, #237), the next release is knocking on the door (2017q2 tags at #249 and #248 ;-) I really appreciate the professional management of @salkinium!

What about adding a (manual) release test with some dev boards to that release cycle? HWUT is dead, so let's start with something easier.

Dev boards with general availability are IMHO:

They could be wired together with some LEDs, TTL2USB adapters, CAN transceivers, and well-known sensors (I2C, SPI, ...). There should be a generally accepted test setup, e.g. like the mbed CI Test Shield.

The QA team :1st_place_medal: should not spend more than 15 minutes running some examples to verify that at least in that setup something works. The test coverage may still be small, but it may catch some nasty errors that might be introduced e.g. by updating header files.

I would like to discuss some options for advancing the release cycle.
> I really appreciate the professional management of @salkinium!
Awwww. 😍 pets ego
I think this is a great idea. The simplest way to start would be to actually run the unit tests on the device for every release and attach the output to the release in some form.
The Q2 release is in ~7 weeks; we should agree on a timeline, otherwise this may lead to some serious yak shaving.
> mbed CI Test Shield.
Contractually not allowed to speak ill of The World's #1™ embedded test shield… 😅
Might be interesting to create just one PCB that fits on all the Discovery and Nucleo-32/64/144 boards. That would be some serious footprint-overlay-magic*, but would save on costs.
I've been thinking about hardware test tools and setups a lot and have many ideas bouncing around my head, so I'll sketch them out. I'm going to be fairly busy until the end of May, so this could take a while.
* do not attempt, professional engineer in a closed room
Thanks for the feedback. I'm not yet aiming at a board of our own. That's something for later.
> actually run the unit tests on the device
That's a great idea. We haven't done that for years. For AVRs this is only possible on the ATmega2560, and I don't think I have access to one at the moment. If my preferred seller from Asia is quick enough, a board may arrive before the end of Q2.
Is there a strategy to split the unittest into smaller chunks which can be tested individually?
That's a great first step to introduce a simple workflow. I would call it an achievement if we manually run unittests on an ATmega, a STM32L4 and a STM32F4 for the next 2017q2 release. Agreed on that?
Maybe I'll prepare a breadboard with:
UART is tested by using the logger to observe the example.
> That would be some serious footprint-overlay-magic*, but would save on costs.
For the pinout issue, the simplest approach would be:
`DUT <-> FPGA as a static switch matrix <-> Sensors`
The FPGA can be reconfigured to adapt different pinouts to our sensor boards.
> Is there a strategy to split the unittest into smaller chunks which can be tested individually?
Paging @dergraaf, paging @dergraaf. We need a knowledge infusion stat!
> manually run unittests on an ATmega, a STM32L4 and a STM32F4 for the next 2017q2 release. Agreed on that?
👍 Let's do it.
> The simplest approach would be {FPGA}

*simplest*
I think we should start with a wooden board and a 5V power supply and put like 10 different dev boards on it, then figure out how we are going to program and evaluate them. I'm currently trying to get the ESP8266 port of BMP to work, then we'd have wireless BMPs for cheap and could connect them to Ze Klaot™. No platform-dependent USB mess (cc @ekiwi), only one 5V power rail.
Don't get me wrong, I absolutely agree on the FPGA as a vision, but I think the next logical step would be to deploy the unit tests as part of CI. That actually involves a lot of non-embedded work as well.
> Is there a strategy to split the unittest into smaller chunks which can be tested individually?
Well, the typical way would be to group the tests into test suites which can then be combined or run individually. Unfortunately this has not been implemented in the xpcc unittest tools yet.
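Just to illustrate the idea, here is a rough sketch of what grouping into individually runnable suites could look like — plain C++ with made-up names, not the existing xpcc unittest API:

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical suite registry (illustration only): each suite owns its test
// functions and can be run on its own, so flash-limited targets like the
// ATmegas only need to carry one chunk at a time.
using TestFn = std::function<void()>;
std::map<std::string, std::vector<TestFn>> suites;

void registerTest(const std::string& suite, TestFn test)
{
    suites[suite].push_back(std::move(test));
}

void runSuite(const std::string& suite)
{
    std::printf("Running suite '%s' (%zu tests)\n", suite.c_str(), suites[suite].size());
    for (const auto& test : suites[suite]) { test(); }
}

int main()
{
    registerTest("drivers",    []{ /* e.g. the bmp085_test cases */ });
    registerTest("processing", []{ /* e.g. the resumable_test cases */ });

    runSuite("processing");                              // run a single chunk...
    for (const auto& s : suites) { runSuite(s.first); }  // ...or everything
    return 0;
}
```

The generated runner could of course also do the selection at build time instead of at runtime, but the grouping idea stays the same.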
> For the pinout issue, the simplest approach would be:
> `DUT <-> FPGA as a static switch matrix <-> Sensors`
I like that idea :smile:
> Well, the typical way would be to group the tests into test suites
I'd first like to convert the runner templates to Jinja2 templates, so that the iterations are done with Jinja ('cause of yak shaving).
The unittest creates a blob which is poured in here. But I have not yet found out how (technically) to integrate the Jinja2Template Builder into `unittest.py`. Any suggestions?
I ran the unittests on an STM32L4 Nucleo board (after disabling one test):
```
Unittests (May 14 2017, 15:47:59)
FAIL: bmp085_test:192 : true == false
FAIL: bmp085_test:193 : true == false
FAIL: bmp085_test:192 : true == false
FAIL: bmp085_test:193 : true == false
FAIL: bmp085_test:192 : true == false
FAIL: bmp085_test:193 : true == false
FAIL: resumable_test:904 : 4 == 1
Failed 7 of 3847 tests
FAIL!
```
I will look into the bmp085 issues.
I changed resumable_test:904

```diff
- TEST_ASSERT_TRUE(sizeof(result) == 1);
+ TEST_ASSERT_EQUALS(sizeof(result), 1U);
```

so that the unittest spits out what's expected and what the actual result is. @salkinium What's wrong there?
Ah. The assumption I made for `ResumableResult<T>` was that this is an object that lives either on the stack or in registers, but is rarely (never?) placed in memory. Therefore I used a `uint_fast8_t` to describe the state. This should therefore be:

```diff
- TEST_ASSERT_TRUE(sizeof(result) == 1);
+ TEST_ASSERT_EQUALS(sizeof(result), sizeof(uint_fast8_t));
```
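For illustration only (a stand-in struct, not the real `ResumableResult` definition): `uint_fast8_t` is 1 byte on AVR but typically 4 bytes on Cortex-M and on the host, which is exactly why the hard-coded `== 1` failed on the Nucleo:

```cpp
#include <cstdint>
#include <cstdio>

// Stand-in for a state-only result type (illustration, not the xpcc code).
struct StateOnlyResult
{
    uint_fast8_t state;
};

int main()
{
    // avr-gcc: both print 1; arm-none-eabi-gcc (Cortex-M): typically both print 4.
    // Asserting sizeof(result) == 1 is therefore not portable, while comparing
    // against sizeof(uint_fast8_t) is.
    std::printf("sizeof(uint_fast8_t)    = %zu\n", sizeof(uint_fast8_t));
    std::printf("sizeof(StateOnlyResult) = %zu\n", sizeof(StateOnlyResult));
    return 0;
}
```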
Finally, that's the result on a Nucleo L476RG board:
```
Unittests (May 14 2017, 18:50:53)
Machine: usrs-MacBook-Pro.local
User: usr
Os: Mac 10.11.6 (x86_64)
Compiler: arm-none-eabi-g++ (GNU Tools for ARM Embedded Processors 6-2017-q1-update) 6.3.1 20170215 (release) [ARM/embedded-6-branch revision 245512]
Local Git User:
Name: Sascha Schade (strongly-typed)
Email: xxxxxxx@gmail.com
Last Commit:
SHA: b70e0a51a74dc33e0f24afae1bd490437625ae97
Abbreviated SHA: b70e0a51
Subject: [tests] Include build info for unittest.
Author:
Name: Sascha Schade (strongly-typed)
Email: xxxxxxx@gmail.com
Date: Sun May 14 18:36:13 2017 +0200
Timestamp: 1494779773
Committer:
Name: Sascha Schade (strongly-typed)
Email: xxxxxxx@gmail.com
Date: Sun May 14 18:36:13 2017 +0200
Timestamp: 1494779773
File Status:
Modified: 3
Added: 0
Deleted: 0
Renamed: 0
Copied: 0
Untracked: 0
Passed 3847 tests
OK!
```
Actually, it runs for a few seconds!
Steps to reproduce:
```sh
git clone https://github.com/strongly-typed/xpcc.git
cd xpcc
git checkout fix/release_tests
```
Change the target to `nucleo_l476rg` in `src/unittest_stm32.cfg`. I did not want to commit that. Should it be adapted to `scons unittest target=nucleo_l476rg`?
```sh
scons unittest target=stm32
openocd -f board/stm32l4discovery.cfg -s tools/openocd -c " init" -c " reset halt" -c " flash write_image erase build/unittest_stm32/executable.elf" -c " reset halt" -c " mww 0xE000EDF0 0xA05F0000" -c " shutdown"
```
> Should it be adapted to `scons unittest target=nucleo_l476rg`?
Yes, please. It makes more sense to provide a board here than a device, since the xpcc logger and programmer are already defined that way. This should then enable unittest execution on all boards.
```sh
picocom -b 115200 --imap lfcrlf /dev/tty.usbmodem1423
openocd -f board/stm32l4discovery.cfg -s tools/openocd -c " init" -c " reset halt"
IP=`ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}'`
docker run -e OPENOCD_IP=${IP} -e BOARD=nucleo_l476rg stronglytyped/xpcc-unittest-stm32
```
Inside the Docker container, the toolchain is set up and the unittests from the develop branch are checked out, compiled and flashed to the board. The unittests start running after flashing.
```
Unittests (Jun 30 2017, 00:35:02)
Machine: 13c382e98832
User: root
Os: Ubuntu 16.04 xenial
Compiler: arm-none-eabi-g++ (GNU Tools for ARM Embedded Processors 6-2017-q2-update) 6.3.1 20170620 (release) [ARM/embedded-6-branch revision 249437]
Local Git User:
Name:
Email:
Last Commit:
SHA: faca725834af311436f78f182eca88e86ca10d0c
Abbreviated SHA: faca725
Subject: [travis] Update arm-none-eabi-gcc to 2017q2.
Passed 3847 tests
OK!
```
Docker containers are here.
Very nice, just in time for the 2017q2 release this weekend!
Thanks, great to hear that. Finishing something for the 2017q2 release was my goal ... after some time of inactivity. Were you able to reproduce it?
BTW, I won't be able to fix the I2C for the 2017q2 release, but definitely after that.