RIOT-OS / Release-Specs

Specification for RIOT releases and corresponding test configurations

Release 2019.07 - RC1 #128

Closed MrKevinWeiss closed 4 years ago

MrKevinWeiss commented 5 years ago

This issue lists the status of all tests for the Release Candidate 1 of the 2019.07 release.

Specs tested:

MrKevinWeiss commented 5 years ago

As tested just before RC1, @cladmi found the following issues on the boards:

Board Test Results ``` #### arduino-mega2560/failuresummary.md Failures during test: - [tests/bitarithm_timings](tests/bitarithm_timings/test.failed) - [tests/event_wait_timeout](tests/event_wait_timeout/test.failed) - [tests/evtimer_msg](tests/evtimer_msg/test.failed) - [tests/isr_yield_higher](tests/isr_yield_higher/test.failed) - [tests/libfixmath](tests/libfixmath/test.failed) - [tests/periph_gpio](tests/periph_gpio/test.failed) - [tests/pipe](tests/pipe/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/pkg_jsmn](tests/pkg_jsmn/test.failed) - [tests/pkg_libb2](tests/pkg_libb2/test.failed) - [tests/pkg_lora-serialization](tests/pkg_lora-serialization/test.failed) - [tests/pkg_micro-ecc](tests/pkg_micro-ecc/test.failed) - [tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) - [tests/trickle](tests/trickle/test.failed) #### cc2650-launchpad/failuresummary.md Failures during test: - [tests/driver_grove_ledbar](tests/driver_grove_ledbar/test.failed) - [tests/driver_my9221](tests/driver_my9221/test.failed) - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/periph_timer](tests/periph_timer/test.failed) - [tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) #### frdm-k64f/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/periph_timer](tests/periph_timer/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) #### frdm-kw41z/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - 
[tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) - [tests/thread_flags](tests/thread_flags/test.failed) - [tests/xtimer_periodic_wakeup](tests/xtimer_periodic_wakeup/test.failed) - [tests/xtimer_usleep](tests/xtimer_usleep/test.failed) #### msba2/failuresummary.md Failures during test: - [tests/driver_grove_ledbar](tests/driver_grove_ledbar/test.failed) - [tests/driver_my9221](tests/driver_my9221/test.failed) - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sixlowpan](tests/gnrc_sixlowpan/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/libfixmath](tests/libfixmath/test.failed) - [tests/libfixmath_unittests](tests/libfixmath_unittests/test.failed) - [tests/lwip_sock_tcp](tests/lwip_sock_tcp/test.failed) - [tests/periph_rtc](tests/periph_rtc/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/pkg_libcose](tests/pkg_libcose/test.failed) - [tests/pkg_qdsa](tests/pkg_qdsa/test.failed) - [tests/pkg_relic](tests/pkg_relic/test.failed) - [tests/pkg_tweetnacl](tests/pkg_tweetnacl/test.failed) - [tests/pthread_tls](tests/pthread_tls/test.failed) - [tests/xtimer_periodic_wakeup](tests/xtimer_periodic_wakeup/test.failed) #### mulle/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/periph_timer](tests/periph_timer/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/pkg_littlefs](tests/pkg_littlefs/test.failed) - [tests/pkg_spiffs](tests/pkg_spiffs/test.failed) - [tests/shell](tests/shell/test.failed) #### nrf52dk/failuresummary.md Failures during compilation: - [tests/mcuboot](tests/mcuboot/compilation.failed) Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - 
[tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/pthread_rwlock](tests/pthread_rwlock/test.failed) #### nucleo-f103rb/failuresummary.md Failures during test: - [tests/driver_grove_ledbar](tests/driver_grove_ledbar/test.failed) - [tests/driver_hd44780](tests/driver_hd44780/test.failed) - [tests/driver_my9221](tests/driver_my9221/test.failed) - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/xtimer_periodic_wakeup](tests/xtimer_periodic_wakeup/test.failed) - [tests/xtimer_usleep_short](tests/xtimer_usleep_short/test.failed) #### pba-d-01-kw2x/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) #### sltb001a/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) - [tests/ps_schedstatistics](tests/ps_schedstatistics/test.failed) - [tests/xtimer_periodic_wakeup](tests/xtimer_periodic_wakeup/test.failed) #### stm32f3discovery/failuresummary.md Failures during test: - [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed) - [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed) - [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed) - [tests/pkg_fatfs_vfs](tests/pkg_fatfs_vfs/test.failed) ```
fjmolinas commented 5 years ago

For boards failing with tests/ps_schedstatistics, a fix can come from:

For:

- [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed)
- [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed)
- [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed)

I think they fail because they need to be run with sudo (because of ethos). @kaspar030 has done some work with the CI to enable running ethos without sudo (https://github.com/RIOT-OS/RIOT/pull/11816 is one of the PR's), we needed and have been using this to tests https://github.com/RIOT-OS/RIOT/pull/11818.

MrKevinWeiss commented 5 years ago

Compared to the last RC1, no new bugs seem to have been introduced.

cladmi commented 5 years ago

For:

- [tests/gnrc_ipv6_ext](tests/gnrc_ipv6_ext/test.failed)
- [tests/gnrc_rpl_srh](tests/gnrc_rpl_srh/test.failed)
- [tests/gnrc_sock_dns](tests/gnrc_sock_dns/test.failed)

I think they fail because they need to be run with sudo (because of ethos). @kaspar030 has done some work with the CI to enable running ethos without sudo (RIOT-OS/RIOT#11816 is one of the PR's), we needed and have been using this to tests RIOT-OS/RIOT#11818.

Indeed. The tests are the result of running `make flash test` without root or manual setup, as currently done in CI. As `TEST_ON_CI_WHITELIST += all` is not set for these tests, there is no failure in murdock.

I do not set `TEST_ON_CI_WHITELIST` because I want to see what does not currently work through `make test` alone. So the results reflect the current state of the test automation, even if tests could succeed when run differently.
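For reference, opting a test into on-CI execution is a one-line addition in the test application's Makefile (a sketch, assuming the variable is consumed by murdock as described above):

```make
# Hypothetical opt-in: let the CI (murdock) run this test on hardware.
TEST_ON_CI_WHITELIST += all
```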

It is stupid automated testing :)


The tests were run using master cb57c6ff1 plus some other commits required to run on my setup (multiple boards connected and no local toolchain). These should not affect testing as they only modify flash/reset.

I checked the output and I also had other issues on my test machine:

Despite being installed, tcpdump and bridge were not in the regular user's PATH, so I added symlinks. This may come up later when tests must be run without sudo.

The test result should not change but the output should.
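The symlink workaround can be sketched like this (a scratch directory stands in for /usr/local/bin so the sketch does not need root; the /usr/sbin paths are assumptions about where the distribution installed the tools):

```shell
# Make tools that live in /usr/sbin visible to a regular user's PATH.
# A temp dir stands in for /usr/local/bin here to avoid needing root.
BIN_DIR="$(mktemp -d)"
for tool in tcpdump bridge; do
    src="/usr/sbin/$tool"
    if [ -x "$src" ]; then
        ln -s "$src" "$BIN_DIR/$tool"
    fi
done
export PATH="$BIN_DIR:$PATH"
```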

cladmi commented 5 years ago

I will run the task 01-ci -- Task #01 - Compile test with BUILD_IN_DOCKER=1 and also with TOOLCHAIN=llvm.

```
sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
riot/riotbuild      latest              bc9d9f175587        7 months ago        3.58GB

sudo docker pull riot/riotbuild:latest
latest: Pulling from riot/riotbuild
Digest: sha256:5218a0692039934276c98c44b70bbb1cc8bc31a7f171670e625cecd2e3f0fc24
Status: Image is up to date for riot/riotbuild:latest
```

I can also run the automated test suite on different boards as I already did.

MrKevinWeiss commented 5 years ago

@kb2ma Would you be interested in testing task 09-coap by chance?

miri64 commented 5 years ago

Please run the failing sudo tests regardless of whether there will be execution without root in the CI or not (since that requires some bootstrapping for scapy, rather not for this release), manually. That is the point why they are included in the release specs ;-)

kb2ma commented 5 years ago

Yes, happy to run 09-coap.

kb2ma commented 5 years ago

@leandrolanzieri, I don't want to interrupt if you plan to continue with 09-coap. Let me know.

leandrolanzieri commented 5 years ago

@kb2ma sorry, yes, almost done with them

leandrolanzieri commented 5 years ago

Everything looks good for 09-coap and 06-single-hop-udp

MrKevinWeiss commented 5 years ago

07-Multihop had 0% packet loss and no packet buffer problems!


```
# task 1
100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max = 36.106/41.166/46.321 ms

# task 3
100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max = 0.364/0.364/0.365 ms
```
MrKevinWeiss commented 5 years ago

05

```
# task 1
round-trip min/avg/max = 0.309/0.829/2.558 ms

# task 3
round-trip min/avg/max = 0.325/0.830/2.391 ms
```
cladmi commented 5 years ago

For TOOLCHAIN=llvm with BUILD_IN_DOCKER=1 I got a lot of failures because llvm generates larger firmware images. For example:

RIOT_CI_BUILD=1 BOARD=blackpill TOOLCHAIN=llvm BUILD_IN_DOCKER=1 make -C examples/asymcute_mqttsn/ ``` RIOT_CI_BUILD=1 BOARD=blackpill TOOLCHAIN=llvm BUILD_IN_DOCKER=1 make -C examples/asymcute_mqttsn/ make: Entering directory '/home/harter/work/git/worktree/riot_release_llvm/examples/asymcute_mqttsn' Launching build container using image "riot/riotbuild:latest". sudo docker run --rm -t -u "$(id -u)" \ -v '/usr/share/zoneinfo/Europe/Berlin:/etc/localtime:ro' -v '/home/harter/work/git/worktree/riot_release_llvm:/data/riotbuild/riotbase' -e 'RIOTBASE=/data/riotbuild/riotbase' -e 'CCACHE_BASEDIR=/data/riotbuild/riotbase' -e 'BUILD_DIR=/data/riotbuild/riotbase/build' -e 'RIOTPROJECT=/data/riotbuild/riotbase' -e 'RIOTCPU=/data/riotbuild/riotbase/cpu' -e 'RIOTBOARD=/data/riotbuild/riotbase/boards' -e 'RIOTMAKE=/data/riotbuild/riotbase/makefiles' -v /home/harter/.gitcache:/data/riotbuild/gitcache -e GIT_CACHE_DIR=/data/riotbuild/gitcache -v /home/harter/work/git/RIOT/.git:/home/harter/work/git/RIOT/.git \ -e 'BOARD=blackpill' -e 'RIOT_CI_BUILD=1' -e 'TOOLCHAIN=llvm' \ -w '/data/riotbuild/riotbase/examples/asymcute_mqttsn/' \ 'riot/riotbuild:latest' make [sudo] password for harter: Building application "asymcute_mqttsn" for "blackpill" with MCU "stm32f1". 
/opt/gcc-arm-none-eabi-7-2018-q2-update/bin/../lib/gcc/arm-none-eabi/7.3.1/../../../../arm-none-eabi/bin/ld: /data/riotbuild/riotbase/examples/asymcute_mqttsn/bin/blackpill/asymcute_mqttsn.elf section `.text' will not fit in region `rom' /opt/gcc-arm-none-eabi-7-2018-q2-update/bin/../lib/gcc/arm-none-eabi/7.3.1/../../../../arm-none-eabi/bin/ld: region `rom' overflowed by 900 bytes collect2: error: ld returned 1 exit status /data/riotbuild/riotbase/Makefile.include:475: recipe for target '/data/riotbuild/riotbase/examples/asymcute_mqttsn/bin/blackpill/asymcute_mqttsn.elf' failed make: *** [/data/riotbuild/riotbase/examples/asymcute_mqttsn/bin/blackpill/asymcute_mqttsn.elf] Error 1 /home/harter/work/git/worktree/riot_release_llvm/makefiles/docker.inc.mk:266: recipe for target '..in-docker-container' failed make: *** [..in-docker-container] Error 2 make: Leaving directory '/home/harter/work/git/worktree/riot_release_llvm/examples/asymcute_mqttsn' ```

I will limit to the boards we run in CI for the next run.
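The "region `rom' overflowed by 900 bytes" error above is simple arithmetic: the image must fit into the board's flash region, whose size comes from the linker script. A sketch with hypothetical sizes:

```python
def rom_overflow(image_size: int, rom_size: int) -> int:
    """Bytes by which the image exceeds the rom region (0 if it fits)."""
    return max(0, image_size - rom_size)

# Hypothetical: a 64 KiB rom region and an image 900 bytes too large,
# matching the "overflowed by 900 bytes" message in the log above.
print(rom_overflow(64 * 1024 + 900, 64 * 1024))  # → 900
```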

MrKevinWeiss commented 5 years ago

Hmm, it seems that I am having some problems on Task #02 - ICMPv6 echo unicast addresses on iotlab-m3 (default route).

Can someone confirm?

cladmi commented 5 years ago

I did a run of scan-build-analyze for the boards tested with llvm using

```
BUILD_IN_DOCKER=1 TOOLCHAIN=llvm ./dist/tools/compile_and_test_for_board/compile_and_test_for_board.py --compile-targets scan-build-analyze --no-test . iotlab-m3
```

I used a sed hack to split the warnings into those in RIOT and those in packages; otherwise the ones in packages would be repeated for each board/application.

It currently reported ~130 warnings in RIOT (including deprecation warnings for ubjson). I will try to open a dedicated issue for the ones that look like bugs.

https://ci-ilab.imp.fu-berlin.de/job/RIOT%20scan-build-analyze/16/riot_scan_build/
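The split can be done without sed too; a minimal sketch (the `build/pkg/` path prefix is an assumption about where package sources end up in the build tree):

```python
def split_warnings(paths):
    """Partition scan-build warning locations into RIOT vs package code."""
    riot, pkgs = [], []
    for path in paths:
        # Warnings whose file lives under build/pkg/ come from a package.
        (pkgs if "build/pkg/" in path else riot).append(path)
    return riot, pkgs

warnings = [
    "sys/ubjson/ubjson-write.c:42",          # deprecation warning in RIOT
    "build/pkg/libcose/src/cose_sign.c:10",  # warning inside a package
]
riot, pkgs = split_warnings(warnings)
print(len(riot), len(pkgs))  # → 1 1
```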

leandrolanzieri commented 5 years ago

Hmm, it seems that I am having some problems on Task #02 - ICMPv6 echo unicast addresses on iotlab-m3 (default route).

Can someone confirm?

It's working for me, 1% packet loss.

cladmi commented 5 years ago

Please run the failing sudo tests regardless of whether there will be execution without root in the CI or not (since that requires some bootstrapping for scapy, rather not for this release), manually. That is the point why they are included in the release specs ;-)

In theory, all the non-automated tests should also be executed, not only the ones with a test target.


From https://github.com/RIOT-OS/RIOT/pull/11821 I noticed that currently, the first issue with CI is not sudo but that scapy is not installed on the workers and on the test RPis.

kaspar030 commented 5 years ago

I noticed that currently, the first issue with CI is not sudo but that scapy is not installed on the workers and on the test RPis.

Yup. This is unfortunately not a matter of just installing scapy, as scapy wants to open a raw socket, which only root can do.

miri64 commented 5 years ago

[…] as scapy wants to open a raw socket, which only root can do.

Will fix very soon (but not in a backportable state I fear)

kaspar030 commented 5 years ago

Will fix very soon (but not in a backportable state I fear)

There might also be a workaround using ambient capabilities: https://stackoverflow.com/a/47982075/5910429 If only the raw socket capability is missing, we can create a wrapper binary that allows only that.
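The ambient-capabilities idea boils down to granting only CAP_NET_RAW to the wrapper (e.g. `sudo setcap cap_net_raw+ep wrapper` — the exact invocation is an assumption). Whether the current process already holds that capability can be read from /proc (Linux-only sketch; the capability number comes from `<linux/capability.h>`):

```python
CAP_NET_RAW = 13  # capability number from <linux/capability.h>

def has_cap_net_raw(cap_eff_hex: str) -> bool:
    """cap_eff_hex: value of the CapEff line in /proc/self/status."""
    return bool(int(cap_eff_hex, 16) & (1 << CAP_NET_RAW))

def current_cap_eff() -> str:
    """Read the effective capability set of this process."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                return line.split()[1]
    return "0"

if __name__ == "__main__":
    print("CAP_NET_RAW:", has_cap_net_raw(current_cap_eff()))
```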

miri64 commented 5 years ago

There is a TUN/TAP-Wrapper hidden inside scapy's socket abstraction I'd like to experiment with tomorrow. If that works without root I'll rather go for that than some permission foobar.

miri64 commented 5 years ago

(all our scapy raw sockets use either TUN or TAP interfaces so far, so if they have user permissions granted at creation—see ip tuntap help—this seems to me the more obvious way)
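As a setup sketch of that "user permissions granted at creation" (root is needed once to create the interface; after that the named user owns the application end — see `ip tuntap help`; `tap0` is a placeholder name):

```shell
# Create a TAP whose application end is usable without root.
sudo ip tuntap add dev tap0 mode tap user "$USER"
sudo ip link set tap0 up
```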

kaspar030 commented 5 years ago

There is a TUN/TAP-Wrapper hidden inside scapy's socket abstraction I'd like to experiment with tomorrow. If that works without root I'll rather go for that than some permission foobar.

Totally agreed! And let me know if I can help. (I just stumbled over the workaround and wanted to share it.)

MrKevinWeiss commented 5 years ago

So I did 05-task4: 0% packet loss, round-trip min/avg/max = 122.022/140.804/158.292 ms

MrKevinWeiss commented 5 years ago

I am getting some pretty high packet loss for 04-Task #01 - ICMPv6 link-local echo with iotlab-m3, but only when running locally: between 10% and 50%.

- I tried other channels and it still seems to fail.
- I tried on IoT-LAB and it was fine.
- I tried switching m3 boards and it still fails.
- I tried increasing the interval from 10 to 50 and it really helped.
- I tried samr to samr and it still failed.

I was using the tests/gnrc_udp to test.

@miri64, @cgundogan, any ideas? @PeterKietzmann said it could be the Hamburg air, or maybe it is our M3 boards that have some issue, maybe floating pins or something? I will note that the USB is a little bit sensitive to position.

miri64 commented 5 years ago

@PeterKietzmann said it could be the Hamburg air or maybe it is our M3 boards that have some issue, maybe floating pins or something?

If it works on IoT-LAB (have you tried different sites?) or Varduz, it seems to be the Hamburg air. We had problems in the past where tests over radio conducted at HAW Hamburg failed while they worked at other places. Maybe there is something jamming the spectrum in your building.

MrKevinWeiss commented 5 years ago

Well, I am trying to run the stress test on the same nodes in IoT-LAB and it seems like an interval of around 7 to 8 ms makes it unreadable... is this expected? Could it be an issue with tests/gnrc_udp?

MrKevinWeiss commented 5 years ago

It seems like the ping6 command is asynchronous, meaning that -i is the time between sends, not the time to wait after a reply before the next send. If this is the case, either the test should be adapted or the ping6 command should be adapted to handle that (maybe add a -s flag).

miri64 commented 5 years ago

It seems like the ping6 command is asynchronous, meaning that -i is the time between sends, not the time to wait after a reply before the next send. If this is the case, either the test should be adapted or the ping6 command should be adapted to handle that (maybe add a -s flag).

Didn't we do that already last time? If you are referring to your stress test, which test parameters should be changed? They are not part of the release tests.

miri64 commented 5 years ago

What would the -s flag do?

miri64 commented 5 years ago

ping is supposed to be asynchronous. If we wait for the next echo response to come in, the delay between packets is not -i, but -i + RTT.
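Numerically (illustrative values only): with -i 10 and a 25 ms RTT, an asynchronous ping keeps 10 ms between sends, while waiting for each reply would stretch that to 35 ms:

```python
def packet_spacing_ms(interval_ms: float, rtt_ms: float,
                      asynchronous: bool) -> float:
    """Effective time between transmitted echo requests."""
    # Asynchronous: send every interval regardless of replies.
    # Synchronous: wait for the reply (RTT), then wait the interval.
    return interval_ms if asynchronous else interval_ms + rtt_ms

print(packet_spacing_ms(10, 25, asynchronous=True))   # → 10
print(packet_spacing_ms(10, 25, asynchronous=False))  # → 35
```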

miri64 commented 5 years ago

There is a TUN/TAP-Wrapper hidden inside scapy's socket abstraction I'd like to experiment with tomorrow. If that works without root I'll rather go for that than some permission foobar.

Totally agreed! And let me know if I can help. (I just stumbled over the workaround and wanted to share it.)

Sadly, this does not work this way :confused:. Opening a TAP with user rights is only allowed for the application end of the TAP interface (the part we usually use with netdev_tap and ethos), so if you try to open two application ends (one for scapy, one for netdev_tap/ethos) you will get an EBUSY for one of the two. TAPs are just supposed to be used as (app, interface)-pair. The interface end is just a normal interface to the OS and thus can only be accessed with raw sockets.

cladmi commented 5 years ago

There is a TUN/TAP-Wrapper hidden inside scapy's socket abstraction I'd like to experiment with tomorrow. If that works without root I'll rather go for that than some permission foobar.

Totally agreed! And let me know if I can help. (I just stumbled over the workaround and wanted to share it.)

Sadly, this does not work this way confused. Opening a TAP with user rights is only allowed for the application end of the TAP interface (the part we usually use with netdev_tap and ethos), so if you try to open two application ends (one for scapy, one for netdev_tap/ethos) you will get an EBUSY for one of the two. TAPs are just supposed to be used as (app, interface)-pair. The interface end is just a normal interface to the OS and thus can only be accessed with raw sockets.

It could just use another input to ethos, a packet-based UNIX socket or something, and allow tap, socket, or maybe even tap+socket. For raw data injection, the buffers just need to go through ethos encapsulation, I guess. Would you need both at the same time, or is that not needed?

Running tcpdump without root may also require an additional pipe or something, I think; if possible, one that does not block when nobody is listening… Not sure what is available.

This could be something I am interested in implementing.

cladmi commented 5 years ago

For 01-ci - Task #01 I could compile correctly with BUILD_IN_DOCKER=1 and the ./task01.py script with no TOOLCHAIN specified (so gnu), with one re-run due to a network failure or something similar.

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin BUILD_IN_DOCKER=1 ./task01.py /home/harter/work/git/worktree/riot_release
```


However, I noticed that since it is done through buildtest, which is executed completely in docker, it hides issues with some applications that use the host toolchain, like tests/mcuboot (https://github.com/RIOT-OS/RIOT/pull/11083).

DOCKER="sudo docker" BOARDS=nrf52dk PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin BUILD_IN_DOCKER=1 make -C tests/mcuboot/ all ``` DOCKER="sudo docker" BOARDS=nrf52dk PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin BUILD_IN_DOCKER=1 make -C tests/mcuboot/ all make: Entering directory '/home/harter/work/git/worktree/riot_release_llvm/tests/mcuboot' Launching build container using image "riot/riotbuild:latest". sudo docker run --rm -t -u "$(id -u)" \ -v '/usr/share/zoneinfo/Europe/Berlin:/etc/localtime:ro' -v '/home/harter/work/git/worktree/riot_release_llvm:/data/riotbuild/riotbase' -e 'RIOTBASE=/data/riotbuild/riotbase' -e 'CCACHE_BASEDIR=/data/riotbuild/riotbase' -e 'BUILD_DIR=/data/riotbuild/riotbase/build' -e 'RIOTPROJECT=/data/riotbuild/riotbase' -e 'RIOTCPU=/data/riotbuild/riotbase/cpu' -e 'RIOTBOARD=/data/riotbuild/riotbase/boards' -e 'RIOTMAKE=/data/riotbuild/riotbase/makefiles' -v /home/harter/.gitcache:/data/riotbuild/gitcache -e GIT_CACHE_DIR=/data/riotbuild/gitcache -v /home/harter/work/git/RIOT/.git:/home/harter/work/git/RIOT/.git \ -e 'BOARDS=nrf52dk' \ -w '/data/riotbuild/riotbase/tests/mcuboot/' \ 'riot/riotbuild:latest' make all Building application "tests_mcuboot" for "nrf52dk" with MCU "nrf52". 
"make" -C /data/riotbuild/riotbase/boards/nrf52dk "make" -C /data/riotbuild/riotbase/boards/common/nrf52xxxdk "make" -C /data/riotbuild/riotbase/core "make" -C /data/riotbuild/riotbase/cpu/nrf52 "make" -C /data/riotbuild/riotbase/cpu/cortexm_common "make" -C /data/riotbuild/riotbase/cpu/cortexm_common/periph "make" -C /data/riotbuild/riotbase/cpu/nrf52/periph "make" -C /data/riotbuild/riotbase/cpu/nrf5x_common "make" -C /data/riotbuild/riotbase/cpu/nrf5x_common/periph "make" -C /data/riotbuild/riotbase/drivers "make" -C /data/riotbuild/riotbase/drivers/periph_common "make" -C /data/riotbuild/riotbase/sys "make" -C /data/riotbuild/riotbase/sys/auto_init "make" -C /data/riotbuild/riotbase/sys/newlib_syscalls_default "make" -C /data/riotbuild/riotbase/sys/stdio_uart text data bss dec hex filename 7700 116 2540 10356 2874 /data/riotbuild/riotbase/tests/mcuboot/bin/nrf52dk/tests_mcuboot.elf Re-linking for MCUBoot at 0x8000... Signed with /data/riotbuild/riotbase/tests/mcuboot/bin/nrf52dk/key.pem for version 1.1.1+1\ Re-linking for MCUBoot at 0x8000... /bin/sh: 1: arm-none-eabi-gcc: not found /home/harter/work/git/worktree/riot_release_llvm/makefiles/mcuboot.mk:27: recipe for target 'mcuboot' failed make: *** [mcuboot] Error 127 make: Leaving directory '/home/harter/work/git/worktree/riot_release_llvm/tests/mcuboot' ```
MrKevinWeiss commented 5 years ago

Good, so RIOT is good but the tests need some love!

MrKevinWeiss commented 5 years ago

Maybe we can use the following as a template for pasting results:

42-example-test Task 01 ``` paste the valuable test information here for example the output of the terminal of packet loss and times 100 packets transmitted, 100 packets received, 0% packet loss round-trip min/avg/max = 36.106/41.166/46.321 ms ```
MrKevinWeiss commented 5 years ago
04-single-hop-6lowpan-icmp Task 04 ``` 2019-07-12 10:23:29,356 - INFO # 108 bytes from fe80::7b79:4946:539e:75e: icmp_seq=9997 ttl=64 rssi=-24 dBm time=26.359 ms 2019-07-12 10:23:29,454 - INFO # 108 bytes from fe80::7b79:4946:539e:75e: icmp_seq=9998 ttl=64 rssi=-23 dBm time=24.189 ms 2019-07-12 10:23:29,555 - INFO # 108 bytes from fe80::7b79:4946:539e:75e: icmp_seq=9999 ttl=64 rssi=-24 dBm time=24.490 ms 2019-07-12 10:23:30,529 - INFO # 2019-07-12 10:23:30,534 - INFO # --- fe80::7b79:4946:539e:75e PING statistics --- 2019-07-12 10:23:30,539 - INFO # 10000 packets transmitted, 9998 packets received, 0% packet loss 2019-07-12 10:23:30,543 - INFO # round-trip min/avg/max = 18.471/23.406/37.488 ms ```
MrKevinWeiss commented 5 years ago

It seems that packet loss and duplication are a bit high here. @miri64, does this seem correct?

04-single-hop-6lowpan-icmp Task 05 ``` #samr pinging 2019-07-12 11:01:50,376 - INFO # 58 bytes from fe80::212:4b00:60d:b2db: icmp_seq=991 ttl=64 rssi=-66 dBm time=10.912 ms 2019-07-12 11:01:50,576 - INFO # 58 bytes from fe80::212:4b00:60d:b2db: icmp_seq=993 ttl=64 rssi=-66 dBm time=9.651 ms 2019-07-12 11:01:50,677 - INFO # 58 bytes from fe80::212:4b00:60d:b2db: icmp_seq=994 ttl=64 rssi=-66 dBm time=9.968 ms 2019-07-12 11:01:51,180 - INFO # 58 bytes from fe80::212:4b00:60d:b2db: icmp_seq=999 ttl=64 rssi=-65 dBm time=9.341 ms 2019-07-12 11:01:52,169 - INFO # 2019-07-12 11:01:52,172 - INFO # --- ff02::1 PING statistics --- 2019-07-12 11:01:52,178 - INFO # 1000 packets transmitted, 871 packets received, 12% packet loss 2019-07-12 11:01:52,182 - INFO # round-trip min/avg/max = 9.321/10.455/15.829 ms #remote pinging 2019-07-12 10:58:36,430 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=993 ttl=64 rssi=9 dBm time=11.151 ms 2019-07-12 10:58:36,432 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=993 ttl=64 rssi=9 dBm time=17.852 ms (DUP!) 2019-07-12 10:58:36,621 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=995 ttl=64 rssi=9 dBm time=9.554 ms 2019-07-12 10:58:36,720 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=996 ttl=64 rssi=9 dBm time=9.221 ms 2019-07-12 10:58:36,822 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=997 ttl=64 rssi=9 dBm time=10.517 ms 2019-07-12 10:58:36,923 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=998 ttl=64 rssi=9 dBm time=10.834 ms 2019-07-12 10:58:37,019 - INFO # 58 bytes from fe80::7b64:c7d:9f31:2ee: icmp_seq=999 ttl=64 rssi=9 dBm time=9.231 ms 2019-07-12 10:58:38,000 - INFO # 2019-07-12 10:58:38,015 - INFO # --- ff02::1 PING statistics --- 2019-07-12 10:58:38,019 - INFO # 1000 packets transmitted, 874 packets received, 40 duplicates, 12% packet loss 2019-07-12 10:58:38,021 - INFO # round-trip min/avg/max = 9.214/10.694/25.830 ms ```
MrKevinWeiss commented 5 years ago

It seems the test failed: when the size is 100 it cannot transmit any packets. @miri64 @cgundogan, any idea why? This happens both ways. It also only seems to happen between samr21 and remote; it works fine between remote and m3... samr21 and m3 also work OK.

04-single-hop-6lowpan-icmp Task 06 ``` 2019-07-12 11:15:42,848 - INFO # ping6 -c 10 -i 100 -s 50 fe80::212:4b00:60d:b395 2019-07-12 11:15:42,866 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=0 ttl=64 rssi=-67 dBm time=9.895 ms 2019-07-12 11:15:42,968 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=1 ttl=64 rssi=-67 dBm time=10.816 ms 2019-07-12 11:15:43,069 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=2 ttl=64 rssi=-67 dBm time=10.513 ms 2019-07-12 11:15:43,170 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=3 ttl=64 rssi=-67 dBm time=10.527 ms 2019-07-12 11:15:43,272 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=4 ttl=64 rssi=-67 dBm time=10.822 ms 2019-07-12 11:15:43,374 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=5 ttl=64 rssi=-67 dBm time=11.777 ms 2019-07-12 11:15:43,474 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=6 ttl=64 rssi=-67 dBm time=11.141 ms 2019-07-12 11:15:43,575 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=7 ttl=64 rssi=-67 dBm time=10.836 ms 2019-07-12 11:15:43,676 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=8 ttl=64 rssi=-67 dBm time=10.505 ms 2019-07-12 11:15:43,777 - INFO # 58 bytes from fe80::212:4b00:60d:b395: icmp_seq=9 ttl=64 rssi=-67 dBm time=10.503 ms 2019-07-12 11:15:43,778 - INFO # 2019-07-12 11:15:43,781 - INFO # --- fe80::212:4b00:60d:b395 PING statistics --- 2019-07-12 11:15:43,787 - INFO # 10 packets transmitted, 10 packets received, 0% packet loss 2019-07-12 11:15:43,791 - INFO # round-trip min/avg/max = 9.895/10.733/11.777 ms ping6 -c 10 -i 100 -s 100 fe80::212:4b00:60d:b395 2019-07-12 11:15:48,528 - INFO # ping6 -c 10 -i 100 -s 100 fe80::212:4b00:60d:b395 2019-07-12 11:15:50,450 - INFO # 2019-07-12 11:15:50,454 - INFO # --- fe80::212:4b00:60d:b395 PING statistics --- 2019-07-12 11:15:50,460 - INFO # 10 packets transmitted, 0 packets received, 100% packet loss ```
miri64 commented 5 years ago

Have you confirmed that it ever worked? The test is marked as experimental. Maybe fragmentation + cc2538 is still problematic? We had similar issues in the beginning with the kw2x radios.

miri64 commented 5 years ago

(I never used the remote nor was involved in the tests, so I'm not sure how "normal" those results are)

miri64 commented 5 years ago

I think @smlng was doing these tests in the past. Maybe he can give some insights.

MrKevinWeiss commented 5 years ago

No, I haven't; I will try on the last release or so. I just wanted to make sure this wasn't something I was doing wrong.

Error codes from remote-revb during tests ``` 2019-07-12 11:34:43,525 - INFO # RFCORE_ASSERT(rssi_val > CC2538_RF_SENSITIVITY) failed at line 340 in _recv()! 2019-07-12 11:34:43,525 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:34:43,527 - INFO # RFCORE_ASSERT(RFCORE_XREG_RXFIFOCNT > 0) failed at line 77 in rfcore_read_byte()! 2019-07-12 11:34:43,528 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:34:43,530 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:34:43,531 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,227 - INFO # RFCORE_ASSERT(len <= RFCORE_XREG_RXFIFOCNT) failed at line 91 in rfcore_read_fifo()! 2019-07-12 11:35:12,229 - INFO # RFCORE_SFR_RFERRF = 0x02 2019-07-12 11:35:12,233 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,235 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,246 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,248 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,251 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,252 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,256 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,268 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,272 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,273 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,277 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,279 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,291 - INFO # RFCORE_ASSERT(RFCORE_XREG_RXFIFOCNT > 0) failed at line 77 in rfcore_read_byte()! 
2019-07-12 11:35:12,293 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,297 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,298 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,301 - INFO # RFCORE_ASSERT(RFCORE_XREG_RXFIFOCNT > 0) failed at line 77 in rfcore_read_byte()! 2019-07-12 11:35:12,303 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:12,306 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:12,307 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:18,281 - INFO # RFCORE_ASSERT(len <= RFCORE_XREG_RXFIFOCNT) failed at line 91 in rfcore_read_fifo()! 2019-07-12 11:35:18,282 - INFO # RFCORE_SFR_RFERRF = 0x02 2019-07-12 11:35:18,286 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:18,287 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:18,302 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:18,303 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:18,307 - INFO # RFCORE_ASSERT(RFCORE_XREG_RXFIFOCNT > 0) failed at line 77 in rfcore_read_byte()! 2019-07-12 11:35:18,308 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:18,313 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:18,316 - INFO # RFCORE_SFR_RFERRF = 0x00 2019-07-12 11:35:18,423 - INFO # RFCORE_ASSERT(RFCORE_XREG_RXFIFOCNT > 0) failed at line 77 in rfcore_read_byte()! 2019-07-12 11:35:18,425 - INFO # RFCORE_SFR_RFERRF = 0x02 2019-07-12 11:35:18,428 - INFO # RFCORE_ASSERT(NOT(flags & RXUNDERF)) failed at line 40 in isr_rfcoreerr()! 2019-07-12 11:35:18,430 - INFO # RFCORE_SFR_RFERRF = 0x00
```
miri64 commented 5 years ago

Have you confirmed that it ever worked? The test is marked as experimental.

If it works a good first step would be a git bisect to determine what caused the regression.

MrKevinWeiss commented 5 years ago

He is on vacation...

miri64 commented 5 years ago

He is on vacation...

Bus-factor ;-P

MrKevinWeiss commented 5 years ago

It also really seems like it cannot handle a stress test; as soon as I try pinging with 2 nodes it dies.

MrKevinWeiss commented 5 years ago

7% packet loss

04-single-hop-6lowpan-icmp Task 07 with m3 and arduino-zero ``` 58 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=997 ttl=64 rssi=59 dBm time=174.723 ms 58 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=998 ttl=64 rssi=59 dBm time=174.720 ms 58 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=999 ttl=64 rssi=59 dBm time=166.556 ms --- ff02::1 PING statistics --- 1000 packets transmitted, 927 packets received, 7% packet loss round-trip min/avg/max = 166.556/176.578/274.780 ms
```
MrKevinWeiss commented 5 years ago

High packet loss

04-single-hop-6lowpan-icmp Task 08 with m3 and arduino-zero ``` 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=812 ttl=64 rssi=59 dBm time=786.739 ms 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=870 ttl=64 rssi=59 dBm time=629.803 ms 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=892 ttl=64 rssi=59 dBm time=629.800 ms 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=907 ttl=64 rssi=59 dBm time=786.786 ms 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=914 ttl=64 rssi=59 dBm time=629.824 ms 108 bytes from fe80::1711:6b10:65f8:8f32: icmp_seq=998 ttl=64 rssi=59 dBm time=546.822 ms --- fe80::1711:6b10:65f8:8f32 PING statistics --- 1000 packets transmitted, 28 packets received, 97% packet loss round-trip min/avg/max = 546.822/711.505/794.887 ms
```