RIOT-OS / Release-Specs

Specification for RIOT releases and corresponding test configurations

Release 2019.07 - RC1 #128

Closed MrKevinWeiss closed 5 years ago

MrKevinWeiss commented 5 years ago

This issue lists the status of all tests for the Release Candidate 1 of the 2019.07 release.

Specs tested:

MrKevinWeiss commented 5 years ago

6% packet loss

04-single-hop-6lowpan-icmp Task 07

```
58 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=995 ttl=64 rssi=53 dBm time=174.719 ms
58 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=997 ttl=64 rssi=53 dBm time=174.723 ms
58 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=998 ttl=64 rssi=53 dBm time=174.725 ms
58 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=999 ttl=64 rssi=53 dBm time=167.351 ms
--- ff02::1 PING statistics ---
1000 packets transmitted, 936 packets received, 6% packet loss
round-trip min/avg/max = 167.351/176.903/296.998 ms
```
MrKevinWeiss commented 5 years ago

High packet loss

04-single-hop-6lowpan-icmp Task 08

```
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=822 ttl=64 rssi=53 dBm time=786.752 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=859 ttl=64 rssi=53 dBm time=786.731 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=896 ttl=64 rssi=53 dBm time=786.833 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=925 ttl=64 rssi=53 dBm time=786.725 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=962 ttl=64 rssi=53 dBm time=786.726 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=998 ttl=64 rssi=53 dBm time=500.493 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=999 ttl=64 rssi=53 dBm time=513.819 ms
--- fe80::7b67:1a62:4c53:566a PING statistics ---
1000 packets transmitted, 33 packets received, 96% packet loss
round-trip min/avg/max = 500.493/751.015/786.928 ms
```
MrKevinWeiss commented 5 years ago

It seems like the high packet loss could be because of the async ping6.

04-single-hop-6lowpan-icmp Task 08 with `-i 500`

```
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=98 ttl=64 rssi=53 dBm time=336.174 ms
108 bytes from fe80::7b67:1a62:4c53:566a: icmp_seq=99 ttl=64 rssi=53 dBm time=326.207 ms
--- fe80::7b67:1a62:4c53:566a PING statistics ---
100 packets transmitted, 99 packets received, 1% packet loss
round-trip min/avg/max = 325.480/327.549/336.174 ms
```
miri64 commented 5 years ago

It seems like the high packet loss could be because of the async ping6.

Looks like it, so the test parameters should be adapted.

MrKevinWeiss commented 5 years ago

I am working on that and will open a PR to update the test.

MrKevinWeiss commented 5 years ago

I think it is fine with -i 350
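
For reference, a minimal sketch of the adapted invocation on the pinging node, using the flag syntax that appears elsewhere in this thread; the count and payload size are illustrative placeholders, not the final task parameters:

```
ping6 -c 1000 -i 350 -s 100 <dst link-local address>
```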

MrKevinWeiss commented 5 years ago

Running the build test on my machine seems to work for everything but the following boards, likely because I do not have the toolchains installed. Not everything is captured, though, since my console truncated the total output (I should have piped it to a file). Since everything works in Docker (see the example after the board list), I am inclined to say it isn't a problem. I may try to run it again over the weekend after installing all the toolchains.

arduino-leonardo
chronos
hifive1
mips-malta
msb-430
msb-430h
pic32-clicker
pic32-wifire
telosb
wsn430-v1_3b
wsn430-v1_4
z1
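
For comparison, a minimal sketch of how one of these boards can be built without a local cross toolchain, using the Docker-based build already mentioned above (board and application chosen purely as an illustration):

```
# Build inside the official RIOT build container instead of with a local toolchain
BUILD_IN_DOCKER=1 make BOARD=hifive1 -C examples/hello-world clean all
```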

Errors:

Outcome:
    success: riotboot, ccn-lite-relay, gnrc_networking_mac, lorawan, lua_REPL, lua_basic, nimble_gatt, nimble_heart_rate_sensor, nimble_scanner, openthread, skald_eddystone, skald_ibeacon, usbus_minimal, board_calliope-mini, board_microbit, build_system_utils, conn_can, cortexm_common_ldscript, cpu_cortexm_address_check, cpu_efm32_features, driver_at30tse75x, driver_bmx055, driver_ds18, driver_motor_driver, driver_mpu9150, driver_srf02, driver_tsl4531x, emb6, gnrc_gomach, gnrc_lwmac, lua_loader, mpu_stack_guard, netstats_l2, nimble_l2cap, nimble_l2cap_server, periph_dac, periph_dma, periph_eeprom, periph_i2c, periph_pwm, periph_qdec, periph_uart_mode, pkg_cmsis-dsp, pkg_fatfs_vfs, pkg_oonf_api, pkg_semtech-loramac, pkg_tinycrypt, pkg_ubasic, puf_sram, riotboot, riotboot_flashwrite, socket_zep, trace, usbus_cdc_ecm, warn_conflict
    failed: arduino_hello-world, asymcute_mqttsn, bindist, cord_ep, cord_epsim, default, dtls-echo, emcute_mqttsn, filesystem, gcoap, gnrc_border_router, gnrc_minimal, gnrc_networking, gnrc_tftp, hello-world, ipc_pingpong, javascript, nanocoap_server, ndn-ping, posix_sockets, riot_and_cpp, saul, timer_periodic_wakeup, bench_msg_pingpong, bench_mutex_pingpong, bench_runtime_coreapis, bench_sched_nop, bench_sizeof_coretypes, bench_thread_flags_pingpong, bench_thread_yield_pingpong, bench_timers, bitarithm_timings, bloom_bytes, buttons, can_trx, cb_mux, cb_mux_bench, cond_order, cpp11_condition_variable, cpp11_mutex, cpp11_thread, driver_ad7746, driver_adcxx1c, driver_ads101x, driver_adt7310, driver_adxl345, driver_apa102, driver_at, driver_at86rf2xx, driver_ata8520e, driver_bh1750, driver_bmp180, driver_bmx280, driver_ccs811, driver_ccs811_full, driver_dht, driver_ds1307, driver_ds3234, driver_ds75lx, driver_dsp0401, driver_dynamixel, driver_enc28j60, driver_encx24j600, driver_feetech, driver_fxos8700, driver_grove_ledbar, driver_hd44780, driver_hdc1000, driver_hih6130, driver_hts221, driver_ina220, driver_io1_xplained, driver_isl29020, driver_isl29125, driver_jc42, driver_kw2xrf, driver_l3g4200d, driver_lc709203f, driver_lis2dh12, driver_lis3dh, driver_lis3mdl, driver_lpd8808, driver_lpsxxx, driver_lsm303dlhc, driver_lsm6dsl, driver_ltc4150, driver_mag3110, driver_mma7660, driver_mma8x5x, driver_mpl3115a2, driver_mq3, driver_my9221, driver_nrf24l01p_lowlevel, driver_nvram_spi, driver_pcd8544, driver_pir, driver_pn532, driver_pulse_counter, driver_rn2xx3, driver_sdcard_spi, driver_sds011, driver_servo, driver_sht1x, driver_sht2x, driver_sht3x, driver_si114x, driver_si70xx, driver_soft_spi, driver_srf04, driver_srf08, driver_sx127x, driver_tcs37727, driver_tmp006, driver_tps6274x, driver_tsl2561, driver_vcnl40x0, driver_veml6070, driver_xbee, eepreg, embunit, event_wait_timeout, events, evtimer_msg, evtimer_underflow, external_module_dirs, fault_handler, float, fmt_print, gnrc_ipv6_ext, gnrc_ipv6_fwd_w_sub, gnrc_ipv6_nib, gnrc_ipv6_nib_6ln, gnrc_mac_timeout, gnrc_ndp, gnrc_netif, gnrc_rpl_srh, gnrc_sixlowpan, gnrc_sixlowpan_frag, gnrc_sock_dns, gnrc_sock_ip, gnrc_sock_udp, gnrc_tcp_client, gnrc_tcp_server, gnrc_udp, irq, isr_yield_higher, l2util, leds, libc_newlib, libfixmath, libfixmath_unittests, lwip, lwip_sock_ip, lwip_sock_tcp, lwip_sock_udp, malloc, mcuboot, memarray, minimal, msg_avail, msg_send_receive, msg_try_receive, mutex_order, mutex_unlock_and_sleep, nanocoap_cli, netdev_test, nhdp, od, periph_adc, periph_cpuid, periph_flashpage, periph_gpio, periph_gpio_arduino, periph_hwrng, periph_pm, periph_rtc, periph_rtt, periph_spi, periph_timer, periph_uart, pipe, pkg_c25519, pkg_cayenne-lpp, pkg_cn-cbor, pkg_fatfs, pkg_hacl, pkg_heatshrink, pkg_jsmn, pkg_libb2, pkg_libcoap, pkg_libcose, pkg_libhydrogen, pkg_littlefs, pkg_lora-serialization, pkg_micro-ecc, pkg_micro-ecc-with-hwrng, pkg_microcoap, pkg_minmea, pkg_monocypher, pkg_nanocbor, pkg_qdsa, pkg_relic, pkg_spiffs, pkg_tiny-asn1, pkg_tinycbor, pkg_tweetnacl, pkg_u8g2, pkg_ucglib, pkg_umorse, posix_semaphore, posix_time, ps_schedstatistics, pthread, pthread_barrier, pthread_cleanup, pthread_condition_variable, pthread_cooperation, pthread_rwlock, pthread_tls, riotboot_hdr, rmutex, rng, saul, sched_testing, shell, slip, sntp, ssp, stdin, struct_tm_utility, sys_arduino, thread_basic, thread_cooperation, thread_exit, thread_flags, thread_flags_xtimer, thread_float, thread_flood, thread_msg, thread_msg_block_race, 
thread_msg_block_w_queue, thread_msg_block_wo_queue, thread_msg_seq, thread_priority_inversion, thread_race, trickle, unittests, xtimer_drift, xtimer_hang, xtimer_longterm, xtimer_msg, xtimer_msg_receive_timeout, xtimer_mutex_lock_timeout, xtimer_now64_continuity, xtimer_periodic_wakeup, xtimer_remove, xtimer_reset, xtimer_usleep, xtimer_usleep_short
Errors: 0 Warnings: 0
MrKevinWeiss commented 5 years ago
MrKevinWeiss commented 5 years ago

03-single-hop-ipv6-icmp Task 01

```
8 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=994 ttl=64 (DUP!)
8 bytes from fe80::4c7d:9eff:fe73:c60f: icmp_seq=995 ttl=64
8 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=995 ttl=64 (DUP!)
8 bytes from fe80::4c7d:9eff:fe73:c60f: icmp_seq=996 ttl=64
8 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=996 ttl=64 (DUP!)
8 bytes from fe80::4c7d:9eff:fe73:c60f: icmp_seq=997 ttl=64
8 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=997 ttl=64 (DUP!)
8 bytes from fe80::4c7d:9eff:fe73:c60f: icmp_seq=998 ttl=64
8 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=998 ttl=64 (DUP!)
8 bytes from fe80::4c7d:9eff:fe73:c60f: icmp_seq=999 ttl=64
--- ff02::1 PING statistics ---
1000 packets transmitted, 1000 packets received, 999 duplicates, 0% packet loss
```
MrKevinWeiss commented 5 years ago
03-single-hop-ipv6-icmp Task 02

```
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=993 ttl=64 time=0.969 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=994 ttl=64 time=0.998 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=995 ttl=64 time=1.032 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=996 ttl=64 time=0.969 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=997 ttl=64 time=0.998 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=998 ttl=64 time=0.976 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=999 ttl=64 time=0.811 ms
--- fe80::4c7d:9eff:fe73:c610 PING statistics ---
1000 packets transmitted, 1000 packets received, 0% packet loss
round-trip min/avg/max = 0.245/0.999/1.272 ms
```
MrKevinWeiss commented 5 years ago
03-single-hop-ipv6-icmp Task 03

```
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3594 ttl=64 time=0.479 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3595 ttl=64 time=0.912 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3596 ttl=64 time=0.332 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3597 ttl=64 time=0.930 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3598 ttl=64 time=0.944 ms
1008 bytes from fe80::4c7d:9eff:fe73:c610: icmp_seq=3599 ttl=64 time=0.919 ms
--- fe80::4c7d:9eff:fe73:c610 PING statistics ---
3600 packets transmitted, 3600 packets received, 0% packet loss
round-trip min/avg/max = 0.183/0.987/2.961 ms
```
cladmi commented 5 years ago

I added an issue for something I found during 01-ci Task #01: buildtest hides some failures (https://github.com/RIOT-OS/RIOT/issues/11842). I will try a fix.

Otherwise, I could compile correctly with BUILD_IN_DOCKER=1 and the GNU toolchain for all boards:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin BUILD_IN_DOCKER=1 ./task01.py /home/harter/work/git/worktree/riot_release

For TOOLCHAIN=llvm, I successfully tested iotlab-m3, native, nrf52dk, mulle, nucleo-f401re, samr21-xpro, and slstk3402a, as done by murdock. I needed to change the boards list in the file, as providing it through an environment variable to docker was not handled. It must be a space-handling issue.

MrKevinWeiss commented 5 years ago

It seems like 03-single-hop-ipv6-icmp Task 04 has some issues, but I don't know what I should be looking for or how making the ping async affects everything.

I was pinging one node from 9 other native instances and started getting some odd messages. I could use some insight into whether this is expected or whether the test needs to be adapted.

03-single-hop-ipv6-icmp Task 04, src1 node with `-i 3`

```
...
error: packet buffer full
error: packet buffer full
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=4807 ttl=64 time=4.322 ms
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=4809 ttl=64 time=1.673 ms
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=4810 ttl=64 time=3.483 ms
...
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=27211 ttl=64 time=1004.441 ms
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=27531 ttl=64 time=2.096 ms
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=27532 ttl=64 time=2.353 ms
...
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=41973 ttl=64 time=3.595 ms
1460 bytes from fe80::f857:f5ff:fe35:207f: icmp_seq=41983 ttl=64 time=2.786 ms
gnrc_netif: possibly lost interrupt.
gnrc_netif: possibly lost interrupt.
gnrc_netif: possibly lost interrupt.
...
```

Finally, the node that I was pinging crashed... I think this is a bug. @miri64, can you look into it?

03-single-hop-ipv6-icmp Task 04, dst node crash

```
> sys/net/gnrc/network_layer/ipv6/gnrc_ipv6.c:409 => 0x804a04d
*** RIOT kernel panic: FAILED ASSERTION.

  pid | name            | state    Q | pri | stack  ( used) | base addr | current
    - | isr_stack       | -        - |   - |   8192 (   -1) | 0x80855a0 | 0x80855a0
    1 | idle            | pending  Q |  15 |   8192 (  420) | 0x80832c0 | 0x8085130
    2 | main            | pending  Q |   7 |  12288 ( 3120) | 0x80802c0 | 0x8083130
    3 | ipv6            | running  Q |   4 |   8192 ( 2868) | 0x80916a0 | 0x8093510
    4 | udp             | bl rx    _ |   5 |   8192 (  976) | 0x808d620 | 0x808f490
    5 | gnrc_netdev_tap | bl rx    _ |   2 |   8192 ( 2356) | 0x808f680 | 0x80914f0
    6 | RPL             | bl rx    _ |   5 |   8192 (  912) | 0x8093aa0 | 0x8095910
      | SUM             |            |     |  61440 (10652)

*** halted.
```
MrKevinWeiss commented 5 years ago

@cgundogan might also be able to help?

MrKevinWeiss commented 5 years ago

It seems like `-DGNRC_IPV6_NIB_NUMOF=10` helps the situation.
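
As a sketch of how that is typically applied: the macro is a compile-time default, so it has to be set via CFLAGS when rebuilding the test application. The tests/gnrc_udp application from the reproduction steps later in this thread is assumed here; substitute whichever application the task actually uses:

```
# Rebuild with more NIB (neighbor information base) entries than the default
CFLAGS="-DGNRC_IPV6_NIB_NUMOF=10" make -C tests/gnrc_udp clean flash term
```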

MrKevinWeiss commented 5 years ago

I still get some packet loss, but it isn't throwing errors.

jia200x commented 5 years ago

Task 11.01 fails for me.

11-lorawan Task 01

```
2019-07-15 16:03:51,068 - INFO # LoRaWAN Class A low-power application
2019-07-15 16:03:51,072 - INFO # =====================================
2019-07-15 16:03:51,138 - INFO # Starting join procedure
2019-07-15 16:03:56,318 - INFO # Join procedure succeeded
2019-07-15 16:03:56,320 - INFO # Sending: This is RIOT!
2019-07-15 16:03:57,436 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:04:17,438 - INFO # Sending: This is RIOT!
2019-07-15 16:04:18,557 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:04:38,555 - INFO # Sending: This is RIOT!
2019-07-15 16:04:39,671 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:04:59,674 - INFO # Sending: This is RIOT!
2019-07-15 16:05:00,789 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:05:20,792 - INFO # Sending: This is RIOT!
2019-07-15 16:05:21,908 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:05:41,910 - INFO # Sending: This is RIOT!
2019-07-15 16:05:43,026 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:06:03,028 - INFO # Sending: This is RIOT!
2019-07-15 16:06:04,144 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:06:24,146 - INFO # Sending: This is RIOT!
2019-07-15 16:06:25,262 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:06:45,264 - INFO # Sending: This is RIOT!
2019-07-15 16:06:46,379 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:07:06,381 - INFO # Sending: This is RIOT!
2019-07-15 16:07:07,497 - INFO # Cannot send message 'This is RIOT!', ret code: 6
2019-07-15 16:07:27,499 - INFO # Sending: This is RIOT!
2019-07-15 16:07:28,615 - INFO # Cannot send message 'This is RIOT!', ret code: 6
```

I can see messages in TTN, and it joins without any issues. I will investigate further

jia200x commented 5 years ago
11-lorawan Task 02 passes ``` loramac erase 2019-07-15 16:09:27,841 - INFO # loramac erase > loramac set deveui 70B3D57EDA06FF17 2019-07-15 16:09:33,563 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramac set appeui 70B3D57ED000912E 2019-07-15 16:09:40,202 - INFO # loramac set appeui 70B3D57ED000912E > loramac set appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:09:48,194 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramac join otaa 2019-07-15 16:09:51,265 - INFO # loramac join otaa 2019-07-15 16:09:51,267 - INFO # Warning: already joined! > 2019-07-15 16:09:56,367 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD) 2019-07-15 16:09:56,369 - INFO # All up, running the shell now > loramac sedeveui 70B3D57EDA06FF173F 2019-07-15 16:09:58,819 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramaset appeui 70B3D57ED000912E 2019-07-15 16:10:00,459 - INFO # loramac set appeui 70B3D57ED000912E > loramaset appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:10:02,237 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramajoin otaa3F 2019-07-15 16:10:03,777 - INFO # loramac join otaa 2019-07-15 16:10:13,144 - INFO # Join procedure succeeded! > loramac tx hola 2019-07-15 16:10:17,417 - INFO # loramac tx hola 2019-07-15 16:10:20,906 - INFO # Received ACK from network 2019-07-15 16:10:20,909 - INFO # Message sent with success > loramac set dr 4 2019-07-15 16:10:56,898 - INFO # loramac set dr 4 > loramac tx hola 2019-07-15 16:11:26,482 - INFO # loramac tx hola 2019-07-15 16:11:26,487 - INFO # Cannot send: dutycycle restriction > 2019-07-15 16:11:28,945 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD) 2019-07-15 16:11:28,948 - INFO # All up, running the shell now > loramac sedeveui 70B3D57EDA06FF173F 2019-07-15 16:11:31,020 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramaset appeui 70B3D57ED000912E 2019-07-15 16:11:32,900 - INFO # loramac set appeui 70B3D57ED000912E > loramaset appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:11:34,910 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramac set dr 0 2019-07-15 16:11:38,802 - INFO # loramac set dr 0 > loramajoin otaa 2019-07-15 16:11:43,162 - INFO # loramac join otaa 2019-07-15 16:11:52,529 - INFO # Join procedure succeeded! 
> loramac tx asd cnf 123 2019-07-15 16:12:19,827 - INFO # loramac tx asd cnf 123 2019-07-15 16:12:23,329 - INFO # Received ACK from network 2019-07-15 16:12:23,332 - INFO # Message sent with success > loramac tx asd uncnf 42 2019-07-15 16:12:36,691 - INFO # loramac tx asd uncnf 42 2019-07-15 16:12:36,696 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:08,131 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:08,135 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:09,306 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:09,312 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:15,483 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:15,488 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:21,691 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:21,696 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:23,395 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:23,400 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:24,499 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:24,504 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:25,611 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:25,616 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:46,011 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:46,016 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:13:48,403 - INFO # loramac tx asd uncnf 42 2019-07-15 16:13:48,408 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:14:37,132 - INFO # loramac tx asd uncnf 42 2019-07-15 16:14:41,199 - INFO # Message sent with success > 2019-07-15 16:15:17,192 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD) 2019-07-15 16:15:17,195 - INFO # All up, running the shell now > loramaset deveui 70B3D57EDA06FF17 2019-07-15 16:15:19,781 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramaset appeui 70B3D57ED000912E 2019-07-15 16:15:21,534 - INFO # loramac set appeui 70B3D57ED000912E > loramaset appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:15:23,191 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramaset dr 3 2019-07-15 16:15:34,923 - INFO # loramac set dr 3 > loramajoin otaa3F 2019-07-15 16:15:36,947 - INFO # loramac join otaa 2019-07-15 16:15:42,449 - INFO # Join procedure succeeded! > loramatx asd cnf 123 2019-07-15 16:15:53,180 - INFO # loramac tx asd cnf 123 2019-07-15 16:15:55,528 - INFO # Received ACK from network 2019-07-15 16:15:55,530 - INFO # Message sent with success > loramatx asd uncnf 42 2019-07-15 16:15:59,924 - INFO # loramac tx asd uncnf 42 2019-07-15 16:16:02,983 - INFO # Message sent with success > loramaset appkey 16C47933D61F1722437CF9992DAB373F2019-07-15 16:16:17,103 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD) 2019-07-15 16:16:17,105 - INFO # All up, running the shell now > set deveui 70B3D57EDA06FF17 2019-07-15 16:16:19,478 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramaset appeui 70B3D57ED000912E 2019-07-15 16:16:21,846 - INFO # loramac set appeui 70B3D57ED000912E > loramaset appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:16:23,416 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramajoin otaa3F 2019-07-15 16:16:25,292 - INFO # loramac join otaa 2019-07-15 16:16:34,659 - INFO # Join procedure succeeded! 
> loramatx asd cnf 123 2019-07-15 16:16:59,357 - INFO # loramac tx asd cnf 123 2019-07-15 16:17:02,859 - INFO # Received ACK from network 2019-07-15 16:17:02,862 - INFO # Message sent with success > loramatx asd uncnf 42 2019-07-15 16:17:13,901 - INFO # loramac tx asd uncnf 42 2019-07-15 16:17:13,906 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:17:37,988 - INFO # loramac tx asd uncnf 42 2019-07-15 16:17:37,993 - INFO # Cannot send: dutycycle restriction > loramac tx asd uncnf 42 2019-07-15 16:24:07,872 - INFO # loramac tx asd uncnf 42 2019-07-15 16:24:11,939 - INFO # Message sent with success > 2019-07-15 16:24:21,227 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD) 2019-07-15 16:24:21,230 - INFO # All up, running the shell now > loramaset deveui 70B3D57EDA06FF17 2019-07-15 16:24:23,041 - INFO # loramac set deveui 70B3D57EDA06FF17 > loramaset appeui 70B3D57ED000912E 2019-07-15 16:24:24,329 - INFO # loramac set appeui 70B3D57ED000912E > loramaset appkey 16C47933D61F1722437CF9992DAB373F 2019-07-15 16:24:25,787 - INFO # loramac set appkey 16C47933D61F1722437CF9992DAB373F > loramaset dr 5 2019-07-15 16:24:30,655 - INFO # loramac set dr 5 > loramajoin otaa 2019-07-15 16:24:33,951 - INFO # loramac join otaa 2019-07-15 16:24:39,132 - INFO # Join procedure succeeded! > loramatx asd cnf 1233F 2019-07-15 16:24:42,512 - INFO # loramac tx asd cnf 123 2019-07-15 16:24:43,641 - INFO # Received ACK from network 2019-07-15 16:24:43,644 - INFO # Message sent with success > loramatx asd uncnf 42 2019-07-15 16:24:46,728 - INFO # loramac tx asd uncnf 42 2019-07-15 16:24:49,787 - INFO # Message sent with success ```
jia200x commented 5 years ago
11-lorawan Task 03 fails (it freezes)

```
2019-07-15 16:59:20,888 - INFO # main(): This is RIOT! (Version: 2019.10-devel-HEAD)
2019-07-15 16:59:20,890 - INFO # All up, running the shell now
loramac set devaddr 260127A0
2019-07-15 16:59:24,366 - INFO # loramac set devaddr 260127A0
loramac set nwkskey F5F612FDBDF4DFD9B0D0CAF80C2BCA7F
2019-07-15 16:59:26,697 - INFO # loramac set nwkskey F5F612FDBDF4DFD9B0D0CAF80C2BCA7F
loramac set appskey 1EBB10A1AE58065BB4540A4181DC7BB8
2019-07-15 16:59:30,441 - INFO # loramac set appskey 1EBB10A1AE58065BB4540A4181DC7BB8
loramac join abp
2019-07-15 16:59:35,485 - INFO # loramac join abp
2019-07-15 16:59:35,487 - INFO # Join procedure succeeded!
loramac set rx2_dr 3
2019-07-15 16:59:40,005 - INFO # loramac set rx2_dr 3
loramac set dr 0
2019-07-15 16:59:47,006 - INFO # loramac set dr 0
loramac tx "asd" cnf 123
2019-07-15 16:59:55,654 - INFO # loramac tx "asd" cnf 123
help
help
```
jia200x commented 5 years ago

In fact, there's something wrong with the loramac shell commands:

> loramac set appskey 1EBB10A1AE58065BB4540A4181DC7BB8
2019-07-15 17:16:52,784 - INFO #  loramac set appskey 1EBB10A1AE58065BB4540A4181DC7BB8
> loramac get appskey
2019-07-15 17:16:55,244 - INFO #  loramac get appskey
2019-07-15 17:16:55,248 - INFO # APPSKEY: E9030008000000000000000000000000

EDIT: Both set nwkskey and set appskey seem to be broken...

fjmolinas commented 5 years ago

EDIT: Both set nwkskey and set appskey seem to be broken...

@jia200x are you testing on iotlab?

jia200x commented 5 years ago

@jia200x are you testing on iotlab?

Nope, locally on a b-l072z-lrwan1. For some reason, I cannot set the keys properly. I'm investigating why.

cladmi commented 5 years ago

Success:

Failure for unittests on iotlab-m3

The unittests no longer compile on boards without periph_hwrng… Yes, "unittests" that have "hardware requirements"…

This is bad

01-ci Task #04 - Unittests on iotlab-m3

```
IOTLAB_NODE=auto-ssh ./task04.py ~/work/git/worktree/riot_release
Run task #04
There are unsatisfied feature requirements: periph_hwrng
EXPECT ERRORS!
Building application "tests_unittests" for "iotlab-m3" with MCU "stm32f1".
...
/home/harter/work/git/worktree/riot_release/tests/unittests/bin/iotlab-m3/periph_common.a(init.o): In function `periph_init':
/home/harter/work/git/worktree/riot_release/drivers/periph_common/init.c:69: undefined reference to `hwrng_init'
/home/harter/work/git/worktree/riot_release/tests/unittests/bin/iotlab-m3/devfs.a(random-vfs.o): In function `hwrng_vfs_read':
/home/harter/work/git/worktree/riot_release/sys/fs/devfs/random-vfs.c:38: undefined reference to `hwrng_read'
collect2: error: ld returned 1 exit status
/home/harter/work/git/worktree/riot_release/tests/unittests/../../Makefile.include:475: recipe for target '/home/harter/work/git/worktree/riot_release/tests/unittests/bin/iotlab-m3/tests_unittests.elf' failed
make: *** [/home/harter/work/git/worktree/riot_release/tests/unittests/bin/iotlab-m3/tests_unittests.elf] Error 1
Traceback (most recent call last):
  File "./task04.py", line 20, in <module>
    subprocess.check_call(['make', '-B', 'clean', 'all'])
  File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['make', '-B', 'clean', 'all']' returned non-zero exit status 2.
```
cladmi commented 5 years ago

This PR added the periph_hwrng requirement for tests/unittests: https://github.com/RIOT-OS/RIOT/pull/7421
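
To see where that requirement bites for a given board, RIOT's feature introspection targets can be compared (a sketch, run from a RIOT checkout):

```
# Compare what the application requires with what the board provides
make BOARD=iotlab-m3 -C tests/unittests info-features-required
make BOARD=iotlab-m3 -C tests/unittests info-features-provided
```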

fjmolinas commented 5 years ago

Nope, locally on a b-l072z-lrwan1. For some reason, I cannot set the keys properly. I'm investigating why.

I'm getting the same result as you.

aabadie commented 5 years ago

Maybe it's time to review (and merge ;) ) https://github.com/RIOT-OS/RIOT/pull/11783.

Cheers!

cladmi commented 5 years ago

Maybe it's time to review (and merge ;) ) RIOT-OS/RIOT#11783.

Cheers!

Would still be a good idea to know what broke it, if it is indeed the same as https://github.com/RIOT-OS/RIOT/issues/11626 or not.

fjmolinas commented 5 years ago

Would still be a good idea to know what broke it, if it is indeed the same as RIOT-OS/RIOT#11626 or not.

Seems to be a related problem. The problem with the nwkskey was introduced by RIOT-OS/RIOT#11626: just after that PR it is working, and broken afterwards. I'm looking into it too.

fjmolinas commented 5 years ago

RIOT-OS/RIOT#11783 does fix the issue. I will try to understand the problem better and see if I can get a formal explanation. As @aabadie pointed out in the PR, it is not the nicest solution, but it works; backporting RIOT-OS/RIOT#11783 could be the sensible solution.

MrKevinWeiss commented 5 years ago

@cladmi so it seems like the problem with the unittests failing is only due to incorrect test placement. Does this need to be backported, or should it just be mentioned in the release notes and an issue raised to move it out of unittests and into its own test, as @miri64 suggested in the PR?

fjmolinas commented 5 years ago

I opened RIOT-OS/RIOT#11847, which fixes the issue with the lorawan tasks; it's an alternative to RIOT-OS/RIOT#11783.

miri64 commented 5 years ago

I am running Task 03.4 in the background to try to reproduce your assertion error, @MrKevinWeiss.

MrKevinWeiss commented 5 years ago

For Task 3.5, using many native nodes to ping one node, I get recurring pktbuf hex dumps that do not get cleared.

Steps to reproduce (a consolidated shell sketch follows the list):

  1. Create 11 tap interfaces: `sudo ./dist/tools/tapsetup/tapsetup -c 11`
  2. Open a gnrc_udp native session in tmux: `tmux new-session -d -s nodes "make flash term -C tests/gnrc_udp"`
  3. Add the source node windows: `for i in {1..10}; do tmux new-window -t nodes: -n $i "make term PORT=tap$i -C tests/gnrc_udp/"; done`
  4. Then go to window 0 of the tmux session: `tmux attach -t nodes:0`
  5. Type the `ifconfig` command and copy the link-local address
  6. Detach tmux: `ctrl+b d`
  7. Ping node 0 from the other nodes at a 0 ms interval: `for i in {1..10}; do tmux send-keys -t nodes:$i "ping6 -c 1000000 -i 0 -s 1452 <ll_addr>" enter; done`
  8. Get a coffee and wait 5 mins
  9. Check that the pings are complete, then check the packet buffer of all the nodes: `for i in {0..10}; do tmux send-keys -t nodes:$i "pktbuf" enter; done`
  10. Evaluate each node by attaching to tmux and cycling through the windows with `ctrl+b n`
  11. There should be no hex dumps, but I always get some, and some nodes lock up
  12. Ping node 0 from the other nodes at a 1 ms interval: `for i in {1..10}; do tmux send-keys -t nodes:$i "ping6 -c 100000 -i 1 -s 1452 <ll_addr>" enter; done`
  13. Repeat from step 8
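
The following is a rough consolidation of the steps above into one script. It only wraps the commands already listed; the destination's link-local address is passed as an argument instead of being copied from `ifconfig` by hand, and the wait before flooding is a guess. Treat it as a sketch, not the canonical reproduction script:

```
#!/usr/bin/env bash
# Sketch: reproduce the pktbuf issue with 10 native nodes pinging one node.
# Usage: ./reproduce.sh <link-local address of node 0>
# Run from the RIOT checkout; assumes tests/gnrc_udp was built beforehand.
set -e
LL_ADDR="$1"

# Step 1: create 11 tap interfaces
sudo ./dist/tools/tapsetup/tapsetup -c 11

# Step 2: destination node in window 0 of a detached tmux session
tmux new-session -d -s nodes "make flash term -C tests/gnrc_udp"

# Step 3: one source node per additional tmux window
for i in {1..10}; do
    tmux new-window -t nodes: -n $i "make term PORT=tap$i -C tests/gnrc_udp/"
done

# Give the terminals time to come up before typing into them
sleep 15

# Step 7: flood node 0 with 1452-byte pings at a 0 ms interval
for i in {1..10}; do
    tmux send-keys -t nodes:$i "ping6 -c 1000000 -i 0 -s 1452 $LL_ADDR" Enter
done

# Steps 8-9: wait, then dump the packet buffer on every node
sleep 300
for i in {0..10}; do
    tmux send-keys -t nodes:$i "pktbuf" Enter
done
```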
MrKevinWeiss commented 5 years ago

I still don't know if native should be able to handle this or not. It seems pretty heavy.

miri64 commented 5 years ago

Yepp, it should be :-/. I'll have a look at whether this was also the case in 2019.04 or if this is a regression.

MrKevinWeiss commented 5 years ago

Should I open a proper issue?

cladmi commented 5 years ago

@cladmi so it seems like the problem with the unittests failing is only due to incorrect test placement. Does this need to be backported, or should it just be mentioned in the release notes and an issue raised to move it out of unittests and into its own test, as @miri64 suggested in the PR?

It is step 1.3 of the release spec.

miri64 commented 5 years ago

Let me first try to reproduce.

miri64 commented 5 years ago

Up until step 11 I cannot reproduce your finding on 2019.04-RC1. Still waiting for steps 12-13.

miri64 commented 5 years ago

I scriptified your steps to reproduce here btw.

miri64 commented 5 years ago

Up until step 11 I cannot reproduce your finding on 2019.04-RC1. Still waiting for steps 12-13.

Steps 12-13 I also wasn't able to reproduce on 2019.04-RC1. I will focus on steps 1-11 to find the regression.

MrKevinWeiss commented 5 years ago
10-icmpv6-error Task 1 ``` >>>>> >>>
miri64 commented 5 years ago

Up until step 11 I cannot reproduce your finding on 2019.04-RC1. Still waiting for steps 12-13.

Steps 12-13 I also wasn't able to reproduce on 2019.04-RC1. I will focus on steps 1-11 to find the regression.

Ran my script about 10 times and was not able to reproduce on 2019.07-RC1 :-/.

miri64 commented 5 years ago

Fixed a bug in the script. Still can't reproduce.

miri64 commented 5 years ago

From what @MrKevinWeiss and I were able to assess offline, it looks like the cause is some rare race condition on native that might not happen on all systems... The packets that remain in the packet buffer look like they should after leaving the ICMPv6 module here, and they still have the gnrc_netif_hdr_t they got in that function, so they did not pass by here yet. The only way this can happen is if they are not properly released after an unsuccessful dispatch (I checked, they are) or if they somehow got stuck in gnrc_ipv6's message queue. I will try to reproduce it on a different system (or maybe someone else with a Lenovo X1 Carbon can try). If I'm not able to, we should just mark it as a known issue and document the steps to reproduce in an issue upstream.

miri64 commented 5 years ago

On my laptop I do get the `gnrc_netif: possibly lost interrupt.` message, which I don't get on my work machine. I did not, however, see the filled packet buffer, even after several retries, and it never froze. I did see packets similar to what @MrKevinWeiss sent me offline hanging out in the packet buffer for a while, but they later disappeared (they were probably stuck in the neighbor cache's queue, where a new netif header is also added).

cladmi commented 5 years ago

I split the test out: https://github.com/RIOT-OS/RIOT/pull/11855

cladmi commented 5 years ago

I changed buildtest so that it now behaves like doing `make all` for each board: https://github.com/RIOT-OS/RIOT/pull/11857

MrKevinWeiss commented 5 years ago
MrKevinWeiss commented 5 years ago

10-icmpv6-error Task 2

```
srp1(Ether(dst="92:5C:55:A8:00:CA") / IPv6(src="fe80::743c:50ff:fe8b:ea46", dst="affe::1") / UDP(dport=48879) / "", iface="tapbr0")
>>>>>
```
MrKevinWeiss commented 5 years ago
MrKevinWeiss commented 5 years ago

10-icmpv6-error Task 3

```
srp1(Ether(dst="92:5C:55:A8:00:CA") / IPv6(src="fe80::743c:50ff:fe8b:ea46", dst="fe80::1") / UDP(dport=48879) / "", iface="tapbr0")
Begin emission:
Finished sending 1 packets.
*
Received 1 packets, got 1 answers, remaining 0 packets
>>>>>
```
MrKevinWeiss commented 5 years ago
MrKevinWeiss commented 5 years ago

10-icmpv6-error Task 5

```
srp1(Ether(dst="92:5C:55:A8:00:CA") / IPv6(src="fe80::743c:50ff:fe8b:ea46", dst="fe80::905c:55ff:fea8:ca") / UDP(dport=48879) / "", iface="tapbr0")
Begin emission:
Finished sending 1 packets.
*
Received 1 packets, got 1 answers, remaining 0 packets
>>>>>
```