@RIOT-OS/maintainers people in Berlin, do you know if @cladmi's paper-ci is still plugged in? I would like to run the tests on his boards as well. I should have unicasted this; sorry for the spam.
It is still plugged in. No guarantee if it still works though.
@JulianHolzwarth should be able to provide some info as he AFAIK sits right next to it and used it before
Seems to be working, just launched a build
@SemjonKerner also sits right next to it for the next few weeks ;-).
I'm currently adapting the scripts in https://github.com/RIOT-OS/Release-Specs/pull/79 to use pytest and, more importantly, the riotnode abstraction https://github.com/RIOT-OS/RIOT/pull/10949. My working branch is https://github.com/fjmolinas/Release-Specs/tree/pr/utils. I'll open it as a PR when I finish reworking all the scripts, but the results of some of the tests I run will use this. riotnode is used as an external Python library, so I'm building on a clean release branch.
@miri64 ping
I'm seeing weird behaviour with remote-revb where the shell reboot command causes a hardfault on one of my setups, but not both... I'm investigating.
I'm having issues with task 10: it has succeeded a couple of times, but most of the time the buffer doesn't empty completely.
How soon after the test and how often did you run pktbuf?
> How soon after the test and how often did you run pktbuf?
Immediately after, every 3 seconds for about 1000s. When it succeeded, it was empty within the first 10s after the pings were over.
> @JulianHolzwarth should be able to provide some info as he AFAIK sits right next to it and used it before
I think many of the boards are unplugged; @JulianHolzwarth, can you confirm which ones are not?
I replugged some that I saw were unplugged, but most of them were still running:
> How soon after the test and how often did you run pktbuf?
> Immediately after, every 3 seconds for about 1000s. When it succeeded, it was empty within the first 10s after the pings were over.
Will have a look.
Even the unused sections are the same (the only numbers that differ are the pointers, the datagram identifier in the fragment header, and the unused memory padding within the allocated chunks). The data in chunk 0 are two gnrc_pktsnip_t instances of type NETTYPE_GNRC_IPV6_EXT, with their data pointers pointing to chunk 1 and chunk 2 (both IPv6 fragmentation headers) respectively. The next pointer of both points to the same address somewhere in the unused section, so that part is lost. The fragmentation headers are each for the second fragment, with an offset of 1232. I have a hunch where the problem might be, but need to look deeper for final confirmation.
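To make the leaked state easier to picture, here is a minimal, self-contained toy model of it; the struct, the field names and the stale address below are simplifications for illustration only, not the real gnrc_pktbuf/gnrc_pktsnip_t code. Two snips hold the two fragmentation-header chunks, but both of their next pointers alias the same address in a section pktbuf already considers unused, so whatever followed them in the chain can never be reached or released again.

```c
/* Toy model of the leaked state described above -- NOT the real RIOT
 * gnrc_pktsnip_t / gnrc_pktbuf code, just a simplified illustration. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

typedef struct snip {
    struct snip *next;   /* next snip in the packet chain */
    void *data;          /* payload held by this snip */
    size_t size;         /* length of the payload */
} snip_t;

int main(void)
{
    /* stand-ins for chunk 1 and chunk 2 (the two IPv6 fragmentation
     * headers, both for the second fragment, offset 1232) */
    uint8_t chunk1[8] = { 0 };
    uint8_t chunk2[8] = { 0 };

    /* some address inside the section that pktbuf already considers unused */
    snip_t *stale = (snip_t *)(uintptr_t)0xdeadbeef;

    /* the two snip instances found in chunk 0 */
    snip_t a = { .next = stale, .data = chunk1, .size = sizeof(chunk1) };
    snip_t b = { .next = stale, .data = chunk2, .size = sizeof(chunk2) };

    /* both chains dangle into the unused section, so neither snip (nor the
     * chunk it references) can ever be walked and released again */
    printf("a.next == b.next: %s -> rest of both chains is lost\n",
           (a.next == b.next) ? "yes" : "no");
    return 0;
}
```

In the real pktbuf this shows up exactly as reported above: the header chunks stay allocated and the buffer never empties completely.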
All in all, this also seems to be broken in 2019.10 (since the last change to the fragmentation was https://github.com/RIOT-OS/RIOT/pull/12414)
I don't quite remember: is it normal to receive ping responses from the tap bridge on native?
It appears to be happening when the first fragment of the echo reply is lost. However, the bug is heisenbuggy enough that I don't lose the first fragment when I try to find out what is happening to it :-/. Will investigate further though.
Fixed here https://github.com/RIOT-OS/RIOT/pull/13156
@miri64 ping: Task 05 (Experimental) - UDP with large payload on iotlab-m3 with three hops (RPL route) FAILED, as in the last release.
And it will continue to fail unless we have a congestion-avoiding MAC layer. See https://github.com/RIOT-OS/Release-Specs/issues/142#issuecomment-561677974.
It seems like I am running into trouble with the Arduino board: the same init problems, and when it does init I cannot ping at all...
Originally posted by @MrKevinWeiss in https://github.com/RIOT-OS/Release-Specs/issues/142#issuecomment-545359685
@MrKevinWeiss it seems I'm having the same issues with xbee as you had before; can you tell me how you made it work?
All tests but tests/gnrc_tcp are OK; I'll take a closer look later.
On non-native boards I get a timeout in tests/gnrc_tcp:
Timeout in expect script at "child.expect_exact('gnrc_tcp_recv: received ' + str(half_data_len))" (tests/gnrc_tcp/tests/06-receive_data_closed_conn.py:55)
  File "/home/francisco/workspace/RIOT/dist/pythonlibs/testrunner/__init__.py", line 29, in run
    testfunc(child)
  File "/home/francisco/workspace/RIOT/tests/gnrc_tcp/tests/06-receive_data_closed_conn.py", line 55, in testfunc
    child.expect_exact('gnrc_tcp_recv: received ' + str(half_data_len))
  File "/home/francisco/.local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 418, in expect_exact
    return exp.expect_loop(timeout)
  File "/home/francisco/.local/lib/python3.6/site-packages/pexpect/expect.py", line 119, in expect_loop
    return self.timeout(e)
  File "/home/francisco/.local/lib/python3.6/site-packages/pexpect/expect.py", line 82, in timeout
    raise TIMEOUT(msg)
Enabling echo on the test, I get:
buffer_init
buffer_get_max_size
> buffer_init: argc=1, argv[0] = buffer_init
> buffer_get_max_size: argc=1, argv[0] = buffer_get_max_size
buffer_get_max_size: returns 2048
ifconfig
> Iface 5 HWaddr: 00:53:CB:6B:A0:79
gnrc_tcp_tcb_init
gnrc_tcp_open_active AF_INET6 fe80::143f:baff:fe95:3e30%5 56991 0
L2-PDU:1500 MTU:1500 HL:64 Source address length: 6
Link type: wired
inet6 addr: fe80::253:cbff:fe6b:a079 scope: link VAL
inet6 group: ff02::1
inet6 group: ff02::1:ff6b:a079
> gnrc_tcp_tcb_init: argc=1, argv[0] = gnrc_tcp_tcb_init
> gnrc_tcp_open_active: argc=5, argv[0] = gnrc_tcp_open_active, argv[1] = AF_INET6, argv[2] = fe80::143f:baff:fe95:3e30%5, argv[3] = 56991, argv[4] = 0
gnrc_tcp_open_active: returns 0
gnrc_tcp_recv 1000000 5
> gnrc_tcp_recv: argc=3, argv[0] = gnrc_tcp_recv, argv[1] = 1000000, argv[2] = 5
gnrc_tcp_recv: returns 0
Can anyone try to reproduce?
IIRC this test was far from stable on boards, even when merged. Is that on native or a real board?
> IIRC this test was far from stable on boards, even when merged. Is that on native or a real board?
Non-native; it works fine on native.
> IIRC this test was far from stable on boards, even when merged. Is that on native or a real board?
Can we just whitelist native then?
This issue lists the status of all tests for the Release Candidate 1 of the 2020.01 release.
Specs tested:
tests/gnrc_tcp is excluded