calimero-project / calimero-core

Core library for KNX network access and management

ManagementClient.readMemory : 4 attempts, failure, success on next attempt #34

Closed kgoderis closed 8 years ago

kgoderis commented 8 years ago

@bmalinowsky I have the following weird behaviour on the KNX bus when trying to read memory locations from devices. I am using a TUNNEL setup with connection-oriented Destinations. The behaviour is the same for all devices and all memory locations. A single call to readMemory() is made inside a loop that repeats until no exception is thrown and a meaningful number of bytes is read.

This is part of a little routine where I read out PID.TABLE_REFERENCE of the Address Table object index and then use the returned data to read out the number of addresses, the individual address, and all the group addresses used by the KNX actor.
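As an aside, decoding the bytes returned by such reads might look like the sketch below. It assumes a common address-table layout (a length byte followed by 2-byte entries, the first entry being the device's individual address, the rest group addresses); actual devices may differ, so treat this as illustrative, not as Calimero API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical decoder for address table bytes read via readMemory().
// Assumed layout: first byte = number of 2-byte entries, entry 0 = the
// device's individual address, entries 1..n-1 = group addresses.
public class AddressTableDecoder {

    static String individual(int raw) {
        // area.line.device = 4 + 4 + 8 bits
        return ((raw >> 12) & 0xF) + "." + ((raw >> 8) & 0xF) + "." + (raw & 0xFF);
    }

    static String group(int raw) {
        // main/middle/sub = 5 + 3 + 8 bits (3-level style)
        return ((raw >> 11) & 0x1F) + "/" + ((raw >> 8) & 0x7) + "/" + (raw & 0xFF);
    }

    static List<String> decode(byte[] table) {
        int n = table[0] & 0xFF;
        List<String> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            int raw = ((table[1 + 2 * i] & 0xFF) << 8) | (table[2 + 2 * i] & 0xFF);
            out.add(i == 0 ? individual(raw) : group(raw));
        }
        return out;
    }

    public static void main(String[] args) {
        // 2 entries: individual address 1.1.11 (0x110B), group 2/1/10 (0x110A)
        byte[] table = { 0x02, 0x11, 0x0B, 0x11, 0x0A };
        System.out.println(decode(table)); // [1.1.11, 2/1/10]
    }
}
```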

The log is a bit lengthy, but you can see that 4 attempts are made to read a memory location; it then fails with "send data connected failed", yet works straight away on attempt 1 in the next cycle. Does this indicate timing issues on the side of the KNX actor, or on the bus itself?

2016-04-21 09:51:06.559 [DEBUG] [.k.h.KNXBridgeBaseThingHandler:1070 ] - Trying to read 2 bytes at memory location 16387 for 1.1.11 2016-04-21 09:51:06.559 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.559 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 155, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 9b 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.559 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: sending data connected to 1.1.11, attempt 1 2016-04-21 09:51:06.559 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.560 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, low priority hop count 6 repeat tpdu 46 02 40 03 2016-04-21 09:51:06.575 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 240 2016-04-21 09:51:06.575 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.594 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 241 2016-04-21 09:51:06.594 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.594 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.595 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.595 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.595 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame 
seq 156, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 17 04 26 9c 00 11 00 bc 60 10 ff 11 0b 03 46 02 40 03 2016-04-21 09:51:06.596 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.618 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 242 2016-04-21 09:51:06.619 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.619 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.619 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 157, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 9d 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.638 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 243 2016-04-21 09:51:06.638 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.638 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.639 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.639 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.639 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.639 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 158, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 9e 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.659 [DEBUG] [tuwien.auto.calimero 
:45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 244 2016-04-21 09:51:06.659 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.660 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.660 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.660 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.660 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.660 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 159, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 9f 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.679 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 245 2016-04-21 09:51:06.679 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.679 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.680 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.680 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.680 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.680 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 160, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a0 00 11 00 b0 
60 10 ff 11 0b 00 81 2016-04-21 09:51:06.700 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 246 2016-04-21 09:51:06.700 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.700 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.701 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.701 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.701 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.701 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 161, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a1 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.721 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 247 2016-04-21 09:51:06.722 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.722 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:06.722 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:06.722 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:06.722 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:06.722 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 162, wait 
for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a2 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:06.742 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 248 2016-04-21 09:51:06.742 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:06.742 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:08.220 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.4.11 2016-04-21 09:51:08.223 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.3.102 2016-04-21 09:51:09.619 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: sending data connected to 1.1.11, attempt 2 2016-04-21 09:51:09.619 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:09.619 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, low priority hop count 6 repeat tpdu 46 02 40 03 2016-04-21 09:51:09.620 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 163, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 17 04 26 a3 00 11 00 bc 60 10 ff 11 0b 03 46 02 40 03 2016-04-21 09:51:09.622 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.51 2016-04-21 09:51:09.625 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.51 2016-04-21 09:51:09.646 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 253 2016-04-21 09:51:09.646 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:09.647 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 
1.1.11 succeeded 2016-04-21 09:51:10.358 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.23 2016-04-21 09:51:10.374 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.23 2016-04-21 09:51:10.437 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.23 2016-04-21 09:51:10.985 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.15 2016-04-21 09:51:10.989 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.13 2016-04-21 09:51:11.185 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.27 2016-04-21 09:51:11.239 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.27 2016-04-21 09:51:12.213 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.14 2016-04-21 09:51:12.217 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.12 2016-04-21 09:51:12.229 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.26 2016-04-21 09:51:12.646 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: sending data connected to 1.1.11, attempt 3 2016-04-21 09:51:12.647 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:12.647 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, low priority hop count 6 repeat tpdu 46 02 40 03 2016-04-21 09:51:12.647 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 164, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 17 04 26 a4 00 11 00 bc 60 10 ff 11 0b 03 46 02 40 03 2016-04-21 09:51:12.672 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received 
cEMI L-Data.con with req 8 2016-04-21 09:51:12.673 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:12.673 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:13.442 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.26 2016-04-21 09:51:13.446 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.26 2016-04-21 09:51:13.450 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.26 2016-04-21 09:51:15.678 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: sending data connected to 1.1.11, attempt 4 2016-04-21 09:51:15.678 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:15.678 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, low priority hop count 6 repeat tpdu 46 02 40 03 2016-04-21 09:51:15.679 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 165, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 17 04 26 a5 00 11 00 bc 60 10 ff 11 0b 03 46 02 40 03 2016-04-21 09:51:15.682 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.101 2016-04-21 09:51:15.707 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 13 2016-04-21 09:51:15.707 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:15.707 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:16.808 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.152 2016-04-21 09:51:16.812 [DEBUG] [tuwien.auto.calimero :45 ] - 
calimero.link.192.168.0.10:3671: indication from 1.2.152 2016-04-21 09:51:16.834 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.152 2016-04-21 09:51:16.869 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.152 2016-04-21 09:51:16.904 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.152 2016-04-21 09:51:16.966 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.151 2016-04-21 09:51:17.007 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.151 2016-04-21 09:51:17.046 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.151 2016-04-21 09:51:17.191 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.151 2016-04-21 09:51:17.226 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.151 2016-04-21 09:51:17.260 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.151 2016-04-21 09:51:17.305 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.151 2016-04-21 09:51:17.345 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.2.151 2016-04-21 09:51:18.051 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.151 2016-04-21 09:51:18.054 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.151 2016-04-21 09:51:18.358 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.24 2016-04-21 09:51:18.393 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.24 2016-04-21 09:51:18.456 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.24 2016-04-21 09:51:18.570 [DEBUG] 
[tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.55 2016-04-21 09:51:18.620 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.55 2016-04-21 09:51:18.707 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.708 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.708 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 166, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a6 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.728 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 34 2016-04-21 09:51:18.728 [ERROR] [.k.h.KNXBridgeBaseThingHandler:1092 ] - An exception occurred while trying to read the memory for '1.1.11' : send data connected failed 2016-04-21 09:51:18.728 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.728 [DEBUG] [.k.h.KNXBridgeBaseThingHandler:1070 ] - Trying to read 2 bytes at memory location 16387 for 1.1.11 2016-04-21 09:51:18.728 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.728 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: disconnected from 1.1.11 2016-04-21 09:51:18.728 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.729 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 80 2016-04-21 09:51:18.729 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 167, wait for 
cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a7 00 11 00 b0 60 10 ff 11 0b 00 80 2016-04-21 09:51:18.748 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 35 2016-04-21 09:51:18.749 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.749 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.749 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: connected with 1.1.11 2016-04-21 09:51:18.749 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: sending data connected to 1.1.11, attempt 1 2016-04-21 09:51:18.749 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.750 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, low priority hop count 6 repeat tpdu 42 02 40 03 2016-04-21 09:51:18.750 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 168, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 17 04 26 a8 00 11 00 bc 60 10 ff 11 0b 03 42 02 40 03 2016-04-21 09:51:18.774 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 36 2016-04-21 09:51:18.774 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.774 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.837 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.11 2016-04-21 09:51:18.837 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.838 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: 
cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.838 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 169, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 a9 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.858 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 38 2016-04-21 09:51:18.858 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.858 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.859 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.859 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: positive ack by 1.1.11 2016-04-21 09:51:18.859 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.859 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 170, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 aa 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.884 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: indication from 1.1.11 2016-04-21 09:51:18.903 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 40 2016-04-21 09:51:18.904 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.904 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.904 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 
09:51:18.904 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.905 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 171, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 ab 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.924 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 41 2016-04-21 09:51:18.925 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.925 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.925 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.925 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.925 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 172, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 ac 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.946 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 42 2016-04-21 09:51:18.946 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.946 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.946 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.946 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop 
count 6 repeat tpdu 81 2016-04-21 09:51:18.946 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 173, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 ad 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.967 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 43 2016-04-21 09:51:18.967 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.967 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.968 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.968 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.968 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 174, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 ae 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:18.986 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 44 2016-04-21 09:51:18.987 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:18.987 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:18.987 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:18.987 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:18.987 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 
175, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 af 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:19.006 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 45 2016-04-21 09:51:19.007 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:19.007 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:19.008 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:19.008 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:19.008 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 176, wait for cEMI.con, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 b0 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:19.026 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 46 2016-04-21 09:51:19.026 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:19.027 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:19.027 [DEBUG] [tuwien.auto.calimero :45 ] - TL 192.168.0.10:3671: send disconnect to 1.1.11 2016-04-21 09:51:19.027 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11, wait for confirmation 2016-04-21 09:51:19.027 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu 81 2016-04-21 09:51:19.027 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 177, wait for cEMI.con, attempt 1 
(channel 38) 06 10 04 20 00 14 04 26 b1 00 11 00 b0 60 10 ff 11 0b 00 81 2016-04-21 09:51:19.046 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: received cEMI L-Data.con with req 47 2016-04-21 09:51:19.046 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: confirmation of 1.1.11 2016-04-21 09:51:19.046 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:19.046 [INFO ] [tuwien.auto.calimero :43 ] - calimero.link.192.168.0.10:3671: send message to 1.1.11 2016-04-21 09:51:19.046 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: cEMI L-Data.req from 1.0.255 to 1.1.11, system priority hop count 6 repeat tpdu c2 2016-04-21 09:51:19.046 [DEBUG] [tuwien.auto.calimero :45 ] - KNXnet/IP Tunneling 192.168.0.10:3671: sending cEMI frame seq 178, wait for ack, attempt 1 (channel 38) 06 10 04 20 00 14 04 26 b2 00 11 00 b0 60 10 ff 11 0b 00 c2 2016-04-21 09:51:19.049 [DEBUG] [tuwien.auto.calimero :45 ] - calimero.link.192.168.0.10:3671: send to 1.1.11 succeeded 2016-04-21 09:51:19.050 [DEBUG] [.k.h.KNXBridgeBaseThingHandler:1081 ] - Reading 2 bytes at memory location 16387 for 1.1.11 yields 2 bytes
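As a side note, the hex dumps above can be sanity-checked by hand. The first frame (`06 10 04 20 00 14 04 26 9b ...`) carries the standard 6-byte KNXnet/IP header (service type 0x0420 = tunneling request, total length 0x0014) followed by a 4-byte connection header, whose channel and sequence bytes match the "seq 155 ... (channel 38)" bookkeeping in the log. A minimal stand-alone decoder:

```java
// Decodes the fixed KNXnet/IP header plus the tunneling connection header
// from a raw frame, using the standard field offsets.
public class TunnelHeader {
    final int serviceType, totalLength, channelId, seq;

    TunnelHeader(byte[] f) {
        serviceType = ((f[2] & 0xFF) << 8) | (f[3] & 0xFF); // 0x0420 = tunneling req
        totalLength = ((f[4] & 0xFF) << 8) | (f[5] & 0xFF);
        channelId = f[7] & 0xFF;                            // connection header
        seq = f[8] & 0xFF;
    }

    public static void main(String[] args) {
        // first frame from the log above
        byte[] frame = { 0x06, 0x10, 0x04, 0x20, 0x00, 0x14, 0x04, 0x26,
                (byte) 0x9b, 0x00, 0x11, 0x00, (byte) 0xb0, 0x60, 0x10,
                (byte) 0xff, 0x11, 0x0b, 0x00, (byte) 0x81 };
        TunnelHeader h = new TunnelHeader(frame);
        System.out.printf("service=0x%04x len=%d channel=%d seq=%d%n",
                h.serviceType, h.totalLength, h.channelId, h.seq);
        // matches the log line: "sending cEMI frame seq 155 ... (channel 38)"
    }
}
```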

bmalinowsky commented 8 years ago

I did not go through the log in detail, and I also don't know what exact loop you are doing (the log excerpt doesn't seem to be the first read). It looks like a transport layer (L4) problem: there are immediate TL disconnects sent by Calimero, meaning that your Destination won't accept any responses. On your second attempt the destination is actually not trying to disconnect, and therefore gets a positive ack from the remote endpoint (around second 18).
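For readers following along: the `tpdu 81` entries in the log are indeed T_Disconnect control TPDUs. A small classifier for the TPCI control byte, assuming the standard TPCI coding (0x80 connect, 0x81 disconnect, numbered ack 0xC2 with the sequence in bits 5..2), reproduces this reading of the log:

```java
// Classifies the first TPDU byte of a cEMI frame by its TPCI bits,
// per the standard KNX transport layer coding (assumption, see lead-in).
public class Tpci {
    static String classify(int b) {
        b &= 0xFF;
        if ((b & 0x80) == 0)                       // data TPDUs
            return (b & 0x40) == 0 ? "data (unnumbered)"
                    : "data (numbered, seq " + ((b >> 2) & 0xF) + ")";
        if (b == 0x80) return "connect";           // T_Connect.req
        if (b == 0x81) return "disconnect";        // T_Disconnect.req
        if ((b & 0xC3) == 0xC2)                    // T_Ack, seq in bits 5..2
            return "ack (seq " + ((b >> 2) & 0xF) + ")";
        return "other control";
    }

    public static void main(String[] args) {
        System.out.println(classify(0x81)); // the "tpdu 81" lines in the log
        System.out.println(classify(0x46)); // the connected MemoryRead, seq 1
        System.out.println(classify(0xc2)); // the ack near 09:51:19
    }
}
```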

kgoderis commented 8 years ago

@bmalinowsky Do you know why Calimero is sending the disconnects? And why would a Destination not accept any responses? I create my Destinations once and do not tear them down between sessions with the actors. Here is the actual source code:

    public byte[] readDeviceMemory(IndividualAddress address, int startAddress, int bytes, boolean hex) {

        boolean success = false;
        byte[] result = null;

        while (!success) {
            try {
                logger.debug("Trying to read {} bytes at memory location {} for {}", bytes, startAddress, address);

                if (destinations.get(address) == null) {
                    this.createDestination(address);
                }

                result = mc.readMemory(destinations.get(address), startAddress, bytes);
                logger.debug("Reading {} bytes at memory location {} for {} yields {} bytes",
                        bytes, startAddress, address, result == null ? null : result.length);
                success = true;

            } catch (InterruptedException e) {
                // restore the interrupt status instead of swallowing it
                Thread.currentThread().interrupt();
                logger.error("Interrupted while trying to read the memory for '{}' : {}",
                        address, e.getMessage());
                return result;
            } catch (KNXException e) {
                // covers KNXTimeoutException, KNXRemoteException, KNXDisconnectException
                // and KNXLinkClosedException; note the loop retries forever on failure
                logger.error("An exception occurred while trying to read the memory for '{}' : {}",
                        address, e.getMessage());
            }
        }
        return result;
    }

The createDestination call merely does:

    private void createDestination(IndividualAddress address) {
        if (mc != null) {
            Destination destination = mc.createDestination(address, true, false, false);
            destinations.put(address, destination);
        }
    }

Am I right that the 0xFFFF authorization is not really needed in this case? I tried to observe what ETS4 itself does when you query information from a device (I do not have an IP bridge that supports the Bus Monitor), and I saw that it variably authenticates before ReadProperties and ReadMemory.

Karel

bmalinowsky commented 8 years ago

Do you know why calimero is sending the disconnects?

No. As I said before, the log shows that disconnects are sent. Already line 2 of your log is a disconnect. I cannot know what happened before (at least not from your log).

And why would a Destination not accept any responses?

Because reading memory is L4 p2p connection-oriented. No connection, no accept.

Side-note (ignore if there is some other code foo going on I don't know about): on link-closed and remote exceptions there is no positive future outcome in continuing reading. Either you operate on illegal state (and for that I would then catch the RTEs), or the remote endpoint doesn't want you to read. Receiving memory data with a different length than requested is non-spec anyway.
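The side-note suggests a retry policy: keep retrying only on transient failures (timeout, disconnect) and propagate permanent ones (link closed, remote refusal). A minimal sketch of that policy, using hypothetical stand-in exception classes rather than the actual Calimero exception types:

```java
import java.util.function.Supplier;

class RetryPolicy {

    // Stand-ins for the two failure categories; in real code these would be
    // Calimero's timeout/disconnect vs. link-closed/remote exceptions.
    static class TransientKnxException extends RuntimeException {}
    static class PermanentKnxException extends RuntimeException {}

    // Retry the read a bounded number of times, but only on transient errors.
    static byte[] readWithRetry(Supplier<byte[]> read, int maxAttempts) {
        TransientKnxException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return read.get();
            } catch (TransientKnxException e) {
                last = e; // bus may just be slow: try again
            }
            // PermanentKnxException propagates: retrying cannot succeed
        }
        throw last; // give up after maxAttempts transient failures
    }
}
```

Compared to the `while (!success)` loop above, this bounds the number of attempts and does not spin forever when the remote endpoint refuses the read.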

bmalinowsky commented 8 years ago

Re authorization: non-authorized reads will execute on minimum access rights, i.e., as long as you receive data everything is ok. Otherwise, you'll get a KNXRemoteException. (Old devices might behave differently, though.)

(I'm not sure what 0xFFFF means, you mean the 4 byte key?)

kgoderis commented 8 years ago

Re authorization: non-authorized reads will execute on minimum access rights, i.e., as long as you receive data everything is ok. Otherwise, you'll get a KNXRemoteException. (Old devices might behave differently, though.)

(I'm not sure what 0xFFFF means, you mean the 4 byte key?)

0xffff is indeed the 4 byte key. I have not seen any other one on my KNX network so far, so my quick assumption is that it is a de facto standard key.

What I have seen however, is that after a bunch of misreads and other exceptions, the IP Gateway seems to be completely disconnected from reality, amongst other things reporting that it does not get Acks and so forth. I then have to reset the device (it is an MDT one)

kgoderis commented 8 years ago

Because reading memory is L4 p2p connection-oriented. No connection, no accept.

Side-note (ignore if there is some other code foo going on I don't know about): on link-closed and remote exceptions there is no positive future outcome in continuing reading. Either you operate on illegal state (and for that i would then catch the RTEs), or the remote endpoint doesn't want you to read. Receiving memory data with different length as requested is non-spec anyway.

Ok - is the right way forward then to not cache Destinations, but rather .destroy() them and rebuild them at each usage (or at least on the link-closed and remote exceptions)? So far the exception handling was a copy/paste in order to be elaborate and catch them all, but it obviously needs refinement.

bmalinowsky commented 8 years ago

[...] after a bunch of misreads and other exceptions, the IP Gateway seems to be completely disconnected with reality [...]

but those should show up on L2 or in the tunneling protocol. L4 connections should not cause that; at least in the log the cEMI and tunneling stuff is fine, even though the memory read fails.

Ok - is the right way forward then to not cache Destinations [...]

You can cache the destinations for a single mgmt task, that's actually good. Accessing a remote endpoint ideally uses a single connect, send all the data (e.g., read/write memory, properties), then do a single disconnect. Idle destinations time out after 6 seconds without keep-alive, so if there is no special reason to keep them around, destroy them (also the remote endpoint will time out).

If the network link got closed, connection state is lost, and the mgmt client always gets detached and all destinations destroyed anyway.

It's not visible from your log, but I think you got a disconnect exception, meaning that both endpoints have disconnected (or the remote endpoint is still doing so). Network and devices are quite slow, allow them a little time to reach a consistent state. Trying to reconnect while there is still a disconnect msg on its way will certainly manifest itself like that.
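The single-connect / do-all-the-work / single-disconnect pattern recommended above can be sketched as follows. The `Destination` and `ManagementClient` classes here are minimal stand-ins, not the real Calimero API; the point is the lifecycle: create the destination once per management task, batch the reads, and destroy it exactly once in a `finally` block.

```java
class DestinationLifecycle {

    // Stand-in for Calimero's Destination: holds per-connection L4 state
    static class Destination {
        boolean destroyed;
        void destroy() { destroyed = true; } // frees the connection state
    }

    // Stand-in for Calimero's ManagementClient
    static class ManagementClient {
        Destination createDestination() { return new Destination(); }
        byte[] readMemory(Destination d, int start, int bytes) {
            if (d.destroyed) throw new IllegalStateException("destination destroyed");
            return new byte[bytes]; // pretend read of 'bytes' bytes at 'start'
        }
    }

    // One mgmt task: create once, do all reads, destroy exactly once.
    // ranges[i] = { startAddress, byteCount }
    static byte[][] readAll(ManagementClient mc, int[][] ranges) {
        Destination dst = mc.createDestination();
        try {
            byte[][] results = new byte[ranges.length][];
            for (int i = 0; i < ranges.length; i++)
                results[i] = mc.readMemory(dst, ranges[i][0], ranges[i][1]);
            return results;
        } finally {
            dst.destroy(); // always disconnect, even when a read failed
        }
    }
}
```

This avoids both extremes discussed in the thread: a destination cached forever (which idles past the 6 s timeout) and a fresh connect/disconnect pair per single read.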

kgoderis commented 8 years ago

[...] after a bunch of misreads and other exceptions, the IP Gateway seems to be completely disconnected with reality [...]

but those should show up on L2 or in the tunneling protocol. L4 connections should not cause that; at least in the log the cEMI and tunneling stuff is fine, even though the memory read fails.

They are indeed; they are just not in these logs. It is a side effect currently.

bmalinowsky commented 8 years ago

Ah ok, I misread -- thought it had the same cause :)

kgoderis commented 8 years ago

It's not visible from your log, but I think you got a disconnect exception, meaning that both endpoints have disconnected (or the remote endpoint is still doing so). Network and devices are quite slow, allow them a little time to reach a consistent state. Trying to reconnect while there is still a disconnect msg on its way will certainly manifest itself like that.

Ah! I was just in the process of testing things, and I regularly see the following sequence of tpdus:

80 - connect. That's good.
42 02 40 01 or whatever, e.g. related to a memory read or alike. Good as well.
a bunch of 81 - disconnects. But why a bunch? Sometimes 4, sometimes 10+.
c2 - no clue.

In an "operation", the 81 is also sent by the L4 (?) after the method (e.g. memory read) returned its byte[]. So, when this happens and I have a fast sequence of operations (e.g. memory reads), it seems that the 81 of the previous operation is screwing up the next operation. In the logs I see that the next operation does not start with a tpdu 80, but simply starts sending the apdu, retrying until it triggers a KNXTimeoutException; then, on the next iteration, when things are cleared, a tpdu 80 is issued.

I know that the KNX bus is very, very timing sensitive. When I run ETS4 in a virtual machine on my Mac, just increasing the load of the machine may already generate trouble for ETS4 itself. I have an installation with 1 main and 4 lines, all connected with line couplers, and the IP gateway sitting on the main line. In total about 120 actors, and about 4000 GAs in use (many 12-fold actors). Just to be able to program a device, I have to shut down couplers to reduce the bus load on the main line; if not, ETS4 simply fails at the task.

kgoderis commented 8 years ago

then do a single disconnect

By that, I presume Destination.destroy()? So far, I have not triggered any disconnects; the tpdu 81s seem to be coming from the L4 layer itself.

bmalinowsky commented 8 years ago

If all the disconnects you mention log "send disconnect to x.y.z" (x.y.z being the same address you're talking to), then this is just the aftermath of being in disconnected state. On any received L4 data_connected it will do so. That's actually expected and not surprising.

That a sequence of quick sends does not issue a connect each time is also normal, because at that moment the destination still knows it's connected (it checks its state for that; you can also do that with .getState()). Local L4 won't even send out a message if disconnected, but throws KNXDisconnectException.

So, the interesting question is, where the first disconnect comes from.

The disconnect that is sent after your method returned with a byte[] can locally only be caused by a timeout. What you can try, to avoid sending it, is creating the destination with keep-alive set to true, and see if that makes a difference. Otherwise the disconnect comes from the remote endpoint.

bmalinowsky commented 8 years ago

then do a single disconnect

by that, I presume Destination.destroy()

yes (after that, you have to create a new destination)

bmalinowsky commented 8 years ago

c2 are acks
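The tpdu bytes traded in this thread can be decoded roughly as follows. This is a sketch based on the KNX transport-control (TPCI) encoding as I understand it, not Calimero code: 0x80/0x81 are connect/disconnect, 0xC2-style bytes are acks carrying a sequence number, and 0x4x bytes are numbered (connection-oriented) data.

```java
class Tpci {
    // Describe a transport-control byte as seen in the logs above
    static String describe(int b) {
        if (b == 0x80) return "T_Connect";                                  // tpdu 80
        if (b == 0x81) return "T_Disconnect";                               // tpdu 81
        if ((b & 0xC3) == 0xC2) return "T_ACK seq " + ((b >> 2) & 0xF);     // tpdu c2, c6, ...
        if ((b & 0xC3) == 0xC3) return "T_NAK seq " + ((b >> 2) & 0xF);
        if ((b & 0xC0) == 0x40) return "T_Data_Connected seq " + ((b >> 2) & 0xF); // tpdu 42, 46, ...
        if ((b & 0xC0) == 0x00) return "T_Data_Individual";                 // unnumbered data
        return "unknown";
    }
}
```

So the "42 02 40 01" frame from the comment above starts with numbered connected data (sequence 0), and the lone c2 is the ack for sequence 0, matching the "c2 are acks" answer.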

kgoderis commented 8 years ago

On any received L4 data_connected it will do so. That's actually expected and not surprising.

So, just for my understanding: in L4, each time a piece of data is sent to the remote actor, it will by default disconnect? I.e., it is my side that triggers the first 81 being sent.

Does that also effectively mean that when doing multiple operations, you have to first wait until the Destination is finally in its disconnected state before starting a new operation altogether?

It is a bit strange that after a "TL 192.168.0.10:3671: disconnected from 1.1.11", there are still disconnects being issued from the TL

I am currently delving into the calimero code base itself to understand all you write, but I sense the easiest solution, despite the overhead, is to not cache destinations, and on each operation create a new destination, then destroy it to clean things up

[I just tried my code with the keep alive and it does not change the outcome]

bmalinowsky commented 8 years ago

On any received L4 data_connected it will do so. That's actually expected and not surprising.

So, just for my understanding: in L4, each time a piece of data is sent to the remote actor, it will by default disconnect? I.e., it is my side that triggers the first 81 being sent.

No. If, and only if, an L4 connection was established and, for some reason, a disconnect happened (which is allowed to happen, btw), the L4 state machine will enforce that disconnect. Meaning, all subsequent received connected_data will be answered with a disconnect.

It is a bit strange that after a "TL 192.168.0.10:3671: disconnected from 1.1.11", there are still disconnects being issued from the TL

no, that's expected to happen. it would be wrong otherwise.

[I just tried my code with the keep alive and it does not change the outcome]

There are only so many ways a disconnect is triggered: 1) failure to send data, 2) timeout, 3) reception from the remote endpoint, 4) receiving ack/nack/data while disconnected (we can ignore that). Keep-alive eliminates the timeout (easy to verify), leaving send data or remote. In such a case, send data always terminates with a disconnect exception, also easy to verify.

[...] on each operation create a new destination, and then destroy it to clean up things

yes, that's the easiest way
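The behavior described in the last two comments can be modeled as a tiny state machine: once disconnected (triggers 1-3), every further received connected_data is answered with yet another disconnect, which would explain the "bunch of 81s" observed earlier. This is only an illustration of that explanation, not Calimero's actual implementation.

```java
class L4State {
    enum State { DISCONNECTED, CONNECTED }

    State state = State.DISCONNECTED;
    int disconnectsSent;

    void connect() { state = State.CONNECTED; }            // tpdu 80

    // triggers 1-3: send failure, ack timeout, remote disconnect
    void disconnect() { state = State.DISCONNECTED; }      // first tpdu 81

    // trigger 4: connected_data received while disconnected is
    // answered with another disconnect (further tpdu 81s)
    void onReceivedConnectedData() {
        if (state == State.DISCONNECTED) disconnectsSent++;
    }
}
```

Under this model, each stale connected_data frame still in flight after the first disconnect produces one more 81, so four stale frames would yield four disconnects.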

kgoderis commented 8 years ago

Btw, I tried to read out PID.IO_LIST on the Device Object Index, but it fails due to insufficient access rights. Probably due to the actor, but is there no way to instruct PropertyClient to authenticate before doing a getProperty()? My quick solution is to assume that all Object Indexes are the same on all devices (they probably are), so the Address Table sits at index 1 and so forth.

kgoderis commented 8 years ago

It is a bit strange that after a "TL 192.168.0.10:3671: disconnected from 1.1.11", there are still disconnects being issued from the TL

no, that's expected to happen. it would be wrong otherwise.

My mistake, I was mixing up TL and the other layers. However, TL is still sending disconnects after it received a c2 ack (at least, it shows up like that in the logs)

kgoderis commented 8 years ago

Btw, I tried to read out PID.IO_LIST on the Device Object Index, but it fails due to insufficient access rights. Probably due to the actor, but is there no way to instruct PropertyClient to authenticate before doing a getProperty()? My quick solution is to assume that all Object Indexes are the same on all devices (they probably are), so the Address Table sits at index 1 and so forth.

Ok - found `public RemotePropertyServiceAdapter(final KNXNetworkLink link, final IndividualAddress remote, final PropertyAdapterListener l, final byte[] authorizeKey) throws KNXException, InterruptedException` in the code

kgoderis commented 8 years ago

@bmalinowsky Are there any concurrency limitations on the classes and methods in Calimero? When I try to unleash my code on two different actors in parallel, and even when I secure the calls to Calimero by making the methods of the "communications hub" at my side synchronized, after a while, for example, calls to PropertyClient.getProperty() trigger KNXDisconnectException, even though the sequence of tpdus is correct, e.g. tpdu 80 executes well, so a connection is supposed to be established.

kgoderis commented 8 years ago

@bmalinowsky Just a few questions for you, as I presume you have seen many KNX actor implementations. It seems that not all actors adhere to the KNX specs as they should. I see some devices here in my network that do not implement the IO_LIST PID at all, yet expose Address tables. Also, I do not have a single device that implements Object Index 9 (normally mandatory), which contains the configuration of the Group Objects. Have you seen that as well in your network?

bmalinowsky commented 8 years ago

Both of those are not mandatory; think older devices.

bmalinowsky commented 8 years ago

Closing, as it derailed completely into off-topic.

kgoderis commented 8 years ago

@bmalinowsky Are there any concurrency limitations on the classes and methods in Calimero? When I try to unleash my code on two different actors in parallel, and even when I secure the calls to Calimero by making the methods of the "communications hub" at my side synchronized, after a while, for example, calls to PropertyClient.getProperty() trigger KNXDisconnectException, even though the sequence of tpdus is correct, e.g. tpdu 80 executes well, so a connection is supposed to be established.

fine, but this issue is a real one. Reverting to accessing the underlying ManagementClient directly resolves this issue. There is something going on in the upper (PropertyClient, ...) classes with respect to concurrency.