eclipse-leshan / leshan

Java Library for LWM2M
https://www.eclipse.org/leshan/
BSD 3-Clause "New" or "Revised" License

DTLS connection/sessions lost after 10 minutes #542

Closed bernhard-seifert closed 6 years ago

bernhard-seifert commented 6 years ago

I've been using snapshot version 1.0.0 for a while and also compared the behavior to 1.0.0-M8. My embedded system does the following:

  1. connects to the Leshan server
  2. performs the DTLS handshake
  3. registers the device

After that, the device sends one byte (\0) every minute. This is important because otherwise the GPRS NAT would drop the port mapping and the Leshan server would no longer be able to send data to the device. As these packets are not valid DTLS records, they are silently discarded. Note that intentionally only one byte is sent, because data is expensive (in particular while roaming) and adds up when transferred every minute.

The 1.0.0 snapshot also behaved like that, and when data had to be read from the device (pressing the "read" button in the Leshan HTML interface), the corresponding request was sent to the device. In version 1.0.0-M8 this behavior is different: as long as the server has received fewer than 10 keep-alive packets, the behavior is the same, but once more than 10 keep-alive packets have been received, the next time a "read" command is triggered the server does not send the request to the device but sends a "ClientHello" message instead. It seems the server believes the communication is broken (due to 10 "malformed" packets) and requests a new handshake. In that case it should send a "HelloRequest" message according to RFC 6347, page 23. Either way, this would require a new handshake, which in turn also consumes a large amount of data.

Can this behavior be changed so that malformed packets are just silently discarded, like before? If not, how would you propose building proper keep-alive packets? As I already pointed out, these are essential for GPRS NAT-based systems.
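For illustration, a minimal sketch of the keep-alive sender described above (host, port, and class name are placeholders; a real device would send from the same UDP socket as its DTLS session, so that the NAT binding being refreshed is the one the server actually uses):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

// Hypothetical NAT keep-alive: one NUL byte per minute toward the server.
// The payload is not a valid DTLS record, so the server should discard it.
public class NatKeepAlive {
    public static void main(String[] args) throws IOException, InterruptedException {
        InetSocketAddress server = new InetSocketAddress("leshan.example.org", 5684);
        byte[] keepAlive = { 0 };
        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                socket.send(new DatagramPacket(keepAlive, keepAlive.length, server));
                Thread.sleep(60_000); // once a minute, to keep the NAT route open
            }
        }
    }
}
```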

sbernard31 commented 6 years ago

Did you consider using LWM2M Queue Mode? It is designed for NAT environments. In the past we tried using keep-alives to maintain the NAT route, but AFAIK that was not a big success. Has it worked well for you until now?

Anyway, I don't see any recent changes which could explain this new behavior ... :thinking:

Could you tell us the commit linked to this snapshot release, or at least the Californium version used (as this is probably a change in Californium rather than in Leshan)?

> Can this behavior be changed so that malformed packets are just silently discarded, like before?

I think discarding should be the right behavior; otherwise, breaking an open DTLS session would be too easy via IP address spoofing.

I tried this on my side, sending an empty packet or a packet with only 1 byte (\0), and it works for me... I'm not able to reproduce this. Do you have any logs or a capture?

> In this case it should send a "HelloRequest" message according to RFC 6347, page 23.

RFC 5246 explains that HelloRequest is about renegotiation, which is not supported by Scandium, and renegotiation would not help in this case. So I don't think HelloRequest can be a solution.

About the fact that the LWM2M server can send a CLIENT_HELLO and thus act as a DTLS client: this is the expected behavior.

> how would you propose building proper keep-alive packets

I know there is a DTLS heartbeat extension. It is not implemented in Scandium, but we could consider adding it if this makes sense.

I also know there is a ping in CoAP, but it would be more expensive... (and I'm not sure it is really appropriate...)

bernhard-seifert commented 6 years ago

Queue mode is not really an option since the device needs to be online all the time, but because of the 3G modem it sits in a NAT environment. Tests have shown that (depending on the operator) a keep-alive packet once a minute is required to keep the connection.

I will check in the coming days which version causes this behavior; this could show when/what was changed, either in Leshan or Californium.

Attached is a Wireshark log file. After successful registration (packets 7 and 8), data is requested from the device (packets 9 and 10). Then keep-alive messages are sent by the device to the server every minute. Packets 16 and 17 are another successful read request, followed by more keep-alive messages. Packet 28 should be a read request, but instead the server sends a ClientHello. When the device does not respond to the ClientHello message, the server sends it again!

The log was recorded with "leshan-server-demo-1.0.0-M8-jar-with-dependencies.jar".

I also agree that the server should silently discard invalid DTLS packets to reduce the risk of DoS attacks.

Yes, DTLS heartbeat could be a solution, but if it is sent every minute, the amount of data increases significantly!

Server ClientHello.zip

sbernard31 commented 6 years ago

I tested again on my side and I'm still not able to reproduce this (sending 0-byte packets or 1-byte (\0) packets). Did you try sending a 0-length packet? The capture is very similar, but without the CLIENT_HELLO on the server side...

Reading/debugging the DTLSConnector class, I can't see anything that could explain this behavior...

At this point I would bet that the issue is not the keep-alive but something else. :thinking:

Just to be sure: the capture was recorded at the server, wasn't it? And you are testing with only one device?

Could you activate logs on the server side? Maybe we will see something... To do that, you can add a logback-config.xml file to your working directory.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d %p %C{0} - %m%n</pattern>
        </encoder>
    </appender>

    <root level="WARN">
        <appender-ref ref="STDOUT" />
    </root>

    <logger name="org.eclipse.leshan" level="INFO"/>
    <logger name="org.eclipse.leshan.server.security.SecurityCheck" level="DEBUG"/>
    <logger name="org.eclipse.leshan.core.model.LwM2mModel" level="TRACE"/>

    <!-- Everything above is the default config; the line below is to look for something in the DTLS stack -->
    <logger name="org.eclipse.californium.scandium" level="TRACE"/>
</configuration>
bernhard-seifert commented 6 years ago

I ran version 1.0.0-M8 with the XML file you attached on a Raspberry Pi. The server receives a single \0 byte ten times and discards it correctly (see the debug messages below). After receiving 10 such packets, I request a read via the web interface. It logs "DEBUG Connection - Handshake with [/212.95.5.252:32455] has been started". So this is where the "ClientHello" message comes from; the remaining question is why. Currently I cannot send 0 bytes since the modem does not allow empty UDP/TCP packets to be sent. I only have one device to test with the server at the moment, but since the Wireshark log (recorded on the server side via port mirroring) looks correct, it seems to be a server issue!

server logs

```
2018-07-25 20:35:31,630 DEBUG DTLSSession - Updated receive window with sequence number [2]: new upper boundary [63], new bit vector [111]
2018-07-25 20:36:31,660 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:36:31,662 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:37:31,895 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:37:31,897 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:38:40,195 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:38:40,196 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:39:32,302 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:39:32,303 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:40:32,449 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:40:32,450 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:41:32,621 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:41:32,623 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:42:32,799 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:42:32,801 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:43:33,022 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:43:33,024 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:44:33,104 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:44:33,105 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:45:33,237 DEBUG Record - Received truncated DTLS record(s). Discarding ...
2018-07-25 20:45:33,238 DEBUG DTLSConnector - Received 0 DTLS records using a 16474 byte datagram buffer
2018-07-25 20:45:44,372 DEBUG DTLSConnector - Sending application layer message to peer [/212.95.5.252:32455]
2018-07-25 20:45:44,385 DEBUG DTLSSession - Setting MTU for peer [/212.95.5.252:32455] to 1280 bytes
2018-07-25 20:45:44,388 DEBUG DTLSSession - Setting maximum fragment length for peer [/212.95.5.252:32455] to 1227 bytes
2018-07-25 20:45:44,397 DEBUG Connection - Handshake with [/212.95.5.252:32455] has been started
2018-07-25 20:45:44,412 TRACE Record - Encrypting record fragment using current write state
DTLSConnectionState:
    Cipher suite: TLS_NULL_WITH_NULL_NULL
    Compression method: NULL
    IV: null
    MAC key: null
    Encryption key: null
2018-07-25 20:45:44,421 TRACE DTLSConnector - Sending record of 107 bytes to peer [/212.95.5.252:32455]:
==[ DTLS Record ]==============================================
Content Type: Handshake (22)
Peer address: /212.95.5.252:32455
Version: 254, 253
Epoch: 0
Sequence Number: 0
Length: 94
Fragment:
    Handshake Protocol
    Type: CLIENT_HELLO (1)
    Peer: /212.95.5.252:32455
    Message Sequence No: 0
    Fragment Offset: 0
    Fragment Length: 82
    Length: 82
    Version: 254, 253
    Random:
        GMT Unix Time: Wed Jul 25 20:45:44 CEST 2018
        Random Bytes: 65 F9 46 F2 C6 F9 4D D9 4B CD 78 2E 08 35 B9 3A 5E C9 53 29 11 EA F9 E1 FC E3 CD 70
    Session ID Length: 0
    Cookie Length: 0
    Cipher Suites Length: 10
    Cipher Suites (5 suites)
        Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8
        Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
        Cipher Suite: TLS_PSK_WITH_AES_128_CCM_8
        Cipher Suite: TLS_PSK_WITH_AES_128_CBC_SHA256
        Cipher Suite: TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256
    Compression Methods Length: 1
    Compression Methods (1 method)
        Compression Method: NULL
    Extensions Length: 30
    Extension: elliptic_curves (10)
        Length: 8
        Elliptic Curves Length: 6
        Elliptic Curves (3 curves):
            Elliptic Curve: secp256r1 (23)
            Elliptic Curve: secp384r1 (24)
            Elliptic Curve: secp521r1 (25)
    Extension: ec_point_formats (11)
        Length: 2
        EC point formats length: 1
        Elliptic Curves Point Formats (1):
            EC point format: uncompressed (0)
    Extension: client_certificate_type (19)
        Client certificate type: RAW_PUBLIC_KEY
    Extension: server_certificate_type (20)
        Server certificate type: RAW_PUBLIC_KEY
===============================================================
2018-07-25 20:45:44,447 DEBUG DTLSConnector - Sending flight of 1 message(s) to peer [/212.95.5.252:32455] using 1 datagram(s) of max. 1280 bytes
2018-07-25 20:45:45,422 DEBUG DTLSConnector - Re-transmitting flight for [/212.95.5.252:32455], [3] retransmissions left
[identical CLIENT_HELLO record re-sent, Sequence Number 1]
2018-07-25 20:45:45,440 DEBUG DTLSConnector - Sending flight of 1 message(s) to peer [/212.95.5.252:32455] using 1 datagram(s) of max. 1280 bytes
2018-07-25 20:45:47,444 DEBUG DTLSConnector - Re-transmitting flight for [/212.95.5.252:32455], [2] retransmissions left
[identical CLIENT_HELLO record re-sent, Sequence Number 2]
2018-07-25 20:45:47,466 DEBUG DTLSConnector - Sending flight of 1 message(s) to peer [/212.95.5.252:32455] using 1 datagram(s) of max. 1280 bytes
2018-07-25 20:45:49,394 WARN ClientServlet - Request /api/clients/urn:imei:359785020877737/1/0 timed out.
2018-07-25 20:45:51,470 DEBUG DTLSConnector - Re-transmitting flight for [/212.95.5.252:32455], [1] retransmissions left
[identical CLIENT_HELLO record re-sent, Sequence Number 3]
2018-07-25 20:45:51,494 DEBUG DTLSConnector - Sending flight of 1 message(s) to peer [/212.95.5.252:32455] using 1 datagram(s) of max. 1280 bytes
2018-07-25 20:45:59,498 DEBUG DTLSConnector - Re-transmitting flight for [/212.95.5.252:32455], [0] retransmissions left
[identical CLIENT_HELLO record re-sent, Sequence Number 4]
```

bernhard-seifert commented 6 years ago

In version 1.0.0-M2 the behavior is normal.
In version 1.0.0-M4 the behavior is normal.
In version 1.0.0-M5 the behavior is normal.
Version 1.0.0-M6 sends the "ClientHello" message.

sbernard31 commented 6 years ago

I retried to reproduce it on my side and am still not able to...

Leshan 1.0.0-M5 uses Californium 2.0.0-M6.
Leshan 1.0.0-M6 uses Californium 2.0.0-M9.

I took a look at the code to see if I could find something suspicious... I saw nothing...

If the server tries to send a CLIENT_HELLO, that means the connection/session was lost or cannot be retrieved. I do not understand what could cause that in your case.

Maybe @boaks would have an idea ?

boaks commented 6 years ago

Maybe, the "DtlsConnectorConfig.Builder.setAutoResumptionTimeoutMillis" is used?
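For context, a minimal sketch of where that option sits on the Scandium builder (assuming the 2.0.0-M9 Builder API; the address, PSK credentials, and the 30 s value are illustrative and not taken from Leshan's code):

```java
import java.net.InetSocketAddress;

import org.eclipse.californium.scandium.config.DtlsConnectorConfig;
import org.eclipse.californium.scandium.dtls.pskstore.StaticPskStore;

public class AutoResumptionSketch {
    public static DtlsConnectorConfig build() {
        // Illustrative only: with auto resumption enabled, the connector
        // starts a resumption handshake for connections that have been idle
        // longer than the configured timeout.
        DtlsConnectorConfig.Builder builder = new DtlsConnectorConfig.Builder();
        builder.setAddress(new InetSocketAddress(0));
        builder.setPskStore(new StaticPskStore("identity", "secret".getBytes()));
        builder.setAutoResumptionTimeoutMillis(30_000L); // 30 s, illustrative
        return builder.build();
    }
}
```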

sbernard31 commented 6 years ago

I don't think we use it in Leshan. (I suppose the default is no auto resumption?)

If autoResumption were involved, we should see a SESSIONID in the Client_Hello, right?

boaks commented 6 years ago

OK, the CLIENT_HELLO is not a resumption, so the AutoResumptionTimeout is not the cause. Which value is used for MAX_PEER_INACTIVITY_PERIOD (or StaleConnectionThreshold)?

boaks commented 6 years ago

The default would be

DEFAULT_MAX_PEER_INACTIVITY_PERIOD = 10 * 60; // 10 minutes

bernhard-seifert commented 6 years ago

It seems that the cause of these ClientHello messages is the 10-minute timeout and not the 10 malformed keep-alive packets!

bernhard-seifert commented 6 years ago

There must be a change in behavior between M5 and M6! It does not make sense to set the inactivity period lower than the lifetime (which is specified by the client during the registration process).

boaks commented 6 years ago

If the Leshan demo server still uses a Californium.properties file, you can easily adjust MAX_PEER_INACTIVITY_PERIOD to a proper value. This should help as a short-term workaround.
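For example (the value below is illustrative; the unit is seconds, matching the default shown above):

```
# Californium.properties (excerpt)
MAX_PEER_INACTIVITY_PERIOD=86400
```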

sbernard31 commented 6 years ago

Registration lifetime is dynamic and different for each device; DEFAULT_MAX_PEER_INACTIVITY_PERIOD is set once for all DTLS connections.

Registration lifetime and DTLS connection lifetime are not strictly linked.

I must confess that I totally misunderstood the way this is used in Californium. I thought a DTLS connection was only removed if the store was full and the peer inactivity period was reached...

boaks commented 6 years ago

> I must confess that I totally misunderstood the way this is used in Californium. I thought a DTLS connection was only removed if the store was full and the peer inactivity period was reached...

8 months ago, I "fixed" the behaviour of get() according to the documentation:

"If the cache contains the key but the value is stale the entry is removed from the cache."

Fix "stale" check in get().

Add test for get() of expired cache entry. Use nano time to decouple from system time changes. Adjust BLOCKWISE_STATUS_LIFETIME from 1500 to 1000, because 1500 will be truncated to 1000 and the difference causes a failure. Increase the sleep time to ensure, 1s is really elapsed.

abd27de9a27ceba5483d7398abfd0364be102a19
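In other words, since that change a stale entry is evicted the moment it is read, not only when the store is full. A simplified, hypothetical sketch of that get() contract (not the actual Californium code; like the commit, it uses nano time to decouple from system time changes):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the documented contract: "If the cache contains
// the key but the value is stale the entry is removed from the cache."
class StaleEvictingCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long lastUpdateNanos;

        Entry(V value, long lastUpdateNanos) {
            this.value = value;
            this.lastUpdateNanos = lastUpdateNanos;
        }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long thresholdNanos;

    StaleEvictingCache(long thresholdSeconds) {
        this.thresholdNanos = thresholdSeconds * 1_000_000_000L;
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, System.nanoTime()));
    }

    V get(K key) {
        Entry<V> entry = map.get(key);
        if (entry == null) {
            return null;
        }
        // Evict on read: this is why a DTLS connection disappears after
        // MAX_PEER_INACTIVITY_PERIOD even though the store is not full.
        if (System.nanoTime() - entry.lastUpdateNanos > thresholdNanos) {
            map.remove(key);
            return null;
        }
        return entry.value;
    }
}
```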

boaks commented 6 years ago

        // Create CoAP Config
        NetworkConfig coapConfig;
        File configFile = new File(NetworkConfig.DEFAULT_FILE_NAME);
        if (configFile.isFile()) {
            coapConfig = new NetworkConfig();
            coapConfig.load(configFile);
        } else {
            coapConfig = LeshanServerBuilder.createDefaultNetworkConfig();
            coapConfig.store(configFile);
        }

Consider using the NetworkConfigDefaultHandler:

NetworkConfig config = NetworkConfig.createWithFile(CONFIG_FILE, CONFIG_HEADER, DEFAULTS);

To set the inactive period to a more proper default value.

sbernard31 commented 6 years ago

@bernhard-seifert, thanks a lot for reporting this!

We reverted to the previous behavior in Californium (see https://github.com/eclipse/californium/pull/709), so this will be available in Leshan as soon as we integrate the next Californium version.

In the meantime, for leshan-server-demo, you can use a higher value for MAX_PEER_INACTIVITY_PERIOD in Californium.properties.

If you are using Leshan as a library, you can change the coapConfig to use a higher value for MAX_PEER_INACTIVITY_PERIOD too:

coapConfig = LeshanServerBuilder.createDefaultNetworkConfig();
coapConfig.setLong(Keys.MAX_PEER_INACTIVITY_PERIOD, yourHigherValue);

LeshanServerBuilder builder = new LeshanServerBuilder();
builder.setCoapConfig(coapConfig);

I'll leave this open until we integrate the new Californium version.

bernhard-seifert commented 6 years ago

Thanks for the note. Once you have integrated the new Californium version, I'll test it carefully!

sbernard31 commented 6 years ago

#567 should fix this issue. @bernhard-seifert, could you retest with it?

bernhard-seifert commented 6 years ago

I compiled leshan-server-demo and tested it. I waited more than 15 minutes: no strange behavior or disconnects! Each minute a keep-alive packet is sent to the server to keep the NAT open; these packets are silently ignored by the server. After more than 15 minutes, reading from the device is still possible!


sbernard31 commented 6 years ago

So I can integrate #567 and close this bug, right ?

bernhard-seifert commented 6 years ago

Yes, for me the bug is fixed with your update! Thanks for the fast fix!

sbernard31 commented 5 years ago

I integrated the fix in master (#567)

Thanks a lot for reporting this and double-checking that it works now!