FreeRTOS / Lab-Project-FreeRTOS-LoRaWAN

Reference implementation of LoRaWAN connectivity on FreeRTOS.
MIT License

NRF52840DK with sx1262mb2xAS shield -- timeout using ADR while sending confirmed messages #6

Closed: wabryan closed this issue 2 years ago

wabryan commented 3 years ago

Hello, I've been trying to get LoRaWAN working on the NRF52840-DK platform with the sx1262mb2xAS shield. What I've observed is that the join is successful, then the first confirmed uplink is sent using SF10BW125 and is successfully received by the gateway, network server, and application server, and the confirmation is received by the sensor. Since I have ADR turned on, the device then sends the next message on SF7BW125, which is again seen by the gateway, network server, and application server. The confirmation is then sent to the gateway, and the gateway transmits it to the sensor, but the sensor misses it.

After quite a bit of trial and error and analysis of the SPI bus, I believe there is a problem in how the window timeout and offset are calculated in RegionCommonComputeRxWindowParameters(), but I don't understand the equations in that function well enough to say for sure. It calculates RxWindow1Delay and RxWindow2Delay (in ComputeRxWindowParameters()) to be 951 and 1961, respectively. If I just hardcode those values to 980 and 1980, respectively, ADR suddenly works for at least spreading factors 7-10.

This was reproduced in both:

1. An office with many gateways around (of different brands as well), using one NRF52840-DK + sx1262mb2xAS package.
2. A home office with a different, single gateway and a separate NRF52840-DK + sx1262mb2xAS package.

Any ideas on what might be causing this?

A few things to note: the gateways we are primarily using only listen on sub-band 1, so I have had to:

-- Change RegionUS915LinkAdrReq() to ignore any channel mask changes except for sub-band 1.
-- Change NvmCtx.ChannelsDefaultMask[] in RegionUS915InitDefaults() to be:

NvmCtx.ChannelsDefaultMask[0] = 0x00FF;
NvmCtx.ChannelsDefaultMask[1] = 0x0000;
NvmCtx.ChannelsDefaultMask[2] = 0x0000;
NvmCtx.ChannelsDefaultMask[3] = 0x0000;
NvmCtx.ChannelsDefaultMask[4] = 0x0001;
NvmCtx.ChannelsDefaultMask[5] = 0x0000;

-- Disable RegionUS915ApplyCFList() entirely.

Thank you!

ravibhagavandas commented 3 years ago

Hi,

Apologies for the late response.

The RX1 and RX2 delays of 951 and 1961 suggest that the calculated offset values are slightly off. Did you take a look at the window offset and timeout calculations as described in the LoRaMac porting guide here? http://stackforce.github.io/LoRaMac-doc/LoRaMac-doc-v4.4.5/_p_o_r_t_i_n_g__g_u_i_d_e.html#RXWINDOWS

As per the document, the offset is calculated from the midpoint of the possible early and late RX slots, in order to detect a minimum of 5 preamble symbols. The calculation depends on factors like the minimum number of RX symbols (DEFAULT_MIN_RX_SYMBOLS) and the system timing error tolerance (DEFAULT_SYSTEM_MAX_RX_ERROR). I see that the default error tolerance value is set to 50 via the project's lorawanConfigRX_MAX_TIMING_ERROR config. Could you try adjusting these parameters to see if the calculation improves?

wabryan commented 2 years ago

Thank you for the response. I'm working on developing a deeper understanding of the window calculations, but both the code comments and your response mention DEFAULT_SYSTEM_MAX_RX_ERROR and DEFAULT_MIN_RX_SYMBOLS, and neither is actually defined or used in the project, at least as far as I can tell. I'm a little confused on that point.

ravibhagavandas commented 2 years ago

Hello,

DEFAULT_SYSTEM_MAX_RX_ERROR and DEFAULT_MIN_RX_SYMBOLS are parameters defined in the LoraMAC stack.

The config lorawanConfigRX_MAX_TIMING_ERROR maps directly to DEFAULT_SYSTEM_MAX_RX_ERROR in the LoRaMac stack. If you adjust lorawanConfigRX_MAX_TIMING_ERROR, the FreeRTOS LoRaWAN API automatically configures this value as MIB_SYSTEM_MAX_RX_ERROR during initialization. DEFAULT_MIN_RX_SYMBOLS is not directly initialized through the FreeRTOS LoRaWAN APIs. Alternatively, you can call the LoRaMac stack API directly to initialize both parameters, as below:

 MibRequestConfirm_t mibReq = { 0 };
 LoRaMacStatus_t status = LORAMAC_STATUS_OK;

 /* Set the system maximum RX timing error tolerance. */
 mibReq.Type = MIB_SYSTEM_MAX_RX_ERROR;
 mibReq.Param.SystemMaxRxError = <value>;
 status = LoRaMacMibSetRequestConfirm( &mibReq );

 /* Set the minimum number of preamble symbols to detect.
  * Note: use Param.MinRxSymbols here, not Param.SystemMaxRxError. */
 mibReq.Type = MIB_MIN_RX_SYMBOLS;
 mibReq.Param.MinRxSymbols = <value>;
 status = LoRaMacMibSetRequestConfirm( &mibReq );

ravibhagavandas commented 2 years ago

Hi @wabryan

Were you able to apply the parameters as mentioned above? Are you still having issues?

wabryan commented 2 years ago

I found that lorawanConfigRX_MAX_TIMING_ERROR has to fall within a small window, roughly 10 ms wide, for the issue to go away. Is a window that small typical?

ravibhagavandas commented 2 years ago

Hi,

Good to know that the issue is resolved.

Yes, the 10 ms RX timing error is typical; in fact, it is the default configuration recommended by the LoRaMac stack: https://github.com/Lora-net/LoRaMac-node/blob/ba17382bd5109513937afad07f068a781a503ef6/src/mac/LoRaMac.h#L377

I will resolve this issue. Please feel free to open a new one if you have further questions.

Thanks.