Closed: reissjason closed this issue 3 years ago.
The data buffer of the SX1301 has a size of 1024 bytes. Each packet stored in this buffer has a 16-byte metadata overhead.
So for 4 end-points that means 1024 - (16*4) = 960 bytes available for payload, which allows at most 3 packets of 255 bytes.
So if you send 4 packets at the same time, I would expect you to receive 3 of them.
But from what you say, it seems you don't receive anything anymore?
Each call to lgw_reg_rb(LGW_RX_PACKET_DATA_FIFO_NUM_STORED, buff, 5); results in buff[0] == 0 until two of the end-devices stop sending.
In my test I would see only two 255-byte packets ever reported, never three.
For the buffer to be full, the messages would have to be sent at exactly the same time. If they differ by more than 10 ms (which I think is the polling period of the packet forwarder), there should be no problem.
Anyway, the problem here is that the FIFO reports no messages until you lower the traffic. I am experiencing something similar: I transmit from more than 10 devices concurrently on random frequencies and SFs (22 bytes/msg). My gateway pauses for several seconds (no Rx messages at all) during the process. Could it be related to the same cause? I am going to try to replicate your situation.
Could you please share your logs?
I already opened issue #77 under lora_gateway, but there has been no response so far.
@reissjason in order to try to reproduce your issue, I've configured my gateway with 4 channels on the same frequency and send 255-byte packets from one device.
I consistently receive 3 packets, as expected.
JSON up: {"rxpk":[{"tmst":133908676,"chan":6,"rfch":0,"freq":867.100000,"stat":1,"modu":"LORA","datr":"SF7BW125","codr":"4/5","lsnr":9.8,"rssi":-71,"size":255,"data":"QCAA/soAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"},{"tmst":133908675,"chan":3,"rfch":0,"freq":867.100000,"stat":1,"modu":"LORA","datr":"SF7BW125","codr":"4/5","lsnr":9.8,"rssi":-72,"size":255,"data":"QCAA/soAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"},{"tmst":133908684,"chan":5,"rfch":0,"freq":867.100000,"stat":1,"modu":"LORA","datr":"SF7BW125","codr":"4/5","lsnr":10.0,"rssi":-71,"size":255,"data":"QCAA/soAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}]}
Could you try the same on your side?
Regards
I have successfully reproduced the issue by incrementing and decrementing the traffic with 4 end devices. Result: with 4 simultaneous packets on the air, no more packets are received.
Test description: all transmissions at SF7, payload 255 bytes, period around 403 ms, ToA 400 ms.
The packet logger stops logging for more than 2 minutes when node 8 starts transmitting. It returns to normal only 30 seconds after node 8 has ended its transmissions. Here are the logs during the pause:
AAAAAAAAAAAAAAAA,"","2017-09-20 12:38:40.492Z",1306398619, 867700000,0, 3,"CRC_OK" ,255,"LORA",125000,"SF7" ,"4/5",-37, +9.8,"00000000-10050061-...-000000"
AAAAAAAAAAAAAAAA,"","2017-09-20 12:40:45.417Z",1431325755, 867700000,0, 3,"CRC_OK" ,255,"LORA",125000,"SF7" ,"4/5",-36,+10.0,"00000000-10050061-...-000000"
With similar configuration I am also able to receive 3 packets at 255 bytes. JSON up: { "rxpk": [{ "tmst": 72993972, "chan": 2, "rfch": 0, "freq": 868.300000, "stat": 1, "modu": "LORA", "datr": "SF8BW125", "codr": "4/5", "lsnr": 9.0, "rssi": -70, "size": 255, "data": "gEMu3wEAEwABWARNph/9hYgM+/KY272nC7QrkRVYbiy5o6zGS4KghcecxOrRTTje359dSV9a32Q+/2hBoczEGTXt7wHaYGbUah1yWgm5U293kPcj5IUZq4idKHi2WI5nRY55QLUPtKXQkkNpRtR8qY1GZ1GozJKwL9jaZTaW5Qfs8qwWslVVyVk6IyLFoMUYn8wq8kNz+PwzTATBEJ7A5zk8Z7yA0fHxMiLi5goV1L6W/fhLPXJqPRbTl8sXMP3haGE4RcAl+aGat7HBdtJdXY1YaEWLjeFIfeo3pf8bzYVr3KIwBUd/uVOfwgOnHSfPAmOb1wVGWhmLLmOKpNW9" }, { "tmst": 72993980, "chan": 1, "rfch": 0, "freq": 868.300000, "stat": 1, "modu": "LORA", "datr": "SF8BW125", "codr": "4/5", "lsnr": 9.5, "rssi": -68, "size": 255, "data": "gEMu3wEAEwABWARNph/9hYgM+/KY272nC7QrkRVYbiy5o6zGS4KghcecxOrRTTje359dSV9a32Q+/2hBoczEGTXt7wHaYGbUah1yWgm5U293kPcj5IUZq4idKHi2WI5nRY55QLUPtKXQkkNpRtR8qY1GZ1GozJKwL9jaZTaW5Qfs8qwWslVVyVk6IyLFoMUYn8wq8kNz+PwzTATBEJ7A5zk8Z7yA0fHxMiLi5goV1L6W/fhLPXJqPRbTl8sXMP3haGE4RcAl+aGat7HBdtJdXY1YaEWLjeFIfeo3pf8bzYVr3KIwBUd/uVOfwgOnHSfPAmOb1wVGWhmLLmOKpNW9" }, { "tmst": 72993980, "chan": 0, "rfch": 0, "freq": 868.300000, "stat": 1, "modu": "LORA", "datr": "SF8BW125", "codr": "4/5", "lsnr": 10.5, "rssi": -66, "size": 255, "data": "gEMu3wEAEwABWARNph/9hYgM+/KY272nC7QrkRVYbiy5o6zGS4KghcecxOrRTTje359dSV9a32Q+/2hBoczEGTXt7wHaYGbUah1yWgm5U293kPcj5IUZq4idKHi2WI5nRY55QLUPtKXQkkNpRtR8qY1GZ1GozJKwL9jaZTaW5Qfs8qwWslVVyVk6IyLFoMUYn8wq8kNz+PwzTATBEJ7A5zk8Z7yA0fHxMiLi5goV1L6W/fhLPXJqPRbTl8sXMP3haGE4RcAl+aGat7HBdtJdXY1YaEWLjeFIfeo3pf8bzYVr3KIwBUd/uVOfwgOnHSfPAmOb1wVGWhmLLmOKpNW9" }] }
In an effort to improve our customer support experience and in recognition that our support backlog on GitHub has historically exceeded the capacity of our engineering team, we have taken the difficult decision to focus on the most contemporary issues reported and to close all others without confirmation of resolution.
Our belief is that issues which have remained unresolved and unaltered for extended periods of time are less likely to continue to pose a significant problem to the user than when they were originally filed. More contemporary issues however may still be relevant and hence are more appropriate to prioritize.
For those users who remain interested in resolution of a reported issue that was closed, we are encouraging usage of our developer portal forums [https://forum.lora-developers.semtech.com/] and commercial support portal [https://semtech.force.com/ldp/ldp_support] as the preferred avenues to receive support. We will continue to monitor the GitHub issue trackers as well, but want to encourage all users to take advantage of the increased community presence on the developer portal. For commercial customers, we highly recommend using the commercial support portal, which is uniquely tailored to service such support requests.
How many 255 byte packets can be received at a single time?
It seems that when four end-points send max-size packets at SF7BW125, the concentrator stops reporting received packets. When a few end-points are turned off, packets are reported again. Could FIFO overflow be happening in this scenario?
Occasionally I receive this warning, with random numbers reported as the IF_CHAIN: lgw_receive:1140: WARNING: 76 NOT A VALID IF_CHAIN NUMBER, ABORTING