gotthardp / lorawan-server

Compact server for private LoRaWAN networks
https://gotthardp.github.io/lorawan-server
MIT License

Control Mnesia database size #601

Open rkdm91 opened 5 years ago

rkdm91 commented 5 years ago

Hello,

I would like to put the database in RAM; for that I need to control the size of the Mnesia database. I suppose the database size depends on system parameters (number of devices, gateways, profiles, etc.), which can be controlled, and also on the uplinks/downlinks and events that get saved.

Since my uplinks and events are forwarded to my own server, I don't need them stored on the lorawan-server. I would like to limit the number of uplinks/downlinks and events saved in the database (missing some uplinks/events is not a problem for me). Do you think that is possible? I know that we can configure:

% amount of rxframes retained for each device/node
{retained_rxframes, 50},

from the sys.config file, but more than 50 uplinks are visible on my lorawan-server, so I don't understand the purpose of this parameter.
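For context, the option sits among the lorawan-server application settings in sys.config. A minimal sketch of where it lives (the `lorawan_server` application name and the sibling `event_lifetime` key are assumptions based on this thread, not the full default file):

```erlang
%% releases/<version>/sys.config (fragment; surrounding keys are assumptions)
[{lorawan_server, [
    %% amount of rxframes retained for each device/node
    {retained_rxframes, 50},
    %% how long (seconds) events stay visible before being discarded
    {event_lifetime, 3600}
]}].
```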

Thank you in advance

gotthardp commented 5 years ago

Hello. You should have no more than 50 frames for each device. If you have 100 different devices, you will have no more than 5000 stored frames. You should be able to decrease this to 0, but it will impact the timeline and RX quality graphs for your devices. (You will not see anything there.)
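The bound described above is simply the per-device limit multiplied by the device count; a quick illustrative sanity check (numbers taken from the comment, not from a real deployment):

```python
# Upper bound on stored rxframes: retained_rxframes applies per device/node.
retained_rxframes = 50   # default value from sys.config
devices = 100            # example fleet size from the comment above

max_stored_frames = retained_rxframes * devices
print(max_stored_frames)  # 5000
```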

rkdm91 commented 5 years ago

Hello, I can confirm that I have only one device commissioned, yet more than 50 uplink frames are saved:

[screenshots showing more than 50 stored frames]

gotthardp commented 5 years ago

And did you change the retained_rxframes setting, or not?

rkdm91 commented 5 years ago

I didn't change the parameter. If I am correct, the default setting is:

% amount of rxframes retained for each device/node
{retained_rxframes, 50}

which is the case in my /lorawan-server/releases/0.6.2/sys.config file

hallard commented 5 years ago

I confirm the issue.

My config is as follows:

    % amount of rxframes retained for each device/node
    {retained_rxframes, 10},
    % events duration on interface
    {event_lifetime, 3600},

It looks like I got 148 frames (from the same device). Since I have plenty of devices on the MultiTech GW, it really slows down the server. Of course, both the service and the GW have been restarted.

[screenshot showing 148 stored frames]

gotthardp commented 5 years ago

The trimming works for me. Please note that:

@hallard, could you please double-check that you really have more than 100 frames that are older than 1 hour?
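One possible explanation, judging from the `{trim_interval,1800}` entry in the debug log further down: trimming appears to run on a timer rather than on every received frame, so the stored count can temporarily exceed retained_rxframes between trim runs. A sys.config sketch of the trimming-related settings (the values and the "runs on a timer" interpretation are assumptions, not confirmed semantics):

```erlang
%% sys.config fragment; semantics of trim_interval are an assumption
[{lorawan_server, [
    {retained_rxframes, 10},   %% per-device cap, enforced at each trim run
    {trim_interval, 1800},     %% seconds between trim runs
    {event_lifetime, 1800}     %% seconds before events are discarded
]}].
```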

hallard commented 5 years ago

@gotthardp My setup retains 10 frames, not 100. Did I miss something?

gotthardp commented 5 years ago

@hallard How old are the frames, and for how long has the server been running?

hallard commented 5 years ago

The server is rebooted every day, and I also purged everything, so I can't answer.

Anyway, I recompiled and installed, set up your debug config, and everything looks good in the config; I'll need to check again after some time.

mtcdt:~$ cat /var/log/lorawan-server/debug.log | grep Using
2019-03-27 17:35:13.238 [debug] <0.313.0>@lorawan_app:start:17 Using config: [{retained_rxframes,10},{http_admin_redirect_ssl,true},{gateway_delay,200},{deduplication_delay,200},{trim_interval,1800},{event_lifetime,1800},{packet_forwarder_listen,[{port,1680}]},{ssl_options,[]},{websocket_timeout,infinity},{http_custom_web,[{"/",file,<<"/home/mtadm/html/index.html">>,[{<<"anonymous">>,'*'}]},{"/[...]",dir,<<"/home/mtadm/html">>,[{<<"anonymous">>,'*'}]}]},{devstat_gap,{432000,96}},{http_admin_credentials,{<<"admin">>,<<"admin">>}},{slack_server,{"slack.com",443}},{http_extra_headers,#{}},{connectors,[{lorawan_connector_amqp,[<<"amqp">>,<<"amqps">>]},{lorawan_connector_mqtt,[<<"mqtt">>,<<"mqtts">>]},{lorawan_connector_http,[<<"http">>,<<"https">>]},{lorawan_connector_mongodb,[<<"mongodb">>]},{lorawan_connector_ws,[<<"ws">>]}]},{frames_before_adr,50},{http_admin_listen_ssl,[{port,8443},{certfile,"cert.pem"},{cacertfile,"cacert.pem"},{keyfile,"key.pem"}]},{http_admin_path,<<"/admin">>},{max_lost_after_reset,10},{applications,[{<<"semtech-mote">>,lorawan_application_semtech_mote}]},{http_admin_listen,[{port,80}]},{server_stats_interval,60}]
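To see how many frames Mnesia actually holds at any moment, one could attach to the running node and query the table size. `mnesia:table_info/2` is standard OTP, but the node name, cookie, and the `rxframes` table name used here are assumptions:

```erlang
%% From a remote shell attached to the server node, e.g.:
%%   erl -sname debug -remsh lorawan@localhost -setcookie <cookie>
%% (node name and cookie are assumptions)

mnesia:table_info(rxframes, size).   %% number of records currently in the table
mnesia:table_info(rxframes, memory). %% memory used by the table, in words
```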