When publishing a file as a client, the entire file is first read into memory (RAM) and then offloaded into the network. If the file is larger than the available RAM, the program gets stuck; I suspect memory thrashing starts to happen.
I am able to reproduce the issue:

ndn-hydra version: 0.3.15

1. Create a 10 GB file.
2. Publish it with: ndn-hydra-client insert -r /hydra -f /10gb_file -p 10gb_file
After a few minutes the program stopped responding. I monitored the client node's RAM utilization, and it was almost full (the client was using 8 GB of memory). The good news is that I was able to fix this on the publish side by memory-mapping the file pages to the hard disk. However, the same problem occurs when fetching a file: if the file is larger than the RAM size, memory gets exhausted, and I have not been able to solve that case. A rough sketch of both ideas is below.
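For illustration, the memory-mapping idea looks roughly like this. This is a minimal sketch, not my actual patch or the real ndn-hydra-client internals; the helper names and the segment size are assumptions.

```python
import mmap

SEGMENT_SIZE = 8 * 1024  # bytes per segment (assumed value)

def iter_file_segments(path: str, segment_size: int = SEGMENT_SIZE):
    """Yield fixed-size chunks of a file via mmap, so only the pages
    currently being read need to stay resident in RAM."""
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for offset in range(0, len(mm), segment_size):
            yield mm[offset:offset + segment_size]  # copies one chunk at a time

def write_segments_to_disk(segments, out_path: str):
    """Write fetched segments to disk as they arrive instead of buffering
    the whole file (assumes in-order delivery; out-of-order segments would
    need a seek to segment_number * segment_size)."""
    with open(out_path, "wb") as out:
        for seg in segments:
            out.write(seg)
```

The second helper is the direction I imagine for the fetch side, but I have not managed to make it work inside the client yet.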
I hope the issue gets resolved. Happy to answer any further questions.