Problem
EdgeURID currently performs both aggregation and commp, and these two functional aspects of the system are resource-heavy processes.
To explain why they are resource-heavy:
Aggregation is a functional aspect of EdgeURID that groups small files into larger collections. It uses an abstraction called buckets to collect the files, aggregate them, and generate a single CAR for all of them.
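For illustration, a minimal sketch of the bucket idea: files accumulate until a size threshold is reached, then the whole batch is turned into one CAR. `Bucket`, `Add`, and `flush` are hypothetical names for this sketch, not EdgeURID's actual types:

```go
package main

import "fmt"

// Bucket collects small files until it is large enough to aggregate.
type Bucket struct {
	Files     []string // files queued for aggregation
	TotalSize int64
	MaxSize   int64 // flush threshold, e.g. the target CAR size
}

// Add queues a file; once the bucket is full it is aggregated into one CAR.
func (b *Bucket) Add(path string, size int64) {
	b.Files = append(b.Files, path)
	b.TotalSize += size
	if b.TotalSize >= b.MaxSize {
		b.flush()
	}
}

func (b *Bucket) flush() {
	// In the real system this step builds a DAG over the queued files
	// and serializes it into a single CAR for the whole collection.
	fmt.Printf("aggregating %d files (%d bytes) into one CAR\n",
		len(b.Files), b.TotalSize)
	b.Files, b.TotalSize = nil, 0
}

func main() {
	b := &Bucket{MaxSize: 4 << 30} // e.g. a 4GB target CAR
	b.Add("file-a.bin", 3<<30)
	b.Add("file-b.bin", 2<<30) // crosses the threshold, triggers a flush
}
```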
Commp is the process of generating piece information, the main unit of negotiation for data that users store on the Filecoin network. Generating a commp requires computing a proof that can consume significant RAM relative to the size of the CAR file.
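A sketch of what the commp computation looks like, assuming the go-fil-commp-hashhash library (whether EdgeURID uses this exact package is an assumption; the file name is illustrative):

```go
package main

import (
	"fmt"
	"io"
	"os"

	commp "github.com/filecoin-project/go-fil-commp-hashhash"
)

// pieceInfoForCAR streams a CAR file through the piece-commitment hash
// and prints the resulting commp and padded piece size.
func pieceInfoForCAR(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	cc := new(commp.Calc)
	if _, err := io.Copy(cc, f); err != nil { // hash the raw CAR bytes
		return err
	}
	rawCommP, paddedSize, err := cc.Digest()
	if err != nil {
		return err
	}
	fmt.Printf("commp=%x paddedPieceSize=%d\n", rawCommP, paddedSize)
	return nil
}

func main() {
	if err := pieceInfoForCAR("bucket.car"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```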
When too many aggregation and commp jobs run in parallel, EdgeURID demands more resources and, if it doesn't get them, terminates on its own (OOM).
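Within a single node, the constraint amounts to capping how many commp jobs may run at once. A minimal sketch of such a cap using a Go counting semaphore (`maxParallel` and `runCommp` are illustrative names):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const maxParallel = 5 // roughly what a 64GB host tolerates for 4-6GB CARs

func main() {
	sem := make(chan struct{}, maxParallel) // counting semaphore
	var wg sync.WaitGroup

	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func(job int) {
			defer wg.Done()
			sem <- struct{}{}        // blocks while maxParallel jobs run
			defer func() { <-sem }() // release the slot when done
			runCommp(job)
		}(i)
	}
	wg.Wait()
}

// runCommp stands in for the real piece-commitment computation.
func runCommp(job int) {
	fmt.Println("commp job", job)
	time.Sleep(100 * time.Millisecond)
}
```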
Stats/Metrics
A Linux host with 64GB of RAM can only accommodate about 5 parallel commp runs (at 4GB to 6GB CAR sizes).
*TBA
Solution
My proposal is to separate commp from the aggregator. We can create a COMMP node that can PULL CAR files from a given edge node, run the piece-commitment logic, and return the piece information.
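A minimal sketch of what such a COMMP node could look like as an HTTP service; the `/commp` route, the `car` query parameter, and the JSON response shape are all assumptions for illustration, not a settled API:

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"io"
	"log"
	"net/http"

	commp "github.com/filecoin-project/go-fil-commp-hashhash"
)

// pieceInfo is the payload the COMMP node returns to the edge node.
type pieceInfo struct {
	CommP           string `json:"commp"`
	PaddedPieceSize uint64 `json:"padded_piece_size"`
}

// handleCommp pulls a CAR from the URL given in ?car=..., streams it
// through the piece-commitment hash, and replies with the result.
func handleCommp(w http.ResponseWriter, r *http.Request) {
	carURL := r.URL.Query().Get("car") // e.g. an edge node's content endpoint
	resp, err := http.Get(carURL)      // PULL the CAR from the edge node
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	cc := new(commp.Calc)
	if _, err := io.Copy(cc, resp.Body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	raw, padded, err := cc.Digest()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	json.NewEncoder(w).Encode(pieceInfo{
		CommP:           hex.EncodeToString(raw),
		PaddedPieceSize: padded,
	})
}

func main() {
	http.HandleFunc("/commp", handleCommp)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The edge node would then attach the returned commp and padded piece size to its deal flow, keeping the memory-heavy hashing off the aggregation host.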