Description

Earlier I documented how to set up and use a RAM drive to hold the CSV test-result data.
However, when you are using an SQL-driven backend, you cannot do this as easily. The most reliable approach is a RAM-driven database such as Meilisearch (not Elasticsearch, which is too heavy for most systems) or Redis, as they only take up the space that is actually needed.

Current load over hours, after restarting @pyfunceble following a broken connection to MariaDB:

Cost: ![image](https://github.com/funilrys/PyFunceble/assets/44526987/e548904c-cca7-433b-a757-c32bdb6c3d3f)

Possible Solution

By using a NoSQL backend we can:

1. Read the full DB into the middleman (~550,000 records is about ~35 MB).
   - This will also speed up `--continue` and the initial read-in from source to first test.
2. For every X test results, keep retrying to write those results to the SQL backend until the write succeeds.
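The retry step could be sketched roughly as follows. This is only an illustration, not PyFunceble's actual code: `BufferedWriter`, the `results` table, and the use of `sqlite3` as a stand-in for the MariaDB backend are all assumptions made for the sketch.

```python
import sqlite3
import time


class BufferedWriter:
    """Buffer test results in memory and flush them to the SQL backend
    in batches of `batch_size` (the "X" above), retrying a failed write
    instead of losing the results when the connection breaks."""

    def __init__(self, con, batch_size=100, retry_delay=0.1, max_retries=50):
        self.con = con                  # SQL connection (stand-in: sqlite3)
        self.batch_size = batch_size
        self.retry_delay = retry_delay
        self.max_retries = max_retries
        self.buffer = []

    def add(self, subject, status):
        """Queue one test result; flush once the batch is full."""
        self.buffer.append((subject, status))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Try to commit the buffered results; keep them on failure."""
        for _ in range(self.max_retries):
            try:
                # The connection context manager commits on success
                # and rolls back on error.
                with self.con:
                    self.con.executemany(
                        "INSERT INTO results (subject, status) VALUES (?, ?)",
                        self.buffer,
                    )
                self.buffer.clear()  # only cleared after a successful commit
                return True
            except sqlite3.OperationalError:
                time.sleep(self.retry_delay)  # backend unreachable: retry
        return False  # give up after max_retries; results stay buffered
```

Because the buffer is only cleared after a successful commit, a broken MariaDB connection would no longer cost the in-flight results; the testing loop keeps running and the next flush retries the write.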
Considered Alternative
Not for now.
Additional context
Nope, be happy