Closed WweiL closed 2 months ago
I'll defer to @michaelsutton for specifics, but generally (from a performance-engineering standpoint) you do want the data in the cache to be available while the insert (and serialization) take place. This is a heavily threaded environment, so the cache might get multiple hits while the database is doing its thing. Also, a database error is unacceptable; such a DB error should cause a complete daemon teardown. (Recovery from a halt is a separate subject.)
Once again, this is not something I work on but my brief peek and understanding of what this does is as I described. I might be wrong tho. :)
What you are saying is true for a typical server that uses a typical database to provide a service: it can encounter errors, perhaps recover from them, and typically needs to maintain DB-cache consistency. In the case of a p2p node, data integrity should be maintained by panicking or halting. At the same time, in an ad-hoc p2p environment, relaying data or serving it from the in-memory cache has no ill side effects.
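To make the cache-first argument concrete, here is a minimal sketch (hypothetical types, not the actual kaspad code) of the ordering described above: the value goes into the cache first, so concurrent readers can hit it while the slower store write proceeds, and a store error is treated as fatal rather than recoverable.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a fallible persistent store.
trait Store {
    fn insert(&mut self, key: u64, value: String) -> Result<(), String>;
}

struct MemStore {
    map: HashMap<u64, String>,
}

impl Store for MemStore {
    fn insert(&mut self, key: u64, value: String) -> Result<(), String> {
        self.map.insert(key, value);
        Ok(())
    }
}

// Cache-first insert: the value is visible to readers of the cache
// immediately, while the (possibly slow) store write proceeds. A store
// error is unacceptable, so it tears the process down via a panic
// instead of leaving the on-disk state unknown.
struct CachedStore<S: Store> {
    cache: HashMap<u64, String>,
    store: S,
}

impl<S: Store> CachedStore<S> {
    fn insert(&mut self, key: u64, value: String) {
        // 1. Cache first: readers can already see the value.
        self.cache.insert(key, value.clone());
        // 2. Then persist; a DB error means full daemon teardown.
        self.store
            .insert(key, value)
            .expect("db error is unacceptable: tear down the daemon");
    }

    fn get(&self, key: u64) -> Option<&String> {
        self.cache.get(&key)
    }
}
```

In the real node the cache would of course be behind a lock or a concurrent map; the point of the sketch is only the write ordering plus panic-on-error policy.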
Also, please take a look at the tests, as the code in your PR doesn't build. I would recommend running /check in the root of the repository, followed by /test, before submitting the PR.
@aspect I see, what you said totally makes sense to me. Maybe I shouldn't be too eager to submit PRs without a holistic understanding. I'll take a look at the error tonight.
Feel free to jump into Discord on #development and ask any questions.
Hi : )
I'm new to this project and am slowly reading through the code to see if there are contributions I can make.
In general it's good practice to add the data to the permanent store before storing it in the cache. If the insertion throws an error and that error is caught somewhere, you'd end up with an inconsistency between the cache and the store.
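As a rough sketch of that store-first ordering (hypothetical names, not code from this repo): the cache is only updated after the persistent write succeeds, so an error that gets caught upstream cannot leave the cache holding data the store never received.

```rust
use std::collections::HashMap;

// Store-first ordering: write to the permanent store first and update the
// cache only on success. A `fail_store` flag simulates a database write
// error so the invariant can be exercised.
struct StoreFirst {
    store: HashMap<u64, String>, // stands in for the real database
    cache: HashMap<u64, String>,
    fail_store: bool, // simulate a database write error
}

impl StoreFirst {
    fn insert(&mut self, key: u64, value: String) -> Result<(), String> {
        if self.fail_store {
            // The error propagates; the cache has not been touched,
            // so cache and store remain consistent.
            return Err("db write failed".into());
        }
        self.store.insert(key, value.clone());
        // Only reached after the store write succeeded.
        self.cache.insert(key, value);
        Ok(())
    }
}
```

The trade-off against the cache-first approach discussed above is latency: concurrent readers cannot see the new value until the store write has completed.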