Have you set the UseLegacyBinaryFormat property to true?
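For example, a minimal sketch of what that looks like; everything except the UseLegacyBinaryFormat property itself is illustrative:

```csharp
using DotNext.Net.Cluster.Consensus.Raft;

// Illustrative: pass this options object to your PersistentState-derived
// state machine to keep the pre-5.4.0 on-disk WAL layout.
var configuration = new PersistentState.Options
{
    UseLegacyBinaryFormat = true
};
```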
Yes, it works when I set it back to the old mode. I always erase the previous data storage when I test. @sakno
Do you mean that it crashes on an empty WAL with the new format?
It happens sometimes with an empty WAL, and sometimes after some amount of time with an existing WAL. @sakno
Do you have a stable repro? I see that the second stack trace is from the tests in your repository.
It is a kind of random behavior, but once it starts happening it does not stop.
The first logs come from our production. The second stack trace comes from the dev environment, from one of the 3 nodes at startup. @sakno
It could happen if you are trying to open a WAL produced by a version < 5.4.0 with a new version >= 5.4.0 without UseLegacyBinaryFormat set to true. Are you sure that the dev environment starts clean, without older WAL files?
Yes, I am sure. My test store was completely erased, and the same for the updated production. @sakno
SlimFaas is compiled with AOT.
The second stack trace indicates that WAL is trying to read existing files:
at System.IO.RandomAccess.ValidateInput(SafeFileHandle, Int64, Boolean) + 0x5f
at DotNext.Net.Cluster.Consensus.Raft.PersistentState.Table.Initialize() + 0x2a2
at DotNext.Net.Cluster.Consensus.Raft.PersistentState.<.ctor>g__CreateTables|28_1(SortedSet`1, DirectoryInfo, Int32, Int32, PersistentState.BufferManager&, Int32, PersistentState.WriteMode, Int64) + 0x14f
Here is the code for Initialize:
https://github.com/dotnet/dotNext/blob/cacf3e573b460469786314428617c4ce43387194/src/cluster/DotNext.Net.Cluster/Net/Cluster/Consensus/Raft/PersistentState.Partition.cs#L507-L552
To get an exception like the one in your stack trace, the program needs to go into the second or third if branch. That is possible only if there is a file in the file system.
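For illustration, RandomAccess rejects a negative file offset up front, before any actual I/O, which is consistent with the ValidateInput frame in the stack trace; here is a hypothetical sketch (file name and offset are made up):

```csharp
using System.IO;
using Microsoft.Win32.SafeHandles;

using SafeFileHandle handle = File.OpenHandle("partition-file", FileMode.Open, FileAccess.Read);
var buffer = new byte[512];

// A negative fileOffset fails input validation and throws
// ArgumentOutOfRangeException, matching the ValidateInput frame above.
int bytesRead = RandomAccess.Read(handle, buffer, fileOffset: -1);
```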
I forgot the latest logs. @sakno I may have made a mistake in our dev Kubernetes environment.
Here are the logs my colleagues sent me from the crash in production. It occurs with the new protocol (after a random lapse of time, around 48 hours) and does not happen with the old one. I think it handles nearly 400,000 write operations per day. slimfaas-1-slimfaas.log slimfaas-2-slimfaas.log slimfaas-0-slimfaas.log
I do not know where the negative number can come from.
How is the WAL configured? How many records per partition, parallel I/O, etc.? What's the target architecture, x86_64?
The target architecture is x86_64. The other options, I do not know what they are. Here is the SlimData persistent state constructor: https://github.com/AxaFrance/SlimFaas/blob/2ca3a8c7589b87dcd560164d7ed643f8f17aa89b/src/SlimData/SlimPersistentState.cs#L19
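For context, the settings being asked about are the recordsPerPartition argument and the PersistentState.Options instance that SlimPersistentState passes to its base constructor. A rough sketch with made-up values (not SlimFaas's actual configuration):

```csharp
using DotNext.Net.Cluster.Consensus.Raft;

// Hypothetical values; the real ones are whatever SlimPersistentState
// passes to its base constructor.
var configuration = new PersistentState.Options
{
    MaxConcurrentReads = 3,        // degree of parallel read I/O
    UseLegacyBinaryFormat = false  // the new 5.4.0+ on-disk format
};
// recordsPerPartition is a separate base-constructor argument, e.g.:
// base(path, recordsPerPartition: 50, configuration)
```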
Thank you @sakno for your help
It's hard to say what the root cause of the problem is because there is no stable repro. I can only guess. Possibly it happens because of network timeouts leading to cancellation of the token used by the WAL internally to perform I/O. Some I/O was done in a way that is not safe for cancellation; I've prepared a potential fix. I can't release it right now.
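To illustrate the class of bug (a sketch of the failure mode, not dotNext's internal code): if the caller's token fires between two related writes, the file is left half-written, and a later read of that partition can then pick up a garbage length or offset:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Win32.SafeHandles;

internal static class WalSketch
{
    // Sketch only: two writes that must land together, sharing one token.
    public static async Task AppendAsync(SafeFileHandle handle,
        ReadOnlyMemory<byte> header, ReadOnlyMemory<byte> payload,
        long offset, CancellationToken token)
    {
        await RandomAccess.WriteAsync(handle, header, offset, token);
        // If 'token' is cancelled here (e.g., after a network timeout),
        // the header is on disk without its payload, corrupting the record.
        await RandomAccess.WriteAsync(handle, payload, offset + header.Length, token);
    }
}
```

A typical mitigation for this kind of bug is to finish a multi-part write under CancellationToken.None once it has started, so cancellation can only happen between records, not inside one.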
Did you have a chance to check the fix?
Hi @sakno, do you have a way to publish an alpha?
My level in C# is not the best 😜 It is my favorite language, but I do not code a lot with it (unfortunately).
You can reference the project explicitly from your csproj file without a published alpha.
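Something like this in the SlimFaas csproj, assuming a local clone of the dotNext repository sits next to it (the relative path is illustrative):

```xml
<ItemGroup>
  <!-- Illustrative path to a local clone of dotNext -->
  <ProjectReference Include="..\dotNext\src\cluster\DotNext.Net.Cluster\DotNext.Net.Cluster.csproj" />
</ItemGroup>
```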
Release 5.7.0 has been published.
Thank you @sakno, I will test it today and tell you if it fixes the problem.
Hi @sakno,
We have crashes in production that lock the nodes. The SlimFaas code did not change around this part; we only updated libraries: https://github.com/AxaFrance/SlimFaas/commit/b26e3bdb8a05bbd41140e24a34ebc9598f799661 I'm not sure, but I think it is linked to these changes:
DotNext.Net.Cluster 5.4.0
Changed binary file format for WAL for more efficient I/O. A new format is incompatible with all previous versions. To enable legacy format, set PersistentState.Options.UseLegacyBinaryFormat property to true.
Introduced a new experimental binary format for WAL based on sparse files. Can be enabled with PersistentState.Options.MaxLogEntrySize property.
We took the new default system, and we have this new error that happens sometimes and crashes the node 👍
or like this: