robojumper opened this issue 5 years ago
Hi there. Any update on this? I am also trying to write my own version of Darkest Dungeon Save Editor, and I ran into this issue with `persist.progression.json`. It looked something like the following, where `slay_a_squiffy_with_jester` gets mentioned twice as a field. Is it safe to ignore the duplicated data?
```json
{
  "__revision_dont_touch": 1683488768,
  "base_root": {
    "version": 2,
    "dungeon": {
      ...
    },
    "completed_plot_quests_data": {
      ...
    },
    ...
    "achievements": {
      ...
    },
    "real_achievements": {
      ...
      "slay_a_squiffy_with_jester": {
        "rtti": 1935132924,
        "id": "slay_a_squiffy_with_jester",
        "completed": false,
        "awarded": false,
        "conditions": {
          "0": {
            "enemies_killed": 0
          },
          "1": {
            "enemies_killed": 0
          }
        }
      },
      ...
    },
    "infestation": {
      ...
    },
    "flashback_completion_counts": {
      ...
    }
  }
}
```
By the way, thanks for the awesome tool and documentation!
I can't give you an authoritative answer on how to deal with these. I'm not aware of any issues caused by dropping these duplicates, but the scope of this project was the binary encoding of the save data, not the actual semantics of the data, so there might be lots of issues I don't know about.
I'm glad you're finding this project useful though!
https://github.com/robojumper/DarkestDungeonSaveEditor/blob/50c28e9058a18896235fa9c1bc310f70c64a2652/src/main/java/de/robojumper/ddsavereader/file/DsonFile.java#L359-L390
https://github.com/robojumper/DarkestDungeonSaveEditor/blob/50c28e9058a18896235fa9c1bc310f70c64a2652/src/test/java/de/robojumper/ddsavereader/file/ConverterTests.java#L82-L87
The decoder eats duplicated fields.
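In case it helps, here's a minimal sketch of what "eating" duplicates during decoding could look like. The keep-first policy, class name, and method name here are illustrative assumptions, not the project's actual implementation (see the linked `DsonFile.java` lines for the real behavior):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: keep the first occurrence of a field and silently
// drop later duplicates while building the decoded JSON object.
public class DuplicateFieldPolicy {
    // putIfAbsent leaves an existing entry untouched, so the first
    // "slay_a_squiffy_with_jester" wins and the second is discarded.
    public static void putKeepFirst(Map<String, Object> obj,
                                    String name, Object value) {
        obj.putIfAbsent(name, value);
    }

    public static void main(String[] args) {
        Map<String, Object> achievements = new LinkedHashMap<>();
        putKeepFirst(achievements, "slay_a_squiffy_with_jester", "first");
        putKeepFirst(achievements, "slay_a_squiffy_with_jester", "second");
        System.out.println(achievements.get("slay_a_squiffy_with_jester")); // prints "first"
        System.out.println(achievements.size()); // prints "1"
    }
}
```

A last-wins policy (plain `put`) would be just as easy; which one matters only if the two copies ever carry different data.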
This was an issue before; I simply excluded the failing file name from the byte-size equality tests. In #9, a new duplicated field turned up; excluding that entire file name would have regressed test coverage. Files therefore now have a method that can be used to check whether discrepancies between the original file size and the re-encoded file size are expected. This is still not ideal.
Here's where the "Fun:" part comes in: one could find a reasonable upper bound for allowed differences, as in "we know field X is duplicated with a name of length Y and a data size of Z, so the file is allowed to have up to `12 + 16 + Y + Z + 3` more bytes" (12 + 16 for the header, 3 for alignment). One could drive the point even further and derive a number of allowed differing bytes for files without duplicated fields: assuming that one or two bytes per `Meta2` block have garbage bits and that the header has a bunch of garbage, we could have a more fine-grained test.
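The upper bound for one duplicated field could be sketched like this. The constant breakdown follows the formula above; the class and method names are made up for illustration:

```java
// Hypothetical sketch of the bound described above: a file with one known
// duplicated field (name length Y, data size Z) may legitimately re-encode
// up to 12 + 16 + Y + Z + 3 bytes smaller than the original.
public class SizeSlack {
    static final int HEADER_BYTES = 12 + 16;       // per-field header overhead
    static final int MAX_ALIGNMENT_PADDING = 3;    // worst case for 4-byte alignment

    /** Maximum extra bytes the original file is allowed to have. */
    public static int allowedExtraBytes(int nameLength, int dataSize) {
        return HEADER_BYTES + nameLength + dataSize + MAX_ALIGNMENT_PADDING;
    }

    public static void main(String[] args) {
        // "slay_a_squiffy_with_jester" is 26 characters; assume 4 data bytes.
        System.out.println(allowedExtraBytes(26, 4)); // prints 61
    }
}
```

A test could then assert `0 <= originalSize - reencodedSize <= allowedExtraBytes(Y, Z)` instead of skipping the file entirely.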