Could this be an indexing bug within TDB2?
Highly unlikely. (It is much more likely that the host environment is reporting sizes inconsistently, which happens.)
Answered on https://lists.apache.org/thread/jxcfhkly7781k8hnw2qdy09fbj3xych8
The solution is to run compaction occasionally; then your files are 3.5GB to 4GB.
All the indexes contain the same information, in a different order. The size variation is down to how the B+trees split.
An external process interfering with the files is a more likely cause. TDB file locking cannot guarantee that some other process on the host has not touched the files.
Should it be solved by upgrading to Jena 4.7.0? Asking the same question (and not incorporating the answers) will not help you.
4.7.0 wouldn't change the growth situation - it does make compaction in a live server more reliable.
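For reference, compaction can be triggered on a live server through the Fuseki admin endpoint, or offline with the TDB2 command-line tool. A minimal sketch; the dataset name ("ds") and database path are placeholders:

```bash
# Compact a dataset on a running Fuseki server via the admin endpoint
# ("ds" is a placeholder dataset name).
curl -X POST 'http://localhost:3030/$/compact/ds'

# Or compact the database directly while the server is stopped,
# using the TDB2 command-line tool (path is a placeholder).
tdb2.tdbcompact --loc=/fuseki/databases/ds
```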
@afs should we move this to a discussion?
Whatever. I don't have anything to add.
I hope the OP's expectation is not that there is some support team to respond to users.
Dear @afs and @kinow,
The intention of creating this issue (after posting on StackOverflow and the Mailing List) was to make the documentation of this case as accessible as possible, in case someone else has the same issue or a different perspective on why it occurred and how it could be solved.
That is why I also added the StackOverflow and Mailing List links right at the beginning, so anyone looking at this could have the full picture of what was discussed.
The hypothesis of the index being corrupted by an external process could be true, if someone attached the same volume to another container for backup purposes, for example. I'll try to investigate whether that occurred in this particular case.
Nonetheless, if there were some other possible cause for the OSPG.dat growth, such as a particular triple update pattern, we could investigate ways to change our system to avoid it.
My apologies if this issue came across as flooding the channels or implied that we were expecting some kind of support. All we seek is shared understanding and finding the best solution.
Many thanks for your help!
@eltonfss The reports so far haven't described your usage.
This discussion seems to have concluded. The advice on the email was to run a compaction.
Version
4.4.0
Question
This question has also been published at:
Scenario Description (Context)
I'm running Jena Fuseki Version 4.4.0 as a container on an OpenShift Cluster.
Hardware Info (from Jena Fuseki initialization log):
Disk Info (df -h):
My dataset is built using TDB2 and currently has the following RDF stats:
· Triples: 65KK (approximately 65 million)
· Subjects: ~20KK (approximately 20 million)
· Objects: ~8KK (approximately 8 million)
· Graphs: ~213K (approximately 213 thousand)
· Predicates: 153
The files corresponding to this dataset alone on disk sum up to approximately 671GB (measured with du -h). From these, the largest files are:
Main Questions
Appendix
Assembler configuration for my dataset:
My Dataset Compression Experiment
After getting some feedback from the Jena mailing list, I tried running two compression strategies on this dataset to see which one would work best. The one I'm referring to as "official" uses the "/$/compact" endpoint; the one I'm referring to as "unofficial" creates an NQuads backup and loads it into a new dataset using the TDBLoader. I attempted this second strategy because a StackOverflow post suggested that it could be significantly more efficient than the "official" strategy (https://stackoverflow.com/questions/60501386/compacting-a-dataset-in-apache-jena-fuseki/60631699#60631699).
Here is a summary of the results I've obtained with both compression strategies (in markdown notation):
Original Dataset
RDF Stats:
Disk Stats:
Dataset Replica ("unofficial" compression strategy)
Description: Backed up the dataset as NQuads and restored it as a new dataset with TDBLoader.
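A rough sketch of this route, assuming the Fuseki backup endpoint and the TDB2 bulk loader; dataset names, file names and paths are placeholders:

```bash
# 1. Ask Fuseki to back up the dataset; this writes a gzipped N-Quads file
#    into the server's backups directory ("ds" is a placeholder name).
curl -X POST 'http://localhost:3030/$/backup/ds'

# 2. Load the backup into a fresh TDB2 database with the bulk loader
#    (backup file name and target directory are placeholders).
tdb2.tdbloader --loc=/fuseki/databases/ds-new /fuseki/backups/ds_backup.nq.gz
```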
References:
RDF Stats:
Disk Stats:
Compressed Dataset ("official" compression strategy)
Description: Compressed using the /$/compact/ endpoint, generating a new Data-NNNN folder within the same dataset.
References:
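One point worth noting with this approach: since the compacted copy is written alongside the old one, disk space is only actually reclaimed once the previous Data-NNNN directory is removed. A sketch, with placeholder paths and illustrative directory names:

```bash
# After compaction finishes, the database directory contains the old and new
# generations side by side (directory names are illustrative).
ls /fuseki/databases/ds
# Data-0001  Data-0002

# Once the server is using Data-0002, the old generation can be deleted
# to free the disk space.
rm -r /fuseki/databases/ds/Data-0001
```

Recent Fuseki versions also accept a deleteOld=true query parameter on the compact endpoint, which removes the old generation automatically once compaction completes.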
RDF Stats:
Disk Stats:
Comparison
RDF Stats:
Disk Stats:
Queries used to obtain the RDF Stats
Triples
Graphs
Subjects
Predicates
Objects
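The original queries are not reproduced above. Equivalent counts can be obtained with standard SPARQL aggregate queries against the dataset's query endpoint; a sketch, assuming the dataset is served at /ds/sparql (placeholder):

```bash
ENDPOINT='http://localhost:3030/ds/sparql'   # placeholder endpoint

# Triples across named graphs
curl -s "$ENDPOINT" --data-urlencode \
  'query=SELECT (COUNT(*) AS ?triples) WHERE { GRAPH ?g { ?s ?p ?o } }'

# Distinct named graphs
curl -s "$ENDPOINT" --data-urlencode \
  'query=SELECT (COUNT(DISTINCT ?g) AS ?graphs) WHERE { GRAPH ?g { ?s ?p ?o } }'

# Distinct subjects, predicates and objects follow the same pattern,
# e.g. COUNT(DISTINCT ?s), COUNT(DISTINCT ?p), COUNT(DISTINCT ?o).
```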
Commands used to measure the Disk Stats
File Sizes
Directory Sizes
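The exact commands are not included above; measurements of this kind (per-file sizes of the TDB2 indexes and the total for the dataset directory) are typically taken along these lines, with placeholder paths:

```bash
# Per-file sizes of the TDB2 index files (e.g. OSPG.dat, SPOG.dat, ...)
du -h /fuseki/databases/ds/Data-0001/*

# Total size of the dataset directory
du -sh /fuseki/databases/ds
```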