Open xwkuang5 opened 4 years ago
If I understand short_read_dissipation correctly, it is the delta in the random walk model: a larger short_read_dissipation means a shorter walk, e.g., in the extreme case where short_read_dissipation=1, there should be no short reads after the complex read. Is this why the final number of operations can differ across runs?
If the above is true, is there a way to set the random seed in the test driver to make sure that the workload of a particular benchmark can be replayed?
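For intuition, here is a toy sketch of how I am picturing it (a geometric-decay model I made up for illustration, not the driver's actual algorithm; the class and method names are hypothetical): if the RNG is not seeded, the number of short reads scheduled after each complex read varies between runs, which would explain the different final operation counts, whereas a fixed seed would make the totals repeatable.

```java
import java.util.Random;

public class ShortReadWalkSketch {

    // Number of short reads issued after one complex read in this toy model:
    // each step "survives" with probability (1 - dissipation), so a larger
    // dissipation means a shorter walk, and dissipation = 1.0 ends the walk
    // immediately (no short reads). Requires dissipation > 0.
    static int walkLength(double dissipation, Random rng) {
        int steps = 0;
        while (rng.nextDouble() >= dissipation) {
            steps++;
        }
        return steps;
    }

    // Total short reads scheduled after a fixed number of complex reads.
    static int totalShortReads(double dissipation, int complexReads, Random rng) {
        int total = 0;
        for (int i = 0; i < complexReads; i++) {
            total += walkLength(dissipation, rng);
        }
        return total;
    }

    public static void main(String[] args) {
        double dissipation = 0.2; // hypothetical value, chosen so short reads do occur

        // Unseeded RNGs: the two totals generally differ, analogous to the
        // varying final operation counts across benchmark runs.
        System.out.println("unseeded run 1: " + totalShortReads(dissipation, 100, new Random()));
        System.out.println("unseeded run 2: " + totalShortReads(dissipation, 100, new Random()));

        // Fixed seed: the two totals are identical, which is the behaviour a
        // driver-level seed would give if one is exposed.
        System.out.println("seeded run 1:   " + totalShortReads(dissipation, 100, new Random(42L)));
        System.out.println("seeded run 2:   " + totalShortReads(dissipation, 100, new Random(42L)));
    }
}
```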
Hi @xwkuang5
Sorry for the delay in replying.
> I was getting three different final operation counts (2473, 2532, 2584) across 3 different runs. Is this the expected result?

I will discuss this with the task force when we talk next.
I've just run the Cypher implementation a few times with your configuration and can reproduce the issue. Which scale factor are you using to generate the data?
Best,
Jack
Hi Jack, thanks for your reply.
I believe it's SF1 (or SF3).
Hi,
I am reposting an open issue from the ldbc_snb_implementations repo here.
I am trying to use the Cypher benchmark to evaluate the performance of Neo4j under different configurations. I set operation_count=2500 and ran the interactive-benchmark.sh script multiple times. However, I got three different final operation counts (2473, 2532, 2584) across 3 different runs. Is this the expected result?
Thanks in advance for any help!
Here is my configuration: