Open kuberxy opened 2 weeks ago
Hi @kuberxy,
It seems we've exhausted the maximum allowed value for the `uint64` type in Go, which is 18,446,744,073,709,551,615 in decimal (0xFFFFFFFFFFFFFFFF in hex). That is an enormous value: it would take roughly 18.5 quintillion nodes on a backend to breach that threshold and reproduce this error. So unless that is indeed the case (your RDF really does contain triples for ~18.5 quintillion nodes), the more likely explanation is that the starting UID for nodes in your RDF is already a very large value, which leaves little room for further incremental UID assignment and leads to the error you're seeing. You can inspect the current UID lease on Zero with:
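For reference, the limit quoted above is simply the largest value a 64-bit unsigned integer can hold; a quick sanity check:

```python
# Maximum value representable by an unsigned 64-bit integer (Go's uint64).
MAX_UINT64 = (1 << 64) - 1

print(MAX_UINT64)       # 18446744073709551615
print(hex(MAX_UINT64))  # 0xffffffffffffffff

# Roughly 18.4 quintillion (1 quintillion = 10**18).
print(MAX_UINT64 / 10**18)
```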
```sh
curl -s localhost:6080/state | jq | grep '"max'
```
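The exact field names in the `/state` payload can vary by version (hence the grep for `"max`). As a sketch, assuming a `maxLeaseId`-style field holds the highest leased UID, you could check the remaining UID headroom like this (the sample payload below is made up for illustration):

```python
import json

MAX_UINT64 = (1 << 64) - 1

# Hypothetical /state payload; in practice pipe in: curl -s localhost:6080/state
sample_state = '{"maxLeaseId": "18446744073709551615", "maxTxnTs": "30000"}'

state = json.loads(sample_state)
# Field name is an assumption -- grep your own /state output for '"max'.
leased = int(state["maxLeaseId"])
headroom = MAX_UINT64 - leased
print(f"leased up to UID {leased}; headroom for {headroom} more UIDs")
```

If the headroom is zero (as in this fabricated payload), any further UID assignment will fail.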
If the start UID in your RDF is indeed very large, you may want to retry the bulk load with the `--new_uids` flag, which ignores the UIDs in the RDF and freshly assigns new UIDs to all nodes. Alternatively, you could replace the hardcoded UIDs with blank-node identifiers (check the docs). Either way, start a new Zero and re-run the bulk loader.
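As a sketch of the blank-node alternative, the N-Quads would use `_:`-prefixed identifiers in place of hardcoded UIDs (predicate and values here are illustrative):

```rdf
# Hardcoded UID -- consumes a specific point in the UID lease space:
<0xfffffffffffffff0> <name> "Alice" .

# Blank-node identifier -- the bulk loader assigns a fresh UID for it:
_:alice <name> "Alice" .
```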
Thanks!
What version of Dgraph are you using?
Tell us a little more about your go-environment?
No response
Have you tried reproducing the issue with the latest release?
None
What is the hardware spec (RAM, CPU, OS)?
N/A
What steps will reproduce the bug?
`dgraph bulk`
Expected behavior and actual result.
No response
Additional information
Every week I use the command below to export the data from the production environment, then use `dgraph bulk` to import it into the test environment.
Previously, everything worked normally. But this week the `dgraph bulk` run failed, and the error log kept outputting the following:
Zero's limit options are the default values, i.e. "uid-lease=0; refill-interval=30s; disable-admin-http=false; "