Closed. DirkJia closed this issue 1 year ago.
Is this a server problem? (It doesn't seem to be a toolchain pic.)
And did you try the latest version (Apache 1.0.0)?
Yes, it happened very suddenly. Can I have a temporary solution instead of upgrading now?
Is it necessary to raise another issue in the incubator-hugegraph project?
Do I need to delete the data in the pk table? Also, in what scenarios would this situation typically occur?
Nope, the issue can be transferred directly (but we need more time for the outdated version).
Also, you could add some backend info (edit the top comment).
Done. And I am deleting the data by tracing the code. If it doesn't work, I shall switch to another Cassandra database to try it out.
I deleted some duplicate data from tables like vl, il, el, pk, c, etc., and now it's working fine.
I suspect this is what happened: Cassandra has three nodes in total, and two of them were restarted, which caused HugeGraph to fail to read the metadata, so the metadata was written again. When Cassandra returned to normal, the data was duplicated, causing HugeGraph's verification to fail. I am not sure if the new version has fixed this issue?
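The recovery described above amounts to keeping exactly one row per primary key in each affected metadata table. Below is a minimal, hypothetical sketch of that dedup logic in plain Python; the function name, the `id` key, and the sample rows are illustrative assumptions, not HugeGraph's actual schema or API:

```python
# Hypothetical sketch: keep only the first row seen for each primary key.
# The key name "id" and the sample rows are illustrative, not HugeGraph's
# real table layout; in practice the duplicates lived in Cassandra tables
# such as vl, il, el, pk, and c.
def dedup_rows(rows, key="id"):
    """Return rows with only the first occurrence of each primary key kept."""
    seen = set()
    kept = []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            kept.append(row)
    return kept

# Example: schema rows duplicated after a partial Cassandra outage.
rows = [
    {"id": 1, "name": "person"},
    {"id": 1, "name": "person"},    # duplicate written after the failed read
    {"id": 2, "name": "software"},
]
print(dedup_rows(rows))
```

This mirrors the manual cleanup: for each metadata table, identify rows sharing a primary key and delete all but one copy before restarting HugeGraph.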
Bug Type (问题类型)
exception / error (异常报错)
Before submit
Environment (环境信息)
Expected & Actual behavior (期望与实际表现)
After running for a long time, an error occurred, reporting two backend info entries.
Vertex/Edge example (问题点 / 边数据举例)
No response
Schema [VertexLabel, EdgeLabel, IndexLabel] (元数据结构)
No response