JanusGraph / janusgraph

JanusGraph: an open-source, distributed graph database
https://janusgraph.org

Traversal bindings of dynamically created graphs are not propagated in multi-node cluster #2558

Open vtslab opened 3 years ago

vtslab commented 3 years ago

Following the discussion on the user list, we report two probably related issues:

  1. Using a single Gremlin Server, for a new dynamic graph "graph1" created from a Gremlin Console with ConfiguredGraphFactory.createConfiguration(map), the bindings graph1 and graph1_traversal do not appear on a new connection, even though the new graph is visible from ConfiguredGraphFactory.getGraphNames() on that connection. The expected behaviour only occurs if the new dynamic graph is created with ConfiguredGraphFactory.createTemplateConfiguration(map), followed by ConfiguredGraphFactory.create("graph1").

  2. If two Gremlin Server instances share the same ConfigurationManagementGraph on a distributed storage backend, and a new dynamic graph is created from a Gremlin Console connected to the first server using the template configuration, bindings for this new graph do not appear on a new connection (even after 20 seconds) to the second Gremlin Server, even though the new graph is visible from ConfiguredGraphFactory.getGraphNames() in both Gremlin Consoles. This second issue has no easy workaround if the Gremlin Servers are accessed via a load balancer.
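For reference, the two creation paths from the Gremlin Console look roughly like this (a sketch; the graph name "graph1" and property values are illustrative, and it assumes commons-configuration's MapConfiguration is on the console classpath):

```groovy
// Path 1: per-graph configuration -- triggers issue 1, bindings do NOT appear
map = ['storage.backend': 'cql', 'storage.hostname': '127.0.0.1', 'graph.graphname': 'graph1']
ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map))

// Path 2: template configuration -- bindings appear on the same server (issue 2 remains across servers)
map = ['storage.backend': 'cql', 'storage.hostname': '127.0.0.1']
ConfiguredGraphFactory.createTemplateConfiguration(new MapConfiguration(map))
ConfiguredGraphFactory.create('graph1')
```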

This can be reproduced as follows:

The first issue can be reproduced with a single Gremlin Server and a single Gremlin Console by running bindingShouldExistAfterGraphIsCreated(). This test fails if the new configuration is created with ConfiguredGraphFactory.createConfiguration(map). It can also be reproduced manually using the config files gremlin-server-configuration-inmemory.yaml.txt and janusgraph-inmemory-configurationgraph.properties.txt.

The second issue can be reproduced with two Gremlin Servers and two Gremlin Consoles, using the following config files: gremlin-server-configuration8185.yaml.txt, gremlin-server-configuration8186.yaml.txt, janusgraph-cql-configurationgraph.properties.txt, remote8185.yaml.txt and remote8186.yaml.txt. The CQL backend was started using the bin/janusgraph.sh script (ignoring its Gremlin Server on port 8182 and the ES service). The template configuration used was:

map = new HashMap<String, Object>();
map.put("storage.backend", "cql");
map.put("storage.hostname", "127.0.0.1");
ConfiguredGraphFactory.createTemplateConfiguration(new MapConfiguration(map));
suesunss commented 3 years ago

I am new to JanusGraph, and I have failed to figure out the principles or find a usable tutorial for deploying a multi-node JanusGraph cluster. As far as I can tell, there are two discussions about this scenario.

Neither of them is clear enough for production use. Could anyone point out the principles and, ideally, a tutorial for creating a multi-node JanusGraph cluster?

li-boxuan commented 3 years ago

@suesunss We have been running Titan/JanusGraph in a multi-node cluster for several years. From a user standpoint, we don't see anything special to consider except this one: https://docs.janusgraph.org/advanced-topics/recovery/#janusgraph-instance-failure

That being said, we only have a single gigantic graph, which makes our life easier. What @vtslab reported in this issue applies to multiple graphs in a multi-node cluster, so if you only need one graph, you don't need to worry about it.

suesunss commented 3 years ago

@li-boxuan Thanks for the reply! I have come up with two scenarios based on your hints.

li-boxuan commented 3 years ago

each node in the cluster is a standalone server that does not communicate with the others. Is that correct?

No. Having a gigantic graph means you don't need to worry about dynamically created graphs and the propagation issue, but it doesn't mean JanusGraph instances don't communicate at all. They do "communicate" with each other via the underlying storage backend. For example, the "systemlog" table is used to synchronize schema updates among JanusGraph instances.

if we insert/delete/update a vertex from one node, can other nodes in the cluster be aware of the latest change, given that each node has a cache layer?

You could disable DB cache (which is a shared cache used by all transactions per JVM). See https://docs.janusgraph.org/basics/cache/#database-level-caching
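For example, disabling the database-level cache is a one-line change in each instance's graph properties file (a sketch; the option name is taken from the linked docs page):

```properties
# Disable the shared database-level cache so reads always hit the storage
# backend and see the latest writes from other JanusGraph instances.
cache.db-cache = false
```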

We are off-topic here, so if you have further questions, feel free to ask in the user mailing list, or GitHub Discussions.

luxianlong commented 1 year ago

Hi @li-boxuan, I have also met this problem in our scenario. I think 20 seconds is too long for our customers. Can we add some logic like "try to bind at runtime"? I've implemented it as below and it works. Are there any concerns with this solution?

In "org.janusgraph.graphdb.management.JanusGraphManager", override the beforeQueryStart method:

@Override
public void beforeQueryStart(RequestMessage msg) {
    // If the incoming request carries an alias for a graph that this
    // server has not bound yet, open it on the fly so the binding exists
    // before the query runs.
    if (msg.getArgs().containsKey("aliases")) {
        @SuppressWarnings("unchecked")
        Map<String, Object> aliases = (Map<String, Object>) msg.getArgs().get("aliases");
        String graphName = (String) aliases.get("graph");
        if (!StringUtils.isEmpty(graphName) && !graphs.containsKey(graphName)) {
            ConfiguredGraphFactory.open(graphName);
        }
    }
}

Another option is to change "org.janusgraph.graphdb.management.JanusGraphManager#getGraph" with similar logic as below, and that also works.

(screenshot of the modified getGraph method not shown)
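A minimal, self-contained sketch of the logic the second approach appears to use, based on the discussion below (the class name LazyGraphRegistry and the injected opener are hypothetical stand-ins for JanusGraphManager#getGraph and ConfiguredGraphFactory.open):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical stand-in for JanusGraphManager: on a cache miss, lazily
// "open" the graph unless the requested name looks like a traversal binding.
public class LazyGraphRegistry {
    private final Map<String, Object> graphs = new ConcurrentHashMap<>();
    private final Function<String, Object> opener; // stands in for ConfiguredGraphFactory.open

    public LazyGraphRegistry(Function<String, Object> opener) {
        this.opener = opener;
    }

    public Object getGraph(String gName) {
        Object graph = graphs.get(gName);
        // Caveat raised in the discussion: this suffix check misfires if a
        // real graph name happens to end with "_traversal".
        if (graph == null && !gName.endsWith("_traversal")) {
            graph = opener.apply(gName);
            graphs.put(gName, graph);
        }
        return graph;
    }
}
```

With a fake opener, a lookup for an unknown graph name opens it on first access, while a "_traversal" name is left unresolved.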

Thanks in advance!

li-boxuan commented 1 year ago

@luxianlong First off, I agree with your statement that "20 seconds is too long for customer." - at least for some customers.

Regarding your two approaches, I am really not sure, as I am not very familiar with the JanusGraph/Gremlin Server model. My doubt about both approaches is that you seem to rely on some undocumented ways of getting the graph name.

In your first approach you check "graph" from "aliases". I am not sure where the "graph" literal comes from. Are you sure it's not "g"? Another nitpick: please use Tokens.ARGS_ALIASES instead of the string literal "aliases".

In your second approach you check whether the graph name does not end with "_traversal". I think you are doing so because of this, specifically:

// first check if the alias refers to a Graph instance
final Graph graph = context.getGraphManager().getGraph(aliasKv.getValue());

This approach seems less favorable: what if a graph name actually ends with the string "_traversal"? Also, JanusGraphManager#getGraph can be called by the ConfiguredGraphFactory#drop method. If drop is called twice, then instead of being a no-op on the second call, your approach would re-open the graph and close it again.

All in all, I prefer your first approach, even though I don't fully understand it. I think you could proceed with creating a PR and we can have more people review it. It would be nice to have a new integration test, though that wouldn't be easy to write.

luxianlong commented 1 year ago

@li-boxuan Thanks for the response. I will use Tokens.ARGS_ALIASES instead of the string literal "aliases". I think I forgot to mention an assumption: in my scenario, we put a "graph" field into the alias map on the client side, and the value of this field is the graph name. The caller knows the exact graph name, while the server side can hardly infer it from the alias values (e.g. when a graph name actually ends with the string "_traversal").

This assumption is not general and places a special requirement on the client side. It's more of an ad-hoc solution.

li-boxuan commented 1 year ago

@luxianlong I see, that makes sense! I was having trouble understanding where these magic string literals came from. If this ad-hoc solution works for you, that's great!

I am not sure if there is a general solution. Perhaps we could make the 20 seconds configurable, so that, say, your use case could set the interval to 1 second.

luxianlong commented 1 year ago

@li-boxuan Yes, this solution works. As for the 20-second period, I suppose that's because the binder enumerates all tables one by one, which takes some time. In my scenario, the latency may be even longer.