JanusGraph / janusgraph

JanusGraph: an open-source, distributed graph database
https://janusgraph.org

Error running JanusGraph 0.3.0 OLAP Traversal (Spark 2.2 and Hadoop 2.7.5) on Scylla #1228

Open · wuesty opened this issue 6 years ago

wuesty commented 6 years ago

I have followed the directions in https://docs.janusgraph.org/latest/hadoop-tp3.html.

Verified, using HDFS commands in the Gremlin Console, that Hadoop is connecting fine.
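
To be concrete, the check was something like the following from the Gremlin Console with the Hadoop plugin active (the path is just an example):

gremlin> hdfs.ls()
gremlin> hdfs.ls('output')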

Getting this error (wondering if anybody can comment):

[Stage 0:>
16:41:50 WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, 172.20.0.8, executor 0): java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.hadoop.formats.util.input.current.JanusGraphHadoopSetupImpl
    at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:64)
    at org.janusgraph.hadoop.formats.util.GiraphInputFormat.lambda$static$0(GiraphInputFormat.java:48)
    at org.janusgraph.hadoop.formats.util.GiraphInputFormat$RefCountedCloseable.acquire(GiraphInputFormat.java:107)
    at org.janusgraph.hadoop.formats.util.GiraphRecordReader.<init>(GiraphRecordReader.java:55)
    at org.janusgraph.hadoop.formats.util.GiraphInputFormat.createRecordReader(GiraphInputFormat.java:69)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.liftedTree1$1(NewHadoopRDD.scala:180)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:179)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:134)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
    ... 32 more
Caused by: org.janusgraph.core.JanusGraphException: Could not open global configuration
    at org.janusgraph.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:454)
    at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1257)
    at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
    at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
    at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:121)
    at org.janusgraph.hadoop.formats.util.input.current.JanusGraphHadoopSetupImpl.<init>(JanusGraphHadoopSetupImpl.java:52)
    ... 37 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
    at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureColumnFamilyExists(AstyanaxStoreManager.java:490)
    at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxStoreManager.openDatabase(AstyanaxStoreManager.java:349)
    at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxStoreManager.openDatabase(AstyanaxStoreManager.java:71)
    at org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStoreManager.openDatabase(KeyColumnValueStoreManager.java:43)
    at org.janusgraph.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:452)
    ... 42 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=localdatastore_db(172.20.0.3):9160, latency=26(26), attempts=1]InvalidRequestException(why:Column family system_properties already exists)
    at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
    at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
    at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
    at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
    at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
    at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
    at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:146)
    at com.netflix.astyanax.thrift.ThriftClusterImpl.internalCreateColumnFamily(ThriftClusterImpl.java:240)
    at com.netflix.astyanax.thrift.ThriftClusterImpl.addColumnFamily(ThriftClusterImpl.java:215)
    at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureColumnFamilyExists(AstyanaxStoreManager.java:487)
    ... 46 more
Caused by: InvalidRequestException(why:Column family system_properties already exists)
    at org.apache.cassandra.thrift.Cassandra$system_add_column_family_result$system_add_column_family_resultStandardScheme.read(Cassandra.java:43019)
    at org.apache.cassandra.thrift.Cassandra$system_add_column_family_result$system_add_column_family_resultStandardScheme.read(Cassandra.java:42997)
    at org.apache.cassandra.thrift.Cassandra$system_add_column_family_result.read(Cassandra.java:42931)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_system_add_column_family(Cassandra.java:1522)
    at org.apache.cassandra.thrift.Cassandra$Client.system_add_column_family(Cassandra.java:1509)
    at com.netflix.astyanax.thrift.ThriftClusterImpl$7.internalExecute(ThriftClusterImpl.java:245)
    at com.netflix.astyanax.thrift.ThriftClusterImpl$7.internalExecute(ThriftClusterImpl.java:241)
    at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
    ... 54 more

16:41:50 ERROR org.apache.spark.scheduler.TaskSetManager - Task 1 in stage 0.0 failed 4 times; aborting job
16:41:51 WARN org.apache.spark.scheduler.TaskSetManager - Lost task 3.3 in stage 0.0 (TID 11, 172.20.0.8, executor 0): TaskKilled (stage cancelled)
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 10, 172.20.0.8, executor 0): java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.hadoop.formats.util.input.current.JanusGraphHadoopSetupImpl
    [... identical stack trace and "Caused by" chain as above, ending in InvalidRequestException(why:Column family system_properties already exists) ...]

farodin91 commented 6 years ago

I'm running into the same problem. It seems the Thrift driver tries to create a new system_properties column family.

In our setup, the default JanusGraph instance uses the cql backend.

wuesty commented 5 years ago

We did get to the bottom of this and have a workaround, although we haven't traced the internal bits of what is happening inside the Janus / TinkerPop engine. Our requirement is to run both OLTP and OLAP on JanusGraph. I will also preface everything by saying we are running JanusGraph 0.3.0 with a Scylla backend.

We initially started with OLTP using the ConfiguredGraphFactory. The configuration for one of these graphs (viewed from a Gremlin prompt) looks something like this:

gremlin> ConfiguredGraphFactory.configurations[0].sort()
==>Template_Configuration=false
==>cache.db-cache=true
==>cache.db-cache-size=0.55
==>cache.db-cache-time=301000
==>cache.tx-cache-size=250002
==>graph.graphname=
==>gremlin.graph=org.janusgraph.core.JanusGraphFactory
==>ids.block-size=500000
==>query.batch=true
==>query.smart-limit=false
==>storage.backend=cql
==>storage.batch-loading=true
==>storage.buffer-size=10240
==>storage.cql.batch-statement-size=30
==>storage.cql.keyspace=mvwxpvuptrtxupusssxp
==>storage.cql.local-max-requests-per-connection=32767
==>storage.cql.remote-max-requests-per-connection=256
==>storage.cql.replication-factor=3
==>storage.hostname=localdatastore_db
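
For reference, we create those configurations along these lines (a sketch; the graph name, keyspace, and hostname here are placeholders rather than our real values):

gremlin> map = new HashMap<String, Object>()
gremlin> map.put('graph.graphname', 'examplegraph')
gremlin> map.put('storage.backend', 'cql')
gremlin> map.put('storage.hostname', 'localdatastore_db')
gremlin> map.put('storage.cql.keyspace', 'examplekeyspace')
gremlin> ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map))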

When we ventured into the OLAP world, we hit the error above when running simple OLAP traversals with Spark, either locally or on a remote cluster. A configuration for opening one of these graphs as a HadoopGraph looks something like this:

gremlin> hgraph.configuration().sort()
==>[Template_Configuration,false]
==>[cache.db-cache,true]
==>[cache.db-cache-size,0.55]
==>[cache.db-cache-time,301000]
==>[cache.tx-cache-size,250002]
==>[cassandra.input.partitioner.class,org.apache.cassandra.dht.Murmur3Partitioner]
==>[gremlin.graph,org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph]
==>[gremlin.hadoop.graphReader,org.janusgraph.hadoop.formats.cassandra.Cassandra3InputFormat]
==>[gremlin.hadoop.graphWriter,org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat]
==>[gremlin.hadoop.inputLocation,none]
==>[gremlin.hadoop.jarsInDistributedCache,true]
==>[gremlin.hadoop.outputLocation,output]
==>[gremlin.spark.persistContext,true]
==>[ids.block-size,500000]
==>[janusgraphmr.ioformat.conf.storage.backend,cassandra]
==>[janusgraphmr.ioformat.conf.storage.cassandra.keyspace,mvwxpvuptrtxupusssxp]
==>[janusgraphmr.ioformat.conf.storage.cassandra.replication-factor,3]
==>[janusgraphmr.ioformat.conf.storage.hostname,localdatastore_db]
==>[janusgraphmr.ioformat.conf.storage.port,9160]
==>[query.batch,true]
==>[query.smart-limit,false]
==>[spark.driver.memory,1g]
==>[spark.executor.extraClassPath,/opt/lib/janusgraph/*]
==>[spark.executor.memory,1g]
==>[spark.master,spark://sonrai-spark-master:7077]
==>[spark.serializer,org.apache.spark.serializer.KryoSerializer]
==>[storage.backend,cassandra]
==>[storage.batch-loading,true]
==>[storage.buffer-size,10240]
==>[storage.hostname,localdatastore_db]
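
For completeness, opening that HadoopGraph and running an OLAP traversal looks roughly like this (the properties file path is specific to our setup):

gremlin> hgraph = GraphFactory.open('conf/hadoop-graph/read-cassandra.properties')
gremlin> g = hgraph.traversal().withComputer(SparkGraphComputer)
gremlin> g.V().count()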

What we discovered is that the problem is in the initialization. We switched the initialization of our graphs to the following:

Step 1:

Use the ConfiguredGraphFactory to create a configuration using all the cassandra settings. That configuration looks similar to the one above, except that all the cassandra settings are set in addition to the cql ones; the key difference is storage.backend=cassandra:

gremlin> ConfiguredGraphFactory.configurations[0].sort()
==>Template_Configuration=false
==>cache.db-cache=true
==>cache.db-cache-size=0.55
==>cache.db-cache-time=301000
==>cache.tx-cache-size=250002
==>graph.graphname=
==>gremlin.graph=org.janusgraph.core.JanusGraphFactory
==>ids.block-size=500000
==>query.batch=true
==>query.smart-limit=false
==>storage.backend=cassandra
==>storage.batch-loading=true
==>storage.buffer-size=10240
==>storage.cassandra.batch-statement-size=30
==>storage.cassandra.keyspace=mvwxpvuptrtxupusssxp
==>storage.cassandra.local-max-requests-per-connection=32767
==>storage.cassandra.remote-max-requests-per-connection=256
==>storage.cassandra.replication-factor=3
==>storage.cql.batch-statement-size=30
==>storage.cql.keyspace=mvwxpvuptrtxupusssxp
==>storage.cql.local-max-requests-per-connection=32767
==>storage.cql.remote-max-requests-per-connection=256
==>storage.cql.replication-factor=3
==>storage.hostname=localdatastore_db
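
A sketch of how we do Step 1 from the console (names again are placeholders; note the storage.backend value):

gremlin> map = new HashMap<String, Object>()
gremlin> map.put('graph.graphname', 'examplegraph')
gremlin> map.put('storage.backend', 'cassandra')  // the key difference
gremlin> map.put('storage.hostname', 'localdatastore_db')
gremlin> map.put('storage.cassandra.keyspace', 'examplekeyspace')
gremlin> map.put('storage.cql.keyspace', 'examplekeyspace')
gremlin> ConfiguredGraphFactory.createConfiguration(new MapConfiguration(map))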

Step 2:

Open and close the graph (using the ConfigurationManagementGraph).
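
Concretely, that is just (graph name is a placeholder):

gremlin> graph = ConfiguredGraphFactory.open('examplegraph')
gremlin> graph.close()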

Step 3:

Get the configuration, flip storage.backend back to cql, and update the configuration. We do this because in most situations we want to be using OLTP with the recommended CQL driver.
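
Something like the following, using ConfiguredGraphFactory.updateConfiguration with only the property being changed (graph name is a placeholder):

gremlin> map = new HashMap<String, Object>()
gremlin> map.put('storage.backend', 'cql')
gremlin> ConfiguredGraphFactory.updateConfiguration('examplegraph', new MapConfiguration(map))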

Outcome:

For whatever reason, when the graph is initially opened by the internal Janus / TinkerPop engine using the cassandra settings, it puts enough information into the keyspace that the SparkGraphComputer / OLAP traversals do not run into the error above. The engine "seems" to be OK with an initial open against a "cassandra" backend to support both cassandra and cql access, but not OK with the reverse.

This was our experience - maybe we missed something along the way, but we figured we would share.

ghost commented 5 years ago

I hit the same issue. I wonder whether it has been resolved?

riyueshiwang commented 5 years ago

I hit the same issue too.

FlorianHockmann commented 5 years ago

Per my understanding, this seems to be caused by a bug in the Thrift implementation of Scylla. Since Thrift is already deprecated in Apache Cassandra and will be removed completely in Cassandra 4.0, and Scylla plans to do the same in future versions (scylladb/scylla#3811), it's not likely that this will be fixed in Scylla. So the better way for us is probably to just support CQL for OLAP (#985) and then switch from Thrift to CQL completely.

wuesty commented 5 years ago

Thanks Florian. Even though there isn't a smooth solution here, I can see how this will potentially work itself out in future releases. We have had to do a number of workarounds to balance between connecting with CQL for OLTP and Cassandra for OLAP.

fengfst commented 5 years ago

> Per my understanding, this seems to be caused by a bug in the Thrift implementation of Scylla. Since Thrift is already deprecated in Apache Cassandra and will be removed completely in Cassandra 4.0, and Scylla plans to do the same in future versions (scylladb/scylla#3811), it's not likely that this will be fixed in Scylla. So the better way for us is probably to just support CQL for OLAP (#985) and then switch from Thrift to CQL completely.

JanusGraph currently supports the following graphReader classes:

Cassandra3InputFormat for use with Cassandra 3
CassandraInputFormat for use with Cassandra 2
HBaseInputFormat and HBaseSnapshotInputFormat for use with HBase

How can I switch to CQL? Can you give ideas or examples?

FlorianHockmann commented 5 years ago

> How can I switch to CQL? Can you give ideas or examples?

You can't right now. The open issue I pointed to above is about adding a new CQL input format to JanusGraph, and we now have an open pull request for that which should be included in version 0.4.0: #1436
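
Once that's merged, the OLAP properties should be able to point at the CQL input format instead of the Thrift-based one, roughly like this (a sketch based on the PR, so treat class names and keys as tentative until 0.4.0 is actually released; the hostname and keyspace are placeholders):

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cql.CqlInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
janusgraphmr.ioformat.conf.storage.backend=cql
janusgraphmr.ioformat.conf.storage.hostname=localdatastore_db
janusgraphmr.ioformat.conf.storage.cql.keyspace=examplekeyspace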

porunov commented 4 years ago

CQL support was added in JanusGraph version 0.4.x. Is this issue still relevant? After migrating from Thrift to CQL, are you able to reproduce the issue?

Thrift support was deprecated in JanusGraph version 0.4.1 and, due to that deprecation, is going to be removed completely in JanusGraph version 0.6.0. The PR is #1800

wuestinc commented 3 years ago

Agreed - it's probably not relevant anymore now that everything has moved over to CQL. It may still be an issue when using the Thrift setting, but since we have moved over to CQL, I cannot comment on that in the latest versions of Janus.