Open vijaydodla opened 8 years ago
Sorry for the confusion about the issue status; this is still an open issue. I'm trying to find a solution, and any help is greatly appreciated.
Please look at
https://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/browse/pom.xml
You also need the dependency JARs of
<dependency>
  <groupId>org.apache.cassandra</groupId>
  <artifactId>cassandra-clientutil</artifactId>
  <version>1.2.5</version>
</dependency>
<dependency>
  <groupId>org.apache.cassandra</groupId>
  <artifactId>cassandra-thrift</artifactId>
  <version>1.2.5</version>
</dependency>
on the classpath.
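(A minimal sketch of pulling those two JARs into the importer's lib directory, assuming the artifacts are still published on Maven Central under the usual coordinates and that lib/ is the importer's lib folder:)
# Fetch the Cassandra 1.2.5 client JARs from Maven Central into the importer's lib/
# (coordinates assumed; adjust the lib path to your installation)
cd lib
curl -fLO https://repo1.maven.org/maven2/org/apache/cassandra/cassandra-clientutil/1.2.5/cassandra-clientutil-1.2.5.jar
curl -fLO https://repo1.maven.org/maven2/org/apache/cassandra/cassandra-thrift/1.2.5/cassandra-thrift-1.2.5.jar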
Hi Jörg, thanks for the response. I made sure I included all the dependency JARs. The issue I'm facing is a different one; please check the attached log. Below is the list of JARs in my lib folder. jdbc.log.zip
Hi Jörg, can you please comment on the issues I'm facing? I'm not sure if anyone has done Cassandra DB integration with ES before.
cassandra-clientutil 1.2.5 uses Guava 13. This conflicts with Elasticsearch.
If you agree, I can try to build an up-to-date Cassandra JDBC driver, maybe with shaded dependencies.
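(To see which Guava version actually ends up on the importer's classpath, a quick check along these lines can help; the lib/ path and JAR names are assumed.)
# List any Guava JARs sitting in the importer's lib/ and print their manifest versions.
# Two different Guava versions here usually mean classpath conflicts at runtime.
ls lib | grep -i guava
for j in lib/guava-*.jar; do
  echo "== $j"
  unzip -p "$j" META-INF/MANIFEST.MF | grep -i version
done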
Great, that would be awesome. Thanks so much, Jörg! Please send the up-to-date Cassandra JDBC driver once you build it.
Thanks, Vijay Dodla.
What Cassandra version do you use?
Cassandra 2.2.4.
Thanks, Vijay Dodla.
I'm not sure if it works, but I pushed an updated driver version to https://github.com/jprante/cassandra-jdbc
It can connect to a Thrift-enabled Cassandra 3.1.1, though some tests fail. It may also work with Cassandra 2.2.4.
JARs can be found at http://xbib.org/repository/org/xbib/cassandra-jdbc/3.1.1/
The uberjar contains Guava 18, which is also what the JDBC importer and Elasticsearch 2.x use.
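(For reference, the uberjar could be dropped into the importer's lib folder with something like the following; the exact filename is assumed from the repository listing and the JAR names that appear in later log lines, so double-check it against the directory index.)
# Download the rebuilt driver uberjar into the importer's lib/
cd lib
curl -fLO http://xbib.org/repository/org/xbib/cassandra-jdbc/3.1.1/cassandra-jdbc-3.1.1-uberjar.jar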
Thanks so much, Jörg! I'll give it a try and let you know how it goes.
Thanks, Vijay Dodla.
Hi Jörg, I included the updated driver from https://github.com/jprante/cassandra-jdbc in the lib folder of elasticsearch-jdbc and got the following errors; please check the log below.
From the log I see it is not able to find a suitable JDBC driver for Cassandra. I'm not sure if I'm doing something wrong.
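(Side note for anyone hitting the "no suitable JDBC driver" message: a quick way to check that the driver class is actually inside the JAR dropped into lib/, and whether it registers itself via the JDBC service loader, might look like this; the JAR filename is assumed, and the class name is taken from the stack traces later in this thread.)
# Does the uberjar contain the driver class, and does it ship a service file
# (META-INF/services/java.sql.Driver) so DriverManager can pick it up automatically?
unzip -l lib/cassandra-jdbc-*.jar | grep 'org/apache/cassandra/cql/jdbc/CassandraDriver'
unzip -p lib/cassandra-jdbc-*.jar META-INF/services/java.sql.Driver 2>/dev/null || \
  echo "no java.sql.Driver service file; the driver may need to be loaded explicitly"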
I have uploaded a new jar.
cat ./bin/cassandra-simple.sh
#!/bin/sh
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
bin=${DIR}/../bin
lib=${DIR}/../lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:cassandra://localhost:9160/system?version=3.0.0",
        "user" : "",
        "password" : "",
        "sql" : "..."
    }
}
' | java \
-cp "${lib}/*" \
-Dlog4j.configurationFile=${bin}/log4j2.xml \
org.xbib.tools.Runner \
org.xbib.tools.JDBCImporter
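(The URL above uses port 9160, i.e. Cassandra's Thrift RPC port, not the native-protocol port 9042. A quick pre-flight check on the node, assuming default ports and a local install, could be:)
# Verify the Thrift server is running before starting the importer
nodetool statusthrift                           # should print "running"
nc -z localhost 9160 && echo "Thrift port 9160 reachable"   # if nc is available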
It worked, Jörg! Thanks again so much for helping to get it working.
Below is the script I used.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
bin=${DIR}/../bin
lib=${DIR}/../lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:cassandra://localhost:9160/example",
        "user" : "cassandra",
        "locale" : "en_US",
        "sql" : "SELECT * FROM example.news",
        "elasticsearch" : {
            "hosts" : "localhost",
            "port" : 9300
        },
        "index" : "myjdbc",
        "type" : "mytype"
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter
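(To confirm the documents actually landed in Elasticsearch, a quick check against the HTTP API, assuming the default port 9200 and the index name from the script, might look like:)
# Count and peek at documents in the index created by the importer
curl -s 'http://localhost:9200/myjdbc/_count?pretty'
curl -s 'http://localhost:9200/myjdbc/_search?size=1&pretty'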
I'm using Elasticsearch 2.3.5, elasticsearch-jdbc-2.3.4.0 and Cassandra 3.6 (CQL spec 3.4.2). My script:
#!/bin/sh
bin=/opt/elasticsearch-jdbc-2.3.4.0/bin
lib=/opt/elasticsearch-jdbc-2.3.4.0/lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:cassandra://172.16.10.254:9042/hocvalam",
        "user" : "cassandra",
        "password" : "cassandra",
        "sql" : "SELECT * FROM hocvalam.post",
        "elasticsearch" : {
            "cluster" : "production",
            "host" : "localhost",
            "port" : 9300
        },
        "index" : "myjdbc",
        "type" : "mytype"
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter
I've tried both of the cassandra-jdbc drivers mentioned in this thread, but without success so far.
[08:42:39,541][ERROR][importer.jdbc.context.standard][pool-3-thread-1] at fetch: com.datastax.driver.core.exceptions.AuthenticationException: **Authentication error on host /172.16.10.254:9042: Host /172.16.10.254:9042 requires authentication, but no authenticator found in Cluster configuration** java.io.IOException: com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /172.16.10.254:9042: Host /172.16.10.254:9042 requires authentication, but no authenticator found in Cluster configuration at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource.fetch(StandardSource.java:631) ~[elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.fetch(StandardContext.java:191) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:166) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:199) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:185) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:51) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:50) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:16) [elasticsearch-jdbc-2.3.4.0.jar:?] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91] Caused by: com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /172.16.10.254:9042: Host /172.16.10.254:9042 requires authentication, but no authenticator found in Cluster configuration at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:40) ~[cassandra-driver-core-3.0.0.jar:?] at com.datastax.driver.core.Connection$5.apply(Connection.java:250) ~[cassandra-driver-core-3.0.0.jar:?] at com.datastax.driver.core.Connection$5.apply(Connection.java:234) ~[cassandra-driver-core-3.0.0.jar:?] at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:906) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.Futures$1$1.run(Futures.java:635) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.Futures$1.run(Futures.java:632) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-18.0.jar:?] at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-18.0.jar:?] at com.datastax.driver.core.Connection$Future.onSet(Connection.java:1174) ~[cassandra-driver-core-3.0.0.jar:?] at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005) ~[cassandra-driver-core-3.0.0.jar:?] at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928) ~[cassandra-driver-core-3.0.0.jar:?] 
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) ~[netty-handler-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[netty-codec-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.0.33.Final.jar:4.0.33.Final] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263) ~[netty-codec-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) ~[netty-common-4.0.33.Final.jar:4.0.33.Final] ... 
1 more [08:42:39,594][ERROR][importer.jdbc.context.standard][pool-3-thread-1] after fetch: no such index org.elasticsearch.index.IndexNotFoundException: no such index at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:585) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.replication.TransportBroadcastReplicationAction.shards(TransportBroadcastReplicationAction.java:131) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.replication.TransportBroadcastReplicationAction.doExecute(TransportBroadcastReplicationAction.java:78) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.replication.TransportBroadcastReplicationAction.doExecute(TransportBroadcastReplicationAction.java:56) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:61) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:51) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:245) ~[elasticsearch-2.3.4.jar:2.3.4] at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:114) ~[elasticsearch-2.3.4.jar:2.3.4] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.10.5.Final.jar:?] 
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75) ~[elasticsearch-2.3.4.jar:2.3.4] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.5.Final.jar:?] at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.5.Final.jar:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
[09:01:45,266][ERROR][importer.jdbc.source.standard][pool-3-thread-1] while opening read connection: jdbc:cassandra://172.16.10.254:9042/hocvalam org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)! java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)! at org.apache.cassandra.cql.jdbc.CassandraConnection.<init>(CassandraConnection.java:243) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.cassandra.cql.jdbc.CassandraDriver.connect(CassandraDriver.java:86) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[?:1.8.0_91] at java.sql.DriverManager.getConnection(DriverManager.java:208) ~[?:1.8.0_91] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource.getConnectionForReading(StandardSource.java:485) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource.execute(StandardSource.java:670) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource.fetch(StandardSource.java:597) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.fetch(StandardContext.java:191) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:166) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:199) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:185) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:51) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:50) [elasticsearch-jdbc-2.3.4.0.jar:?] at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:16) [elasticsearch-jdbc-2.3.4.0.jar:?] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91] Caused by: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)! at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_cluster_name(Cassandra.java:1247) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] at org.apache.cassandra.thrift.Cassandra$Client.describe_cluster_name(Cassandra.java:1235) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] 
at org.apache.cassandra.cql.jdbc.CassandraConnection.<init>(CassandraConnection.java:185) ~[cassandra-jdbc-3.1.1-uberjar.jar:?] ... 17 more
Does anyone get the same issue? I don't know whether these errors are caused by elasticsearch-jdbc or by the cassandra-jdbc library. Any help is appreciated, thanks.
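(Not a definitive diagnosis, but the second stack trace looks like the classic symptom of a Thrift-based driver talking to the native-protocol port: 9042 is the CQL native port, this driver speaks Thrift on 9160, and the Thrift server is disabled by default on Cassandra 3.x. A sketch of what to try, assuming default ports:)
# On the Cassandra 3.6 node: start the Thrift RPC server (off by default in 3.x)
nodetool enablethrift
nodetool statusthrift        # should now report "running"
# Then point the JDBC URL at the Thrift port instead of 9042, e.g.
#   "url" : "jdbc:cassandra://172.16.10.254:9160/hocvalam"
# The earlier AuthenticationException on 9042 came from a different, native-protocol
# driver (cassandra-driver-core) and would presumably need credentials configured there.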
Hi, after configuring JDBC for Cassandra I'm getting the following error in jdbc.log. Please check the attached files for the config and logs. jdbc.log.zip
cassandra-points.sh.zip
I used http://www.dbschema.com/cassandra-jdbc-driver.html for the Cassandra JDBC driver, dropped it into the lib folder, and made sure the classpath is set correctly. I added the missing JARs (libthrift-0.6.0.jar, apache-cassandra-thrift-0.8.5.jar) to the lib folder based on errors in the JDBC log. Now I see a different issue: "[10:03:39,941][ERROR][importer.jdbc.context.standard][pool-3-thread-1] at fetch: Implementing class java.lang.IncompatibleClassChangeError: Implementing class at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_65]". Please check the attached log for more info.
Kindly help me fix this issue.
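(An IncompatibleClassChangeError at class-definition time usually points at two incompatible copies of the same library on the classpath; the very old libthrift-0.6.0 / apache-cassandra-thrift-0.8.5 JARs sitting next to a newer driver are a plausible culprit. A quick way to see what is competing in lib/, with paths assumed:)
# List JARs that are likely to carry overlapping Thrift/Cassandra/Guava classes
ls lib | grep -Ei 'thrift|cassandra|guava'
# Show which JARs actually bundle the core Thrift transport class
for j in lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/thrift/transport/TTransport.class' && echo "Thrift classes in: $j"
done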