Closed ricardmestre closed 9 years ago
Can you repaste your cyanite config? It looks weird in the output here. Also, have you confirmed your ES instance is running on port 9300? (Just try bringing it up in a URL.)
Sorry for the wrong format, I've updated. And yes, I confirmed that ES is running on port 9300. Do you know what's wrong?
I have tried to change the configuration of Cyanite to connect to Elasticsearch and now it works. The configuration is the following:
index:
  use: "io.cyanite.es_path/es-rest" # another option: org.spootnik.cyanite.es_path/es-native
  index: "my_paths" # defaults to "cyanite_paths"
  url: "http://localhost:9200" # defaults to http://localhost:9200
With the other configuration it DOESN'T work:
index:
  use: "io.cyanite.es_path/es-native"
  index: "my_paths" # defaults to "cyanite_paths"
  host: "localhost" # defaults to localhost
  port: 9300 # defaults to 9300
  cluster_name: "es_4_cyanite" # REQUIRED! this is specific to your cluster and has no sensible default
Can anyone explain why it works with the first configuration but doesn't work with the second one?
Thank you!
I suspect es-native is not working, or the port isn't reachable. Is 9300 open on the ES node?
What version of cyanite? On Oct 21, 2014 5:17 AM, "ricardmestre" notifications@github.com wrote:
Elasticsearch is listening on ports 9200 and 9300, and the Cyanite version is 0.1.3.
Cyanite can't connect to ES via the REST API in our setup:
WARNING: update already refers to: #'clojure.core/update in namespace: clj-http.client, being replaced by: #'clj-http.client/update
WARNING: update already refers to: #'clojure.core/update in namespace: clojurewerkz.elastisch.native, being replaced by: #'clojurewerkz.elastisch.native/update
Exception in thread "main" java.lang.IllegalStateException: Attempting to call unbound fn: #'clj-http.client/update
    at clojure.lang.Var$Unbound.throwArity(Var.java:43)
    at clojure.lang.AFn.invoke(AFn.java:48)
    at clj_http.client$wrap_decompression$fn__11458.invoke(client.clj:283)
    at clj_http.client$wrap_input_coercion$fn__11542.invoke(client.clj:445)
    at clj_http.client$wrap_additional_header_parsing$fn__11563.invoke(client.clj:494)
    at clj_http.client$wrap_output_coercion$fn__11533.invoke(client.clj:398)
    at clj_http.client$wrap_exceptions$fn__11416.invoke(client.clj:164)
    at clj_http.client$wrap_accept$fn__11573.invoke(client.clj:521)
    at clj_http.client$wrap_accept_encoding$fn__11579.invoke(client.clj:536)
    at clj_http.client$wrap_content_type$fn__11568.invoke(client.clj:512)
    at clj_http.client$wrap_form_params$fn__11649.invoke(client.clj:683)
    at clj_http.client$wrap_nested_params$fn__11666.invoke(client.clj:707)
    at clj_http.client$wrap_method$fn__11619.invoke(client.clj:624)
    at clj_http.cookies$wrap_cookies$fn__9158.invoke(cookies.clj:121)
    at clj_http.links$wrap_links$fn__10650.invoke(links.clj:50)
    at clj_http.client$wrap_unknown_host$fn__11674.invoke(client.clj:726)
    at clj_http.client$get.doInvoke(client.clj:829)
    at clojure.lang.RestFn.invoke(RestFn.java:423)
    at clojurewerkz.elastisch.rest$get.invoke(rest.clj:47)
    at clojurewerkz.elastisch.rest$connect.invoke(rest.clj:286)
    at io.cyanite.es_path$es_rest.invoke(es_path.clj:129)
    at clojure.lang.Var.invoke(Var.java:379)
    at io.cyanite.config$instantiate.invoke(config.clj:94)
    at io.cyanite.config$get_instance.invoke(config.clj:102)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.core$apply.invoke(core.clj:628)
    at clojure.core$update_in.doInvoke(core.clj:5853)
    at clojure.lang.RestFn.invoke(RestFn.java:467)
    at io.cyanite.config$init.invoke(config.clj:128)
    at io.cyanite$_main.doInvoke(cyanite.clj:31)
    at clojure.lang.RestFn.applyTo(RestFn.java:137)
    at io.cyanite.main(Unknown Source)
Configuration:
index:
  use: "io.cyanite.es_path/es-rest"
  index: "cyanite_paths"
  url: "http://localhost:9200"
Cyanite was compiled from the current master branch using leiningen.
I have the same issue as MichaelHierweck
I can reproduce and will fix shortly, sorry about this !
Hi @pyr, in the meantime while you fix the code, is there any available workaround to keep working with the current version (0.1.3)?
I haven't put together a pull request yet (still testing), but I believe the issue is a dependency conflict with clj-http. I added this to the project.clj and rebuilt the jar:
[clj-http "1.0.1"
:exclusions [commons-codec]]
lein deps complains about a conflict with com.fasterxml.jackson.core/jackson-core, but it builds OK, and when I run it things seem to work correctly. I am very (!) new to clojure so I may be off base, but hopefully this will help out.
I fixed this and have a jar that works. The best way is to use clj-http "1.0.2" and exclude the clj-http that elastisch brings in.
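For anyone who wants to reproduce the fix locally, the dependency change described above might look roughly like this in project.clj. The exact version numbers and the elastisch coordinate here are assumptions for illustration; check the actual project.clj and the merged pull request:

```clojure
;; Hypothetical project.clj fragment: pin clj-http explicitly and stop
;; elastisch from pulling in its own (conflicting) clj-http version.
:dependencies [[clj-http "1.0.2"]
               [clojurewerkz/elastisch "2.1.0"
                :exclusions [clj-http]]]
```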
I can provide the working jar if anyone needs it ASAP.
/cc @wrathofchris
Can confirm that this works, running live in a 3-host cyanite cluster.
I submitted a pull request and @pyr merged it in - this should be fixed in master.
Awesome :)
@Chris: How do you cluster cyanite? Do you cluster Cassandra only and run multiple independent cyanite instances against the Cassandra cluster? Or do you even cluster cyanite itself?
@MichaelHierweck multiple cyanite behind a load balancer, using ElasticSearch (http) as a shared path cache.
Before we added the shared path cache, we saw cyanite nodes that hadn't yet seen a metric path return an empty set rather than reading from cassandra.
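To illustrate why a shared path cache matters: a Graphite-style path index typically stores every dotted prefix of each metric name, so any node can answer queries like `collectd.*` without having seen the metric itself. A minimal sketch of that expansion (the field names here are illustrative, not cyanite's exact ES schema):

```python
def index_entries(path):
    """Expand a dotted metric path into the prefix entries a shared
    path index (such as cyanite's ES path cache) typically stores.

    Each prefix is recorded with its depth and a flag marking whether
    it is a leaf (an actual metric) or an intermediate branch.
    """
    segments = path.split(".")
    return [
        {
            "path": ".".join(segments[:depth]),
            "depth": depth,
            "leaf": depth == len(segments),
        }
        for depth in range(1, len(segments) + 1)
    ]

# A node that has never received this metric can still resolve it
# from the shared index instead of returning an empty set.
print(index_entries("collectd.host1.cpu"))
```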
I have cyanite instances behind carbon-relay-ng, and it works quite well for routing data to different instances. Some of my instances have different rollups; since cyanite can't support rule-based rollups, this was the quickest way to support them. On Nov 4, 2014 2:35 PM, "Chris" notifications@github.com wrote:
@AeroNotix Can you provide me with the working jar you built, please?
@hanynowsky master should work, could you try that first?
The issue persists with the master branch; only memory path storing works for me.
Can you post what error you are getting along with your configuration?
I see no errors in the cyanite logs; the index is created but doesn't get filled with paths, and cyanite doesn't complain in the logs:
DEBUG [2014-11-06 15:21:28,792] async-dispatch-23 - io.cyanite.store - Batch written
carbon:
  host: "0.0.0.0"
  port: 2003
  rollups:
Here are the cyanite startup logs. Is this a major problem? --> DEBUG [2014-11-06 15:38:54,229] main - org.elasticsearch.plugins - [Gatecrasher] [/plugins] directory does not exist. ??
DEBUG [2014-11-06 15:38:51,867] main - io.cyanite.config - building :store with io.cyanite.store/cassandra-metric-store
INFO [2014-11-06 15:38:51,868] main - io.cyanite.store - creating cassandra metric store
DEBUG [2014-11-06 15:38:51,908] main - com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
DEBUG [2014-11-06 15:38:51,970] main - com.datastax.driver.core.SystemProperties - com.datastax.driver.MAX_SCHEMA_AGREEMENT_WAIT_SECONDS is undefined, using default value 10
DEBUG [2014-11-06 15:38:52,135] main - com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
DEBUG [2014-11-06 15:38:52,136] main - com.datastax.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
DEBUG [2014-11-06 15:38:52,181] main - com.datastax.driver.core.ControlConnection - [Control connection] Refreshing schema
DEBUG [2014-11-06 15:38:52,247] main - com.datastax.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
DEBUG [2014-11-06 15:38:52,325] main - com.datastax.driver.core.ControlConnection - [Control connection] Successfully connected to localhost/127.0.0.1:9042
INFO [2014-11-06 15:38:52,325] main - com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
INFO [2014-11-06 15:38:52,328] Cassandra Java Driver worker-0 - com.datastax.driver.core.Cluster - New Cassandra host localhost/127.0.0.1:9042 added
DEBUG [2014-11-06 15:38:52,355] Cassandra Java Driver worker-1 - com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
DEBUG [2014-11-06 15:38:52,359] Cassandra Java Driver worker-1 - com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-3, inFlight=0, closed=false] Transport initialized and ready
DEBUG [2014-11-06 15:38:52,360] Cassandra Java Driver worker-1 - com.datastax.driver.core.Session - Added connection pool for localhost/127.0.0.1:9042
DEBUG [2014-11-06 15:38:52,378] main - io.cyanite.config - building :index with io.cyanite.es_path/es-native
DEBUG [2014-11-06 15:38:54,229] main - org.elasticsearch.plugins - [Gatecrasher] [/plugins] directory does not exist.
INFO [2014-11-06 15:38:54,229] main - org.elasticsearch.plugins - [Gatecrasher] loaded [], sites []
DEBUG [2014-11-06 15:38:54,240] main - org.elasticsearch.common.compress.lzf - using [UnsafeChunkDecoder] decoder
DEBUG [2014-11-06 15:38:54,649] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [generic], type [cached], keep_alive [30s]
DEBUG [2014-11-06 15:38:54,655] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [index], type [fixed], size [2], queue_size [200]
DEBUG [2014-11-06 15:38:54,656] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [bulk], type [fixed], size [2], queue_size [50]
DEBUG [2014-11-06 15:38:54,656] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [get], type [fixed], size [2], queue_size [1k]
DEBUG [2014-11-06 15:38:54,657] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [search], type [fixed], size [6], queue_size [1k]
DEBUG [2014-11-06 15:38:54,657] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [suggest], type [fixed], size [2], queue_size [1k]
DEBUG [2014-11-06 15:38:54,657] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [percolate], type [fixed], size [2], queue_size [1k]
DEBUG [2014-11-06 15:38:54,657] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,658] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,658] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [merge], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,658] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,658] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,659] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,659] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [snapshot_data], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,659] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
DEBUG [2014-11-06 15:38:54,659] main - org.elasticsearch.threadpool - [Gatecrasher] creating thread_pool [bench], type [scaling], min [1], size [1], keep_alive [5m]
DEBUG [2014-11-06 15:38:54,680] main - org.elasticsearch.transport.netty - [Gatecrasher] using worker_count[4], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
DEBUG [2014-11-06 15:38:54,682] main - org.elasticsearch.client.transport - [Gatecrasher] node_sampler_interval[5s]
DEBUG [2014-11-06 15:38:54,703] main - org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
DEBUG [2014-11-06 15:38:54,703] main - org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
DEBUG [2014-11-06 15:38:54,728] main - org.elasticsearch.client.transport - [Gatecrasher] adding address [[#transport#-1][clone][inet[/4xxxxx:9300]]]
DEBUG [2014-11-06 15:38:54,756] main - org.elasticsearch.transport.netty - [Gatecrasher] connected to node [[#transport#-1][clone][inet[/4xxxxx:9300]]]
DEBUG [2014-11-06 15:38:54,899] main - org.elasticsearch.transport.netty - [Gatecrasher] connected to node [[Harold H. Harold][OD4SGDIjRGGwfvK9aClcXQ][clone][inet[/4xxxxx:9300]]]
INFO [2014-11-06 15:38:54,977] main - io.cyanite.carbon - starting carbon handler: {:rollups ({:rollup-to #<config$assoc_rollup_to$fn8817$fn8819 io.cyanite.config$assoc_rollup_to$fn8817$fn8819@745cc8e7>, :rollup 10, :period 60480, :ttl 604800} {:rollup-to #<config$assoc_rollup_to$fn8817$fn8819 io.cyanite.config$assoc_rollup_to$fn8817$fn8819@7789f15f>, :rollup 60, :period 259200, :ttl 15552000} {:rollup-to #<config$assoc_rollup_to$fn8817$fn8819 io.cyanite.config$assoc_rollup_to$fn8817$fn8819@145d149>, :rollup 600, :period 52560, :ttl 31536000}), :readtimeout 30, :port 2003, :host 0.0.0.0, :enabled true}
DEBUG [2014-11-06 15:38:54,981] main - io.netty.util.internal.JavassistTypeParameterMatcherGenerator - Generated: io.netty.util.internal.matchers.io.netty.buffer.ByteBufMatcher
DEBUG [2014-11-06 15:38:54,995] main - io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
DEBUG [2014-11-06 15:38:54,995] main - io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
DEBUG [2014-11-06 15:38:55,003] main - io.netty.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xb9825fcc32ac342d
DEBUG [2014-11-06 15:38:55,010] main - io.netty.channel.ChannelOutboundBuffer - -Dio.netty.threadLocalDirectBufferSize: 65536
DEBUG [2014-11-06 15:38:55,011] main - io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
DEBUG [2014-11-06 15:38:55,028] main - io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
DEBUG [2014-11-06 15:38:55,030] main - io.netty.util.NetUtil - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%1)
DEBUG [2014-11-06 15:38:55,031] main - io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
DEBUG [2014-11-06 15:38:55,043] main - org.eclipse.jetty.util.component.ContainerLifeCycle - org.eclipse.jetty.server.Server@4f7c26b4 added {qtp581817677{STOPPED,8<=0<=50,i=0,q=0},AUTO}
DEBUG [2014-11-06 15:38:55,044] main - org.eclipse.jetty.util.component.ContainerLifeCycle - org.eclipse.jetty.server.Server@4f7c26b4 added {org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@5fd11c30,AUTO}
DEBUG [2014-11-06 15:38:55,047] main - org.eclipse.jetty.util.component.ContainerLifeCycle - HttpConnectionFactory@76541506{HTTP/1.1} added {HttpConfiguration@a30799b{32768,8192/8192,https://:443,[]},POJO}
DEBUG [2014-11-06 15:38:55,048] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{null}{0.0.0.0:0} added {org.eclipse.jetty.server.Server@4f7c26b4,UNMANAGED}
DEBUG [2014-11-06 15:38:55,049] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{null}{0.0.0.0:0} added {qtp581817677{STOPPED,8<=0<=50,i=0,q=0},AUTO}
DEBUG [2014-11-06 15:38:55,049] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{null}{0.0.0.0:0} added {org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@5fd11c30,AUTO}
DEBUG [2014-11-06 15:38:55,049] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{null}{0.0.0.0:0} added {org.eclipse.jetty.io.ArrayByteBufferPool@1fc8e3d,POJO}
DEBUG [2014-11-06 15:38:55,050] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{null}{0.0.0.0:0} added {HttpConnectionFactory@76541506{HTTP/1.1},AUTO}
DEBUG [2014-11-06 15:38:55,051] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:0} added {org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@3cb74a47,MANAGED}
DEBUG [2014-11-06 15:38:55,052] main - org.eclipse.jetty.util.component.ContainerLifeCycle - org.eclipse.jetty.server.Server@4f7c26b4 added {ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080},AUTO}
DEBUG [2014-11-06 15:38:55,054] main - org.eclipse.jetty.util.component.ContainerLifeCycle - org.eclipse.jetty.server.handler.HandlerList@650f0bc0[qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15] added {qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15,AUTO}
DEBUG [2014-11-06 15:38:55,054] main - org.eclipse.jetty.util.component.ContainerLifeCycle - org.eclipse.jetty.server.Server@4f7c26b4 added {org.eclipse.jetty.server.handler.HandlerList@650f0bc0[qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15],AUTO}
DEBUG [2014-11-06 15:38:55,054] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting org.eclipse.jetty.server.Server@4f7c26b4
INFO [2014-11-06 15:38:55,056] main - org.eclipse.jetty.server.Server - jetty-9.2.z-SNAPSHOT
DEBUG [2014-11-06 15:38:55,066] main - org.eclipse.jetty.server.handler.AbstractHandler - starting org.eclipse.jetty.server.Server@4f7c26b4
DEBUG [2014-11-06 15:38:55,066] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting qtp581817677{STOPPED,8<=0<=50,i=0,q=0}
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7346ms qtp581817677{STARTED,8<=8<=50,i=6,q=0}
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@5fd11c30
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7347ms org.eclipse.jetty.util.thread.ScheduledExecutorScheduler@5fd11c30
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting org.eclipse.jetty.server.handler.HandlerList@650f0bc0[qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15]
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.server.handler.AbstractHandler - starting org.eclipse.jetty.server.handler.HandlerList@650f0bc0[qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15]
DEBUG [2014-11-06 15:38:55,068] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15
DEBUG [2014-11-06 15:38:55,069] main - org.eclipse.jetty.server.handler.AbstractHandler - starting qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15
DEBUG [2014-11-06 15:38:55,069] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7348ms qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15
DEBUG [2014-11-06 15:38:55,069] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7348ms org.eclipse.jetty.server.handler.HandlerList@650f0bc0[qbits.jet.server.proxy$org.eclipse.jetty.server.handler.AbstractHandler$ff19274a@3ecc9e15]
DEBUG [2014-11-06 15:38:55,069] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080}
DEBUG [2014-11-06 15:38:55,070] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080} added {sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:8080],POJO}
DEBUG [2014-11-06 15:38:55,070] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting HttpConnectionFactory@76541506{HTTP/1.1}
DEBUG [2014-11-06 15:38:55,070] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7349ms HttpConnectionFactory@76541506{HTTP/1.1}
DEBUG [2014-11-06 15:38:55,070] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@3cb74a47
DEBUG [2014-11-06 15:38:55,073] main - org.eclipse.jetty.util.component.AbstractLifeCycle - starting org.eclipse.jetty.io.SelectorManager$ManagedSelector@23c9e065 keys=-1 selected=-1
DEBUG [2014-11-06 15:38:55,073] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7352ms org.eclipse.jetty.io.SelectorManager$ManagedSelector@23c9e065 keys=0 selected=0
DEBUG [2014-11-06 15:38:55,074] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7353ms org.eclipse.jetty.server.ServerConnector$ServerConnectorManager@3cb74a47
DEBUG [2014-11-06 15:38:55,074] qtp581817677-45-selector-ServerConnectorManager@3cb74a47/0 - org.eclipse.jetty.io.SelectorManager - Starting Thread[qtp581817677-45-selector-ServerConnectorManager@3cb74a47/0,5,main] on org.eclipse.jetty.io.SelectorManager$ManagedSelector@23c9e065 keys=0 selected=0
DEBUG [2014-11-06 15:38:55,075] main - org.eclipse.jetty.util.component.ContainerLifeCycle - ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080} added {acceptor-0@71a4d49f,POJO}
DEBUG [2014-11-06 15:38:55,075] qtp581817677-45-selector-ServerConnectorManager@3cb74a47/0 - org.eclipse.jetty.io.SelectorManager - Selector loop waiting on select
INFO [2014-11-06 15:38:55,076] main - org.eclipse.jetty.server.ServerConnector - Started ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080}
DEBUG [2014-11-06 15:38:55,076] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7355ms ServerConnector@27b786f2{HTTP/1.1}{0.0.0.0:8080}
INFO [2014-11-06 15:38:55,076] main - org.eclipse.jetty.server.Server - Started @7355ms
DEBUG [2014-11-06 15:38:55,076] main - org.eclipse.jetty.util.component.AbstractLifeCycle - STARTED @7355ms org.eclipse.jetty.server.Server@4f7c26b4
DEBUG [2014-11-06 15:39:02,729] nioEventLoopGroup-2-2 - io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
It works with the ES REST API.
The native API does not work here either. There seems to be binary communication between ES and cyanite, though.
What should cyanite.yaml look like when accessing a multi-node cassandra cluster?
I'm using a three node cluster and let cyanite connect to the local cassandra node.
store:
  cluster: 'localhost'
  keyspace: 'metric'
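For a multi-node cluster, one option may be to list several contact points instead of a single host. I believe the `cluster` key also accepts a sequence, but treat this as an assumption and verify it against the cyanite README for your version; the hostnames below are purely illustrative:

```yaml
store:
  cluster:
    - 'cass1.example.com'
    - 'cass2.example.com'
    - 'cass3.example.com'
  keyspace: 'metric'
```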
The keyspace is configured with REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 2 }.
However, cyanite fails to write/read metrics even when a single (remote) cassandra node is offline. Reading/writing metrics via cqlsh works, though.
Cyanite complains with a NoHostAvailableException (com.datastax.driver.core.exceptions.NoHostAvailableException) while cqlsh is still able to write to/read from the metric table.
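One thing worth checking (a hypothesis, not a confirmed diagnosis): cqlsh reads and writes at consistency ONE by default, while a client configured for QUORUM needs both replicas when replication_factor is 2, so a single node being down can fail QUORUM requests for the keys it holds even though cqlsh still succeeds. You can reproduce the asymmetry from cqlsh (the `metric.metric` table name is assumed here; substitute your actual keyspace/table):

```sql
-- cqlsh defaults to CONSISTENCY ONE, which needs only one live replica.
CONSISTENCY ONE;
SELECT * FROM metric.metric LIMIT 1;   -- succeeds with a node down

-- With replication_factor 2, QUORUM means both replicas must answer:
CONSISTENCY QUORUM;
SELECT * FROM metric.metric LIMIT 1;   -- may fail while a replica is offline
```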
The native API will not be brought back in the next release.
Hi everyone,
We are working with Cyanite to store metrics in Cassandra, keep a cache in Elasticsearch, and read them through Graphite-web, all of it in a multi-node cluster. After an upgrade of Cassandra to version 2.1 and Cyanite to version 0.1.3, we have problems with the Cyanite configuration: when we want to view the metrics, Graphite-web doesn't find them.
cyanite.yaml:
/var/log/cyanite.log:
ERROR [2014-10-13 11:06:36,034] async-dispatch-27 - io.cyanite.es_path - No node available
org.elasticsearch.client.transport.NoNodeAvailableException: No node available
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:196)
    at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:94)
    at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:172)
    at org.elasticsearch.client.transport.TransportClient.get(TransportClient.java:375)
    at clojurewerkz.elastisch.native$get.invoke(native.clj:63)
    at clojurewerkz.elastisch.native.document$get.invoke(document.clj:136)
    at clojurewerkz.elastisch.native.document$presentQMARK.invoke(document.clj:164)
    at clojure.core$partial$fn4328.invoke(core.clj:2503)
    at io.cyanite.es_path$es_native$reify5158$fn5302$state_machine4698auto__5303$fn5305.invoke(es_path.clj:219)
    at io.cyanite.es_path$es_native$reify5158$fn5302$state_machine4698auto____5303.invoke(es_path.clj:217)
    at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:940)
    at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:944)
    at clojure.core.async.impl.ioc_macros$takeBANG$fn4714.invoke(ioc_macros.clj:953)
    at clojure.core.async.impl.channels.ManyToManyChannel$fn__1714.invoke(channels.clj:102)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Do you have an idea of what is going wrong? Is the configuration correct? At least in the previous version this worked fine.
Thank you very much!