ChronixDB / chronix.server

The Chronix Server implementation that is based on Apache Solr.
Apache License 2.0

Opentsdb Oddness -- #129

Closed: devaudio closed this issue 6 years ago

devaudio commented 7 years ago

So I am attempting to set up Chronix to ingest mostly OpenTSDB feeds while waiting to update our legacy applications to use the Chronix libs natively for imports. One thing I noticed is that opentsdb.ingest doesn't support gzipped incoming data (the actual OpenTSDB API does), but that's not a deal breaker or anything; I can probably open a pull request to remedy that.

My real issue is getting it to store/retrieve data. I've tried several schemas for the metric tags:

<?xml version="1.0" encoding="UTF-8" ?>

<schema name="Chronix" version="1.5">

    <types>
        <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
        <fieldType name="binary" class="solr.BinaryField"/>
    </types>

    <fields>
        <field name="id" type="string" indexed="true" stored="true" required="true"/>
        <field name="_version_" type="long" indexed="true" stored="true"/>
        <field name="start" type="long" indexed="true" stored="true" required="true"/>
        <field name="end" type="long" indexed="true" stored="true" required="true"/>
        <field name="data" type="binary" indexed="true" stored="true" required="false"/>
        <field name="metric" type="string" indexed="true" stored="true" required="true"/>
        <!-- Added these after it complained about unknown fields, e.g. unknown field 'hashKey' -->
        <field name="hashKey" type="string" indexed="true" stored="true" required="false"/>
        <field name="nodeKey" type="string" indexed="true" stored="true" required="false"/>
        <field name="cmts" type="string" indexed="true" stored="true" required="false"/>
        <field name="upstream" type="string" indexed="true" stored="true" required="false"/>
        <field name="downstream" type="string" indexed="true" stored="true" required="false"/>
        <!-- Dynamic field for tags-->
        <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
    </fields>
    <uniqueKey>id</uniqueKey>
    <solrQueryParser defaultOperator="OR"/>
</schema>

Previously, I had tried to set it up like your Prometheus example, with the OpenTSDB 'tags' stored in dynamic fields, but that did not work. Also, the example schema.xml that ships with the 0.5 zip did not have the metric/string field.

The Grafana plugin just hangs forever, and the Java exploration app doesn't return any data.

The data sent to the OpenTSDB put endpoint looks like this:


[
  {
    "metric": "cablemodem.receive.modem",
    "timestamp": 1492613453,
    "value": "1.6",
    "tags": {
      "cmts": "192.168.0.254",
      "downstream": "cable-downstream-12/1/14",
      "hashKey": "57d076563a856e2ad4342a94d59d340ba31c7b8b",
      "nodeKey": "9a40b1104cca6375627af9b222898328993de5dd"
    }
  },
  {
    "metric": "cablemodem.snr.modem",
    "timestamp": 1492613453,
    "value": "36.8",
    "tags": {
      "cmts": "192.168.0.254",
      "downstream": "cable-downstream-12/1/14",
      "hashKey": "57d076563a856e2ad4342a94d59d340ba31c7b8b",
      "nodeKey": "9a40b1104cca6375627af9b222898328993de5dd"
    }
  }
]

I guess what I am looking for is more verbose documentation on the Grafana plugin, and more documentation/help on setting up the OpenTSDB ingest and a proper schema, so there is a seamless transition from OpenTSDB to Chronix.
FlorianLautenschlager commented 7 years ago

So I found some time to dig into it using the latest beta release (0.5). There is still some work to do (documentation, etc.), but the good news is: it works.

With 0.5 we renamed the metric field to name, since in Chronix a time series could be anything.

So let's add some points:

curl -v -H 'Content-Type: application/json' -d '
[  
   {  
      "metric":"cablemodem.receive.modem",
      "timestamp":1492613453,
      "value":"1.6",
      "tags":{  
         "cmts":"192.168.0.254",
         "downstream":"cable-downstream-12/1/14",
         "hashKey":"57d076563a856e2ad4342a94d59d340ba31c7b8b",
         "nodeKey":"9a40b1104cca6375627af9b222898328993de5dd"
      }
   },
   {  
      "metric":"cablemodem.receive.modem",
      "timestamp":1492613853,
      "value":"3.2",
      "tags":{  
         "cmts":"192.168.0.254",
         "downstream":"cable-downstream-12/1/14",
         "hashKey":"57d076563a856e2ad4342a94d59d340ba31c7b8b",
         "nodeKey":"9a40b1104cca6375627af9b222898328993de5dd"
      }
   },
   {  
      "metric":"cablemodem.snr.modem",
      "timestamp":1492613453,
      "value":"36.8",
      "tags":{  
         "cmts":"192.168.0.254",
         "downstream":"cable-downstream-12/1/14",
         "hashKey":"57d076563a856e2ad4342a94d59d340ba31c7b8b",
         "nodeKey":"9a40b1104cca6375627af9b222898328993de5dd"
      }
   },
   {  
      "metric":"cablemodem.snr.modem",
      "timestamp":1492613853,
      "value":"39.8",
      "tags":{  
         "cmts":"192.168.0.254",
         "downstream":"cable-downstream-12/1/14",
         "hashKey":"57d076563a856e2ad4342a94d59d340ba31c7b8b",
         "nodeKey":"9a40b1104cca6375627af9b222898328993de5dd"
      }
   }
]' http://localhost:8983/solr/chronix/ingest/opentsdb/http/api/put

Should end up like this:

{"responseHeader":{"status":0,"QTime":113}}

It's important to compact documents in order to increase performance and storage efficiency.

curl 'http://localhost:8983/solr/chronix/compact?joinKey=name,downstream'

Note the joinKey parameter that is used to combine documents. In the example above, all documents having the same values in the fields name and downstream are joined into one document. You can add more fields, separated by commas.

Important: Currently the IngestionHandler does a commit per request. If you want to import lots of data, you should use large requests. I will create an issue to make the commit optional.

With the chronix-timeseries-exploration-0.5-beta tool we can now ask for data:

downstream:"cable-downstream-12/1/14"

and apply some functions like

metric{max;count;min}
devaudio commented 7 years ago

downstream is a tag though? The metric is cablemodem.snr.modem. How would I query to get, say, max;count;min of cablemodem.snr.modem where nodeKey = 97263e59f5da1b847a03c5df1873ea300b7a643a?

devaudio commented 7 years ago

The data I have in Chronix looks like this, btw:

http://localhost:8983/solr/chronix/select?indent=on&q={!term%20f=nodeKey_s}97263e59f5da1b847a03c5df1873ea300b7a643a&wt=json

{
  "responseHeader":{
    "zkConnected":true,
    "query_start_long":0,
    "query_end_long":9223372036854775807,
    "status":0,
    "QTime":30},
  "response":{"numFound":578186,"start":0,"docs":[
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQTAMDJwcBBkAAAAD//+NvyngPAAAA",
        "downstream_s":"cable-downstream-12/4/5",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.snr.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "id":"b39a3ecf-7365-468d-8153-e48b10bdc6ca",
        "_version_":1565216177443045376},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQZGBgYGiYcsRBgAEQAAD//1whc08PAAAA",
        "downstream_s":"cable-downstream-12/4/4",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.codewords.corrected.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "id":"cddd23af-9aa7-46be-a3ab-462e2ba24f2f",
        "_version_":1565216177446191104},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQZGBgYFDY+9RBgAEQAAD//5y1MIIPAAAA",
        "downstream_s":"cable-downstream-12/4/4",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.codewords.uncorrect.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "id":"15471e43-5b78-4afa-a94c-6cb5b6fa0783",
        "_version_":1565216177447239680},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQZGAwuFr2PdxJgAEQAAD//34zbYIPAAAA",
        "downstream_s":"cable-downstream-12/4/2",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.codewords.unerrored.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "id":"25d89cae-e0b4-4d06-9124-90449809d33c",
        "_version_":1565216177452482560},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQZDjwmJnxa7iTAAMgAAD//68ZzBYPAAAA",
        "downstream_s":"cable-downstream-12/4/7",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.codewords.unerrored.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "id":"55e33e29-6962-4f29-9642-c97b43d14d19",
        "_version_":1565216177456676864},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQVFeQnJyaZ3hAgAEQAAD//y/CV8kPAAAA",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.pnm.nmter",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "upstream_s":"cable-upstream-1/2/9.0",
        "id":"bc046f8a-2f9c-4158-b3f9-e64b87c1a766",
        "_version_":1565216177465065472},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQvGkoKNEWzuEgwAAIAAD///qSanUPAAAA",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.pnm.icfrmag",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "upstream_s":"cable-upstream-1/2/9.0",
        "id":"15622a41-92d5-4bfb-946e-0aa63bad77a2",
        "_version_":1565216177467162624},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQTAMDTwcBBkAAAAD//+Loy68PAAAA",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.transmit.modem",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "upstream_s":"cable-upstream-1/3/0.0",
        "id":"31696c18-cd3c-45e5-9122-263d51af1854",
        "_version_":1565216177476599808},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQPDdV4f6jYyEOAgyAAAAA//+b2lW3DwAAAA==",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.pnm.tdr",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "upstream_s":"cable-upstream-1/3/0.0",
        "id":"78a410bc-ecc5-4ac3-8abc-e36e62a09df3",
        "_version_":1565216177480794112},
      {
        "cmts_s":"192.168.0.254",
        "data":"H4sIAAAAAAAA/+LiFmDQXFWU+23HJLMDAgyAAAAA//9SXy/JDwAAAA==",
        "end":1492706380,
        "hashKey_s":"e9cb9320ccd3fe5bd4da820cf15eabd9ca0e3e20",
        "name":"cablemodem.pnm.microreflection",
        "nodeKey_s":"97263e59f5da1b847a03c5df1873ea300b7a643a",
        "start":1492706380,
        "type":"metric",
        "upstream_s":"cable-upstream-1/3/0.0",
        "id":"db433f74-a7dd-43e8-871c-d70fd3df56fb",
        "_version_":1565216177481842688}]
  }}

ok

devaudio commented 7 years ago

I am ingesting these similar to how I did with OpenTSDB, i.e. a put of metric/value with 'tags'. Should that be different? I'm using the Go client, with metric: changed to name:, because you guys didn't update the Go client for that yet either :-D

devaudio commented 7 years ago

Bah, I know this issue is a mess now and not an actual issue, but what I need is a schema config that can replace a 'standard' OpenTSDB install/dataset.

2017-04-20T21:12:35.519Z INFO Query took: 2398133 ms for 250495 points

Terrible performance doing name:cablemodem.snr.modem AND nodeKey_s:blah with cf=max,min.

FlorianLautenschlager commented 7 years ago

downstream is a tag though? The metric is cablemodem.snr.modem. How would I query to get, say, max;count;min of cablemodem.snr.modem where nodeKey = 97263e59f5da1b847a03c5df1873ea300b7a643a?

q=metric:cablemodem.snr.modem AND nodeKey:97263e59f5da1b847a03c5df1873ea300b7a643a&cq=metric{max;count;min}

2017-04-20T21:12:35.519Z INFO Query took: 2398133 ms for 250495 points

Did you compact the time series? Chronix is optimized to store large chunks (e.g. a few thousand points) per document:

curl 'http://localhost:8983/solr/chronix/compact?joinKey=name,downstream&ppc=10000'

What is your memory configuration? This is definitely not the standard performance. Using the importer + dataset from the examples, which puts around 7281 points per chunk, I got the following figures on my laptop:

Query | Points | Time (ms) | Memory
q=name:"java.lang:type=Memory/HeapMemoryUsage/init" | 467341 | 74 | 512M
q=name:"java.lang:type=Memory/HeapMemoryUsage/init" | 467341 | 29 | 2G
q=name:"java.lang:type=Memory/HeapMemoryUsage/init"&cf=metric{min;max} | 467341 | 339 | 512M
q=name:"java.lang:type=Memory/HeapMemoryUsage/init"&cf=metric{min;max} | 467341 | 300 | 2G
q=name:"java.lang:type=Memory/HeapMemoryUsage/init"&cf=metric{min;max} (only aggregated result) | 467341 | 89 | 512M
q=name:"java.lang:type=Memory/HeapMemoryUsage/init"&cf=metric{min;max} (only aggregated result) | 467341 | 75 | 2G

... using golang client, with metric: changed to name: because you guys didn't update golang for that yet either :-D

Sorry, I will raise an issue. Well, that is the reason why it is a 0.5 beta. ;-)

devaudio commented 7 years ago

So I have the query time down when I run my own queries... seems mostly reasonable. Putting data in is still slow. I have 6 cores, with 8 GB dedicated to Solr and 32 GB off heap. But that's not my current oddness/problem. First, to show I am not just a time suck, here is something useful:

You should add auto-gzip to Jetty, so it works the same as the OpenTSDB endpoint when data comes in with Content-Encoding: gzip. I added this to ${SOLR_HOME}/server/solr-webapp/webapp/WEB-INF/web.xml and it worked out OK:

<!-- trying to add gzip support to the Jetty -->
    <filter>
      <filter-name>GzipFilter</filter-name>
      <filter-class>org.eclipse.jetty.servlets.GzipFilter</filter-class>
      <init-param>
        <param-name>mimeTypes</param-name>
        <param-value>application/json,text/html,text/plain,text/xml,application/xhtml+xml,text/css,application/javascript,image/svg+xml</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>GzipFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
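
To check whether the filter also handles incoming requests (and not only responses), a quick test could look like this; the file name is hypothetical and the payload is assumed to be gzipped beforehand:

curl -v -H 'Content-Type: application/json' -H 'Content-Encoding: gzip' --data-binary @points.json.gz http://localhost:8983/solr/chronix/ingest/opentsdb/http/api/put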

My current strangeness

Using the Chronix Go client, dynamic string fields are added properly (hashKey --> the doc gets hashKey_s, for example), but when I send things to the OpenTSDB endpoint, instead of adding the fields dynamically, it tells me:

2017-04-27 18:44:50.734 ERROR (qtp1348949648-13) [c:chronix s:shard1 r:core_node3 x:chronix_shard1_replica1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ERROR: [doc=0cad5c70-eb2f-45f0-a6db-e75ee4079ee9] unknown field 'hashKey'

Is there some code I can change to fix that? (I noticed that setTags in de.qaware.chronix.solr.ingestion.format.opentsdb is never called anywhere, maybe that is the area?)

FlorianLautenschlager commented 7 years ago

Hi, sorry for my delayed response, I was off the past few days.

You should add auto-gzip to Jetty, so it works the same as the OpenTSDB endpoint when data comes in with Content-Encoding: gzip. I added this to ${SOLR_HOME}/server/solr-webapp/webapp/WEB-INF/web.xml and it worked out OK:

Nice! Thanks a lot. Can you provide a PR?

Using the Chronix Go client, dynamic string fields are added properly (hashKey --> the doc gets hashKey_s, for example), but when I send things to the OpenTSDB endpoint, instead of adding the fields dynamically --

Well, that is actually a problem. :/ The ingestors should add fields dynamically, too.

We could define common dynamic fields in the standard schema, i.e. *_s for string, *_i for int, *_d for double, etc. Then we can simply append "_{s|i|d}" to a tag name.
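
For illustration, the corresponding declarations in schema.xml could look roughly like this (a sketch that reuses the fieldType names from the schema posted above, not the final standard schema):

    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
    <dynamicField name="*_i" type="int" indexed="true" stored="true"/>
    <dynamicField name="*_d" type="double" indexed="true" stored="true"/>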

A good start for OpenTSDB is https://github.com/ChronixDB/chronix.server/blob/master/chronix-server-ingestion-handler/src/main/java/de/qaware/chronix/solr/ingestion/format/OpenTsdbHttpFormatParser.java#L58

But we should do this in a more general way, i.e. with an abstract class.
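
As a rough sketch of that idea (a hypothetical helper, not the actual parser code; only SolrInputDocument#addField from Solr is assumed):

import java.util.Map;

import org.apache.solr.common.SolrInputDocument;

// Sketch: copy the OpenTSDB tags into the Solr document, appending the
// "_s" suffix so each tag matches the *_s dynamic string field.
final class TagFields {

    private TagFields() {
    }

    static void addTags(SolrInputDocument doc, Map<String, String> tags) {
        for (Map.Entry<String, String> tag : tags.entrySet()) {
            // e.g. "hashKey" becomes the field "hashKey_s"
            doc.addField(tag.getKey() + "_s", tag.getValue());
        }
    }
}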

devaudio commented 7 years ago

I can do a PR for the FormatParser (that's as simple as appending _s to the tags), but for the web.xml I don't see how you generate it, unless it's from the chronix-server-test-integration directory?


FlorianLautenschlager commented 7 years ago

I can do a PR for the FormatParser (that's as simple as append _s for tags)

Great. :1st_place_medal:

but for the web.xml i don't see how you generate it, unless it's from the chronix-server-test-integration directory?

All static files (web.xml, schema.xml, ...) are copied from chronix-server-test-integration/src/inttest/resources/de/qaware/chronix into the directory of the downloaded and unzipped Apache Solr installation.

devaudio commented 7 years ago

So the web.xml thing only worked on responses, not requests; I added gzip support for requests as well. Still having very slow 'updates' compared to OpenTSDB. Here is a current 'run':

opentsdb: Pushing 62207 metrics --> 22 seconds
chronix: Pushing 62176 metrics --> 128 seconds

:/ I tried futzing with a bunch of vars in solrconfig.xml and couldn't get it much better.

devaudio commented 7 years ago

Same servers, with 12 cores, 32 GB dedicated to Solr/Chronix and 32 GB dedicated to OpenTSDB.

devaudio commented 7 years ago

(128 GB RAM total for those dudes)

FlorianLautenschlager commented 7 years ago

What is the chunk size you send to the ingestion handler? Currently there is a commit after every request; perhaps that is a point to dig into. Without the commit the data is added but not visible to queries. I will give it a try tomorrow.

devaudio commented 7 years ago

I want to send ~8 MB chunks, but I can't figure out how to do that on the embedded server, so they are 6 KB chunks (OpenTSDB has chunk=8MB etc.).

FlorianLautenschlager commented 7 years ago

OK, those are huge chunks. 8 MB of a single time series or of multiple?

but i can't figure out how to do that on the embedded server so they are 6kb chunks

So you are sending chunks of 6 KB?

I will later implement an example that imports the example data using the OpenTSDB ingestor.

devaudio commented 7 years ago

Using the Chronix Go client lib isn't much faster either... maybe it's my solrconfig? I have a 40-node cluster (36 are 'data/region servers') that are all 12 cores x 128 GB.

FlorianLautenschlager commented 7 years ago

So I just made the "commit" after every request configurable. With the example data set I measured the following figures (6 GB RAM). The data is only added, with a single commit after the import is done.

Chronix-Protocol: Import done (took 40 sec). Imported 5836 time series with 76,439,668 points/metrics.
OpenTSDB Ingestion: Import done (took 182 sec). Imported 5836 time series with 76,439,668 points/metrics.

That is 1,910,991 resp. 597,184 metrics per second. Yes, there is room for improvement, but in comparison to

opentsdb: Pushing 62207 metrics --> 22 seconds

it is much faster than the 2827 metrics per second of your example with OpenTSDB.

devaudio commented 7 years ago

trying it out now

FlorianLautenschlager commented 7 years ago

Sounds good. I wanna see your problem fly away ;-)

I also updated the examples (importer) to send the data with OpenTSDB (configuration parameter)

Note that the commit after every request is the default: https://github.com/ChronixDB/chronix.server/blob/master/chronix-server-ingestion-handler/src/main/java/de/qaware/chronix/solr/ingestion/AbstractIngestionHandler.java#L66 Use the request parameter commit=false to avoid this.
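
For example, a batch import without per-request commits could look roughly like this (a sketch; the explicit commit at the end assumes the standard Solr update handler is available on the chronix core):

curl -H 'Content-Type: application/json' -d '[ ... many points in one request ... ]' 'http://localhost:8983/solr/chronix/ingest/opentsdb/http/api/put?commit=false'
curl 'http://localhost:8983/solr/chronix/update?commit=true'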

devaudio commented 7 years ago

So, still working on this... I ramped up the number of cores to 36. What I can't seem to do is get outlier to work.

I have a metric that has float values for hosts over time. I want to find ONLY the hosts that are outliers (in this case, my host tag is hashKey_s instead of Host_s).

I tried this, but it simply returned all the hashKey_s values, instead of only the data points that were outliers:

select?df=hashKey_s&cf=metric{outlier}&q=name:(cablemodem.offset.modem) AND start:1494881295238 AND end:1494967695238&fl=hashKey_s

What is wrong with that query?

FlorianLautenschlager commented 7 years ago

ramped up the # of cores to 36....

Oh my god. Is it possible to share a part of your dataset? Or could we do a video call? It's hard to help from here without data.

I tried this, but it simply returned all the hashKey_s values, instead of only datapoints that were outliers:

Does it return all hashKey_s values whose time series data has outliers, or is the result false?

The outlier detection is an analysis that returns true | false. In order to also get the values, you have to request the field +data. It then returns the time series data (not only the outlier values). If it would help, we can extend the outlier function.
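
For example, a minimal variation of the query above that also requests the raw values (assuming the +data field mentioned here; the rest of the query is unchanged) would be:

select?df=hashKey_s&cf=metric{outlier}&q=name:(cablemodem.offset.modem) AND start:1494881295238 AND end:1494967695238&fl=hashKey_s,+data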

devaudio commented 7 years ago

I could do a video call, sure... I have Skype/Google Hangouts and all that. I've been busy moving to another state, that's why I haven't been as responsive. I will have a PR for chronix.server as well in a week or two. The dataset is ~587M objects with 10-15 metrics/tags each, so about 5 billion points a day or so.

FlorianLautenschlager commented 7 years ago

Would be great. What about next week? I would prefer Skype, as Hangouts is sometimes annoying.

devaudio commented 7 years ago

Yeah, that is good. I am in EST/EDT -- dunno what time works for you? I can do any time from like 7:00 EDT to 23:00 EDT.

FlorianLautenschlager commented 7 years ago

@devaudio could you please send me an email at chronix@qaware.de? Then we can make an appointment. I am in CET. Any time from 7:00 CET to 23:00 CET ;-)

devaudio commented 6 years ago

Yeah, sorry about that. I moved from Virginia to South Carolina, so this project got put on the back burner. I was never able to get it to ingest as fast as plain OpenTSDB, so it's abandoned for now... but I like the math functions at the edge instead of later... I'll keep watching here.

FlorianLautenschlager commented 6 years ago

Thanks for your answer. Well, tuning the ingestion of Solr/Chronix is a hard task. I always had very good ingestion times when writing batches with the Chronix format, compared to the HTTP protocol of OpenTSDB. But not every scenario is an offline import ;-)