Open FilipJanitor opened 7 years ago
Hi, sorry for the delay - it seems I had unwatched the project... damn :worried:
I will dig into it if it is still present.
Hi, it was a long time ago, but I remember I kept tinkering with it and found something out. The problem was in using a UNIX timestamp despite specifying it in the config. After changing the date format to something like DD.MM.YYYY HH:MM:SSS the import was successful.
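For anyone hitting the same thing, here is a rough sketch of the conversion I mean (Python; the exact output pattern is an assumption on my part - whatever pattern you put in config.yml is what the timestamps in the data have to match):

```python
from datetime import datetime, timezone

def epoch_ms_to_date_string(ts_ms: int) -> str:
    """Convert a UNIX timestamp in milliseconds to a DD.MM.YYYY HH:MM:SS.mmm string."""
    dt = datetime.fromtimestamp(ts_ms // 1000, tz=timezone.utc)
    return dt.strftime("%d.%m.%Y %H:%M:%S") + f".{ts_ms % 1000:03d}"

# 1356998400000 ms since epoch is 2013-01-01 00:00:00 UTC
print(epoch_ms_to_date_string(1356998400000))  # 01.01.2013 00:00:00.000
```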
There were other problems, however: after checking the importer's source I found out that it filters files by their file extension and works only with .gz or .csv files. Yet with the extension present, the importer appended it to the value of the last attribute specified in the file name.
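I no longer have the importer's source in front of me, so take this as a hypothetical reconstruction of the effect rather than the actual code - if the file name is split into attributes without stripping the extension first, the last attribute keeps the extension glued to it:

```python
import os

def parse_attributes_buggy(filename: str) -> list:
    # Splitting the raw file name leaks the extension into the last attribute.
    return filename.split("_")

def parse_attributes_fixed(filename: str) -> list:
    # Stripping the extension first keeps the last attribute clean.
    stem, _ext = os.path.splitext(filename)
    return stem.split("_")

print(parse_attributes_buggy("XXX_name_1356998400000_0.csv"))
# ['XXX', 'name', '1356998400000', '0.csv']  <- ".csv" appended to the last attribute
print(parse_attributes_fixed("XXX_name_1356998400000_0.csv"))
# ['XXX', 'name', '1356998400000', '0']
```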
Hope it helps - maybe I missed or misunderstood something in the documentation when doing it.
It would be great if I could use a small amount of your CSV data. Is that possible?
Uh, sorry, right now I am on vacation without my work computer with all the data - I will be back on Sunday evening, so I will try to find the samples then.
Ok, so I searched and found some data I used when tinkering with the system. XXX.zip
The original files were a lot bigger (approx. 10,000 lines each), but this one has been shortened for convenience. The format shows pretty well even in 10 lines. As you can see, the timestamp is in ms since epoch and only one attribute/field is defined. I kind of got lost in the attribute/field part of the documentation, but the importer didn't complain when only this one was defined.
I don't remember much of what I was doing, but from the terminal history it seems I experimented with names like XXX_name_1356998400000_0.csv or XXX_name_1356998400000_1356998401000.csv (with the appropriate config.yml changes) before I found out that the date format seemed to be the culprit. My naming experiments probably led to other problems with correct querying, but that is a whole different story.
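For completeness, my splitting experiment looked roughly like this (Python sketch; the XXX prefix and the metric/start/end naming scheme are just what I tried back then, not something the documentation prescribes):

```python
import csv

# Hypothetical input in the wide format: DATE;metric0;...;metricN (epoch ms).
rows = [
    ["DATE", "metric0", "metric1"],
    ["1356998400000", "1194", "3765"],
    ["1356998401000", "4661", "3986"],
]

start, end = rows[1][0], rows[-1][0]
for col in range(1, len(rows[0])):
    metric = rows[0][col]
    # One file per metric, attributes encoded in the file name as in my
    # XXX_name_<start>_<end>.csv experiments ("XXX" stands in for the real prefix).
    out_name = f"XXX_{metric}_{start}_{end}.csv"
    with open(out_name, "w", newline="") as f:
        w = csv.writer(f, delimiter=";")
        w.writerow(["DATE", metric])
        for row in rows[1:]:
            w.writerow([row[0], row[col]])
    print(out_name)
```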
Thanks, I will give it a try and let you know.
Hi, I am currently testing Chronix as a time series database (because it looks really cool) and I need to import some old data into it. I followed the quickstart guide - I installed and started Chronix successfully (I tried both the 0.4 and 0.5-beta versions). Then I downloaded the Chronix importer (again, I tested both the 0.3 and 0.5-beta versions), modified config.yml (config.yml.zip), put my data into the data/ directory and ran into this error:
```
main@debian:~/importer-0.5-beta$ ./import.sh
21:34:56.398 [main] INFO de.qaware.chronix.importer.CSVImporter - Start importing files to the Chronix.
21:34:56.410 [main] INFO de.qaware.chronix.importer.csv.FileImporter - Writing imported metrics to metrics.csv
21:34:56.411 [main] INFO de.qaware.chronix.importer.csv.FileImporter - Import supports csv files as well as gz compressed csv files.
21:34:56.480 [main] ERROR de.qaware.chronix.importer.csv.FileImporter - Exception occurred during reading points.
21:34:56.481 [main] INFO de.qaware.chronix.importer.CSVImporter - Done importing. Trigger commit.
21:34:56.795 [main] INFO de.qaware.chronix.importer.CSVImporter - Import done. Imported 0 time series with 0 points
```
As you can see, there is not much information, so I don't know how to proceed or what to try next. Regarding my data, there are 2000 metrics (which means the lines are pretty long). The whole dataset is split into ~100 MB CSV files, each of them in this format:
```
DATE;metric0;metric1;metric2; ... all the way to ... ;metric1999
1356998400000;1194;3765;3727;1432;3220; ... and so on
1356998401000;4661;3986;1641;3638;2729; ...
...
```
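To make that layout concrete, here is a tiny self-contained sketch that writes and re-reads a file of the same shape (only 3 metric columns here instead of 2000; the names are placeholders):

```python
import csv

# Write a small file in the same layout: semicolon-separated, an epoch-ms
# DATE column, then one column per metric.
with open("sample.csv", "w", newline="") as f:
    w = csv.writer(f, delimiter=";")
    w.writerow(["DATE", "metric0", "metric1", "metric2"])
    w.writerow([1356998400000, 1194, 3765, 3727])
    w.writerow([1356998401000, 4661, 3986, 1641])

# Read it back and count the metric columns (everything except DATE).
with open("sample.csv", newline="") as f:
    r = csv.reader(f, delimiter=";")
    header = next(r)
    print(len(header) - 1, "metrics per row")  # 3 metrics per row
```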
The metrics.csv file is empty after running the script. I repeated this process with all possible combinations of Chronix and importer versions, unfortunately with no luck. Thanks for any help!