
CSV River Plugin for ElasticSearch

Important notice

As of ElasticSearch version 2 and above, rivers are no longer supported. We have therefore discontinued active development on this repository and will keep it only for important fixes.

New ElasticSearch CSV application

We've started development on a new standalone version of the CSV uploader: https://github.com/AgileWorksOrg/elasticsearch-csv

The CSV River plugin indexes CSV files from a folder.

In order to install the plugin, simply run:

    bin/plugin -install river-csv -url https://github.com/AgileWorksOrg/elasticsearch-river-csv/releases/download/2.2.1/elasticsearch-river-csv-2.2.1.zip

If that doesn't work, clone the git repository and build the plugin manually.
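A manual build could look roughly like this (a sketch only: it assumes a Maven build whose packaged zip ends up under target/releases with the name shown; adjust the path and version to the actual build output):

    git clone https://github.com/AgileWorksOrg/elasticsearch-river-csv.git
    cd elasticsearch-river-csv
    mvn clean package
    # then, from the ElasticSearch home directory, install the locally built zip
    # (the target/releases path and artifact name are assumptions about the build output)
    bin/plugin -install river-csv -url file:///path/to/elasticsearch-river-csv/target/releases/elasticsearch-river-csv-2.2.1.zip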

| CSV Plugin | ElasticSearch   |
|------------|-----------------|
| master     | 1.7.x -> master |
| 2.2.1      | 1.7.x -> master |
| 2.2.0      | 1.5.x -> master |
| 2.1.2      | 1.4.x -> master |
| 2.0.2      | 1.0.x -> 1.2.x  |
| 2.0.1      | 1.0.x -> 1.2.x  |
| 2.0.0      | 1.0.0           |
| 1.0.1      | 0.19.x          |
| 1.0.0      | 0.19.x          |
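On ElasticSearch 1.x you can check which plugin version is actually installed on a node, and match it against the table above, with the cat plugins API:

    curl 'localhost:9200/_cat/plugins?v'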

The CSV river imports data from the CSV files and indexes it.

Changelog

- 2.2.2-SNAPSHOT
- 2.2.1
- 2.2.0
- 2.1.2
- 2.1.1
- 2.1.0
- 2.0.2
- 2.0.1
- 2.0.0

The CSV river can be created with one of the following requests:

Minimal curl

curl -XPUT localhost:9200/_river/my_csv_river/_meta -d '
{
    "type" : "csv",
    "csv_file" : {
        "folder" : "/tmp",
        "first_line_is_header":"true"
    }
}'
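With the minimal configuration above, every file in /tmp that matches the default filename pattern is picked up and the first line of each file is used as the field names. A hypothetical /tmp/people.csv (name and contents purely illustrative) could look like:

    name,surname,city
    John,Doe,Prague
    Jane,Roe,Brno

Each following row is then indexed as one document with the fields name, surname and city.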

Full request

curl -XPUT localhost:9200/_river/my_csv_river/_meta -d '
{
    "type" : "csv",
    "csv_file" : {
        "folder" : "/tmp",
        "filename_pattern" : ".*\\.csv$",
        "poll":"5m",
        "fields" : [
            "column1",
            "column2",
            "column3",
            "column4"
        ],
        "first_line_is_header" : "false",
        "field_separator" : ",",
        "escape_character" : ";",
        "quote_character" : "'",
        "field_id" : "id",
        "field_id_include" : "false",
        "field_timestamp" : "imported_at",
        "concurrent_requests" : "1",
        "charset" : "UTF-8",
        "script_before_all": "/path/to/before_all.sh",
        "script_after_all": "/path/to/after_all.sh",
        "script_before_file": "/path/to/before_file.sh",
        "script_after_file": "/path/to/after_file.sh"
    },
    "index" : {
        "index" : "my_csv_data",
        "type" : "csv_type",
        "bulk_size" : 100,
        "bulk_threshold" : 10
    }
}'
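The river definition is stored as an ordinary document in the _river index, so the standard document APIs can be used to inspect or remove it (shown here with the river name from the examples above):

    # show the stored river configuration
    curl -XGET 'localhost:9200/_river/my_csv_river/_meta'

    # stop and remove the river
    curl -XDELETE 'localhost:9200/_river/my_csv_river'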

Examples of what the scripts can look like:

before_all.sh:

    #!/bin/sh
    echo "greetings from shell before all, will process $*"

before_file.sh:

    #!/bin/bash
    echo "greetings from shell before file $1"

after_file.sh:

    #!/bin/bash
    echo "greetings from shell after file $1"

after_all.sh:

    #!/bin/bash
    echo "greetings from shell after all, processed $*"

Optional parameters:

fields - empty by default; it MUST be set explicitly, or first_line_is_header must be set to true. The remaining options and their defaults are listed below; a sketch that overrides some of them follows the table.

| Name                  | Default value                  |
|-----------------------|--------------------------------|
| first_line_is_header  | false                          |
| filename_pattern      | .*\\.csv$                      |
| poll                  | 60 minutes                     |
| field_separator       | , (for a tab separator use \t) |
| charset               | UTF-8                          |
| escape_character      | \                              |
| quote_character       | "                              |
| bulk_size             | 100                            |
| bulk_threshold        | 10                             |
| concurrent_requests   | 1                              |
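For instance, a hypothetical river for tab-separated files without a header row would only override the relevant defaults (the river name, folder and field names are illustrative):

curl -XPUT localhost:9200/_river/my_tsv_river/_meta -d '
{
    "type" : "csv",
    "csv_file" : {
        "folder" : "/tmp/tsv",
        "filename_pattern" : ".*\\.tsv$",
        "field_separator" : "\t",
        "first_line_is_header" : "false",
        "fields" : [
            "column1",
            "column2"
        ]
    }
}'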

Charset

The default charset is "UTF-8". If you need a different one, set the charset option to another standard charset name (for example US-ASCII, ISO-8859-1, UTF-16BE, UTF-16LE or UTF-16).
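As a sketch, a river reading ISO-8859-1 encoded files would just add the charset option to the minimal configuration (the river name and folder are illustrative):

curl -XPUT localhost:9200/_river/my_latin1_river/_meta -d '
{
    "type" : "csv",
    "csv_file" : {
        "folder" : "/tmp",
        "first_line_is_header" : "true",
        "charset" : "ISO-8859-1"
    }
}'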

License

This software is licensed under the Apache 2 license, quoted below.

Copyright 2012-2013 Martin Bednar, Vitek Tajzich

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.