oliver006 / elasticsearch-test-data

Generate and upload test data to Elasticsearch for performance and load testing
MIT License

Elasticsearch For Beginners: Generate and Upload Randomized Test Data

Because everybody loves test data.

Ok, so what is this thing doing?

es_test_data.py lets you generate and upload randomized test data to your ES cluster so you can start running queries, see what performance is like, and verify your cluster is able to handle the load.

It lets you easily configure what the test documents look like, which data types they contain, and what the fields are named.

Cool, how do I use this?

Run Python script

Let's assume you have an Elasticsearch cluster running.

The script is written in Python and uses Tornado. Run pip install tornado if you don't have Tornado installed already.

It's as simple as this:

$ python es_test_data.py --es_url=http://localhost:9200
[I 150604 15:43:19 es_test_data:42] Trying to create index http://localhost:9200/test_data
[I 150604 15:43:19 es_test_data:47] Guess the index exists already
[I 150604 15:43:19 es_test_data:184] Generating 10000 docs, upload batch size is 1000
[I 150604 15:43:19 es_test_data:62] Upload: OK - upload took:    25ms, total docs uploaded:    1000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    25ms, total docs uploaded:    2000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    19ms, total docs uploaded:    3000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    18ms, total docs uploaded:    4000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    27ms, total docs uploaded:    5000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    19ms, total docs uploaded:    6000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    15ms, total docs uploaded:    7000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    24ms, total docs uploaded:    8000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    32ms, total docs uploaded:    9000
[I 150604 15:43:20 es_test_data:62] Upload: OK - upload took:    31ms, total docs uploaded:   10000
[I 150604 15:43:20 es_test_data:216] Done - total docs uploaded: 10000, took 1 seconds
[I 150604 15:43:20 es_test_data:217] Bulk upload average:           23 ms
[I 150604 15:43:20 es_test_data:218] Bulk upload median:            24 ms
[I 150604 15:43:20 es_test_data:219] Bulk upload 95th percentile:   31 ms
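The summary statistics at the end of the run can be reproduced from the per-batch upload times. Here is a minimal sketch using the batch timings from the log above; the script's exact rounding and percentile method may differ, so the numbers won't match the log digit for digit:

```python
import math
import statistics

# Per-batch upload times in milliseconds, taken from the log above.
batch_times_ms = [25, 25, 19, 18, 27, 19, 15, 24, 32, 31]

average_ms = sum(batch_times_ms) / len(batch_times_ms)
median_ms = statistics.median(batch_times_ms)

# Nearest-rank 95th percentile; the script may interpolate instead.
ordered = sorted(batch_times_ms)
p95_ms = ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

print(average_ms, median_ms, p95_ms)
```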

Without any command line options, it will generate and upload 10,000 documents (as in the log above) of the format

{
    "name":<<str>>,
    "age":<<int>>,
    "last_updated":<<ts>>
}

into an index called test_data on an Elasticsearch cluster at http://localhost:9200.
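To see what such a document might look like without touching a cluster, here is a minimal sketch of the default format. The value ranges (name length, age bounds, millisecond timestamp) are illustrative assumptions, not the script's exact choices:

```python
import random
import string
import time

def random_doc():
    """Build one random document in the default name:str,age:int,last_updated:ts shape."""
    name = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
    return {
        "name": name,
        "age": random.randint(0, 100),          # assumed range, for illustration
        "last_updated": int(time.time() * 1000)  # epoch milliseconds
    }

doc = random_doc()
print(doc)
```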

Docker and Docker Compose

Requires Docker for running the app and Docker Compose for running a single Elasticsearch cluster with two nodes (es1 and es2).

  1. Raise the kernel's vm.max_map_count setting (the maximum number of memory map areas a process may use) to 262144, otherwise the Elasticsearch instances will crash; see the Elasticsearch docs
    $ sudo sysctl -w vm.max_map_count=262144
  2. Clone this repository
    $ git clone https://github.com/oliver006/elasticsearch-test-data.git
    $ cd elasticsearch-test-data
  3. Run the ElasticSearch stack
    $ docker-compose up -d
  4. Run the app and inject random data to the ES stack
    $ docker run --rm -it --network host oliver006/es-test-data  \
        --es_url=http://localhost:9200  \
        --batch_size=10000  \
        --username=elastic \
        --password="esbackup-password"
  5. Cleanup
    $ docker-compose down --volumes
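Under the hood, each batch is sent to Elasticsearch's _bulk endpoint as newline-delimited JSON: one action line, then one document line, per doc. A minimal sketch of building that request body (the HTTP call itself is omitted, and the index name assumes the default test_data):

```python
import json

def bulk_body(docs, index="test_data"):
    """Build the NDJSON body for a POST to /_bulk: action line + doc line per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    # The _bulk endpoint requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

body = bulk_body([{"name": "alice", "age": 30}])
print(body)
```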

Not bad but what can I configure?

python es_test_data.py --help gives you the full set of command line options; the most important ones used throughout this README are --es_url, --format, --batch_size, --username, and --password.

What about the document format?

Glad you're asking; let's get to the doc format.

The doc format is configured via --format=<<FORMAT>> with the default being name:str,age:int,last_updated:ts.

The general syntax looks like this:

<<field_name>>:<<field_type>>,<<field_name>>:<<field_type>>, ...

For every document, es_test_data.py will generate random values for each of the fields configured.
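A minimal reading of that syntax in Python: split the format string into (field_name, field_type) pairs. Some field types may take extra colon-separated parameters (e.g. value ranges); this sketch simply leaves anything after the first colon attached to the type:

```python
def parse_format(fmt):
    """Split a --format value into (field_name, field_type) pairs."""
    fields = []
    for part in fmt.split(","):
        # partition splits at the first colon only, so a hypothetical
        # parameterized type like int:1:90 stays attached to the type.
        name, _, ftype = part.partition(":")
        fields.append((name, ftype))
    return fields

print(parse_format("name:str,age:int,last_updated:ts"))
# → [('name', 'str'), ('age', 'int'), ('last_updated', 'ts')]
```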

Currently supported field types include str, int, and ts (a timestamp), as used in the default format; run python es_test_data.py --help for the full list.

Todo

All suggestions, comments, ideas, pull requests are welcome!