opensearch-project / logstash-output-opensearch

A Logstash plugin that sends event data to an OpenSearch cluster and stores it as an index.
https://opensearch.org/docs/latest/clients/logstash/index/
Apache License 2.0

[BUG] event field and strict mapping #212

Closed: falcocoris closed this issue 1 year ago

falcocoris commented 1 year ago

EDIT

Further tests have shown that the source of this field is not the output plugin. I'll close this.

Describe the bug The output creates a nested "event" field containing an "original" subfield that holds the JSON payload parsed by Logstash. This is a problem when the OpenSearch index uses a strict mapping, because the cluster rejects the document insertion since the field is not in the mapping.

To Reproduce Steps to reproduce the behavior:

  1. Create an index with a strict mapping that has no "event" field
  2. Use Logstash to process an event and output it to that index (see the example mapping below)
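
For illustration, a minimal strict mapping that triggers the rejection could look like the request below; the index name and the two properties are taken from the config and document example further down, everything else is an assumption.

PUT myindex
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "upload_state": { "type": "integer" },
      "timestamp":    { "type": "date" }
    }
  }
}

With "dynamic": "strict", indexing a document that carries an unexpected top-level field such as "event" fails, typically with a strict_dynamic_mapping_exception.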

Expected behavior This field could be interesting, but I don't understand why it is enabled by default. In any case, we should be able to disable it.
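
As an aside on disabling it: assuming the field is added by Logstash's ECS compatibility mode (enabled by default since Logstash 8) rather than by this output plugin, which would be consistent with the edit above, a minimal sketch is to turn that mode off in logstash.yml:

pipeline.ecs_compatibility: disabled

The same setting can also be applied per pipeline in pipelines.yml, and many plugins accept an ecs_compatibility option of their own, so it does not have to be disabled globally.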

Plugins: input rabbitmq, output opensearch

Host/Environment:

Additional context There are known ways to remove the field with filters, but none of them work in this situation. The only thing we managed to do is remove the "original" subfield; we found no way to remove the nested "event" field itself (see the filter sketch after the config below). The config:

input {
  rabbitmq {
    ssl => true
    host => "mydomain.com:5672"
    user => "myuser"
    password => "mypwd"
    passive => true
    vhost => "myvhost"
    queue => "myqueue"
    durable => true
  }
}

filter {
  mutate {
    remove_field => [ "blabla", "bla", "blou", "blip" ]
  }
}

output {
  opensearch {
    hosts => ["https://mydomain.com:443"]
    index => "myindex"

    document_id => "%{myfield}"
    doc_as_upsert => true
    action => "update"

  }
}
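
For reference, a minimal sketch of the filter approach mentioned under Additional context, using a mutate filter with Logstash's nested field reference syntax; per the report above, removing "[event][original]" works while removing the whole "event" object does not:

filter {
  mutate {
    # dropping only the "original" subfield works
    remove_field => [ "[event][original]" ]
    # dropping the whole nested object did not help in this situation
    # remove_field => [ "event" ]
  }
}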

Document example when NOT using a strict mapping:

(...)
"event": {
            "original": """{"test":0.0,"blap":4192.0}"""
          },
"upload_state": 0,
"timestamp": "2023-05-05T10:00:52.000Z",
(...)