Open ly4she opened 4 years ago
cc @delvedor would you be able to help?
@ly4she what version of elastic are you using?
With this config:
```shell
docker run -d -p 9200:9200 --rm --name elastic docker.elastic.co/elasticsearch/elasticsearch:6.2.3
```
You will get:
```json
{
  "took": 58,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1.0,
    "hits": [
      {
        "_index": "pinotest",
        "_type": "log",
        "_id": "nuvHrW8B46OCmJcu1eNa",
        "_score": 1.0,
        "_source": {
          "pino": {
            "a": {
              "b": 11
            }
          },
          "ecs": {
            "version": "1.0.0"
          },
          "@timestamp": "--cut--",
          "message": "hello world",
          "log": {
            "level": 30
          },
          "host": {
            "hostname": "EOMM"
          },
          "process": {
            "pid": 14960
          }
        }
      }
    ]
  }
}
```
@Eomm Elasticsearch-7.5.1
You should check the mapping of the index in your Elasticsearch instance:
```shell
curl --location --request GET 'http://localhost:9200/pino/_mapping/log'
```
The problem here is that there is a mismatch between the mapping created the first time and the new log.
I suggest deleting the index and retrying, for testing purposes of course.
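To reset the index programmatically instead of via curl, a sketch with the official `@elastic/elasticsearch` client might look like this (the index name `pinotest` is taken from the search response above; adjust it to yours, and note this assumes a local instance on the default port):

```javascript
// Hedged sketch: drop the test index so the next log recreates the
// mapping from scratch. Requires a running Elasticsearch instance.
const { Client } = require('@elastic/elasticsearch');

const client = new Client({ node: 'http://localhost:9200' });

async function resetIndex() {
  // Deletes the index and its mapping; the next indexed document
  // will create a fresh mapping inferred from its fields.
  await client.indices.delete({ index: 'pinotest' });
}

resetIndex().catch(console.error);
```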
When I pass nested objects as payload to Elasticsearch, I get ResponseError: mapper_parsing_exception.
Code to reproduce:
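A minimal sketch of the setup described, assuming the `pino-elasticsearch` transport with its `index` and `node` options, and a conflicting payload shape as the trigger (both assumptions, since the original snippet and index state are not shown):

```javascript
// Hedged reconstruction: log nested objects through pino into
// Elasticsearch via the pino-elasticsearch stream.
const pino = require('pino');
const pinoElastic = require('pino-elasticsearch');

const streamToElastic = pinoElastic({
  index: 'pinotest',
  node: 'http://localhost:9200'
});

const logger = pino({ level: 'info' }, streamToElastic);

// The first log creates the index mapping, inferring "pino.a.b"
// as a number field.
logger.info({ pino: { a: { b: 11 } } }, 'hello world');

// A later log whose shape conflicts with that inferred mapping
// (assumed here for illustration) makes Elasticsearch reject the
// document with mapper_parsing_exception.
logger.info({ pino: { a: 'not an object' } }, 'hello again');
```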
Error I've got:
I think parsing such objects is the transport's responsibility, so I don't want to stringify manually every time. And I think it would be great to have the possibility to pass some option like
pretty: true
for prettifying the output, maybe via JSON.stringify(payload, undefined, 2)
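The suggestion above can be sketched in plain Node; note that the `pretty` option and the `serialize` helper are the reporter's proposal and my illustration respectively, not an existing API of the transport:

```javascript
// Hedged sketch of the proposed "pretty" option: serialize the
// payload before handing it to the transport, indented or compact.
const payload = { pino: { a: { b: 11 } } };

function serialize(payload, pretty) {
  // JSON.stringify's third argument controls indentation;
  // 2 spaces when pretty, single-line output otherwise.
  return pretty
    ? JSON.stringify(payload, undefined, 2)
    : JSON.stringify(payload);
}

console.log(serialize(payload, true));
```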