ankiit / logstash

Automatically exported from code.google.com/p/logstash

Elasticsearch logs error indicating Mapping not found for @timestamp #27

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
I get this error when using the river to index and then performing any search through logstash-web. One example:
@timestamp:[2011-01-27 TO 2011-01-29] loggerclass:"org.apache.hadoop.hdfs.server.datanode.DataNode"

[2011-01-27 18:11:31,353][DEBUG][action.search.type       ] [Mahkizmo] [_river][0], node[YKdyz3QiSdyjZb93oZbl7A], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@2ff530fc]
org.elasticsearch.search.SearchParseException: [_river][0]: from[0],size[50]: Parse Failure [Failed to parse source [{"size":50,"from":0,"sort":[{"@timestamp":"desc"}],"facets":{"by_hour":{"histogram":{"time_interval":"1h","field":"@timestamp"}}},"query":{"query_string":{"default_operator":"AND","query":"timestamp:[2011-01-27 TO 2011-01-29] loggerclass:\"org.apache.hadoop.hdfs.server.datanode.DataNode\""}}}]]
        at org.elasticsearch.search.SearchService.parseSource(SearchService.java:420)
        at org.elasticsearch.search.SearchService.createContext(SearchService.java:335)
        at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:169)
        at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:131)
        at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:76)
        at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:193)
        at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.access$000(TransportSearchTypeAction.java:77)
        at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.run(TransportSearchTypeAction.java:152)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.search.SearchParseException: [_river][0]: from[0],size[50]: Parse Failure [No mapping found for [@timestamp]]
        at org.elasticsearch.search.sort.SortParseElement.addSortField(SortParseElement.java:139)
        at org.elasticsearch.search.sort.SortParseElement.addCompoundSortField(SortParseElement.java:96)
        at org.elasticsearch.search.sort.SortParseElement.parse(SortParseElement.java:68)
        at org.elasticsearch.search.SearchService.parseSource(SearchService.java:407)
        ... 10 more

Original issue reported on code.google.com by deinspanjer on 28 Jan 2011 at 2:15

GoogleCodeExporter commented 9 years ago
It looks like elasticsearch isn't grokking @timestamp's format.

http://www.elasticsearch.com/docs/elasticsearch/mapping/date_format/

I think we need to set the format to "basic_ordinal_date_time" in the 
elasticsearch output.
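
For example, a minimal sketch of setting that format by hand via the put 
mapping API (the index name "logstash" and type "logs" here are assumptions; 
substitute whatever the elasticsearch output actually writes to):

    # Assumed index/type names; forces an explicit date format on @timestamp
    curl -XPUT 'http://localhost:9200/logstash/logs/_mapping' -d '{
      "logs" : {
        "properties" : {
          "@timestamp" : { "type" : "date", "format" : "basic_ordinal_date_time" }
        }
      }
    }'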

Original comment by petefbsd on 28 Jan 2011 at 3:04

GoogleCodeExporter commented 9 years ago
What version of elasticsearch?

Most of the time I see errors like this (no mapping found for @timestamp), it 
is because the index is actually empty and has no data yet. The failure comes 
from attempting to sort on '@timestamp': since there's no data in the index, 
there's no field named '@timestamp', so we cannot sort on such madness, and 
that produces the error.
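
A quick way to check (a sketch; "_river" is just the index name from the 
trace above, substitute your own):

    # Dump the index mapping to see which fields actually exist
    curl -XGET 'http://localhost:9200/_river/_mapping?pretty=true'

If '@timestamp' doesn't show up in the mapping, the index hasn't indexed any 
documents carrying that field yet, and sorting on it will fail like this.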

Original comment by jls.semi...@gmail.com on 31 Jan 2011 at 8:17

GoogleCodeExporter commented 9 years ago
Running with logstash 0.2.20110112115018
Definitely have data in the index.  The searches return data, but the histogram 
looks a bit wonky.

Original comment by deinspanjer on 31 Jan 2011 at 9:08

GoogleCodeExporter commented 9 years ago
I previously had this working with elasticsearch 0.12, but after upgrading to 
0.14 I get the same error as above.

Original comment by jacky11...@gmail.com on 8 Feb 2011 at 3:47

GoogleCodeExporter commented 9 years ago
Hi, it works if I use a two-level index: elasticsearch://localhost:9200/logs/all

Original comment by jacky11...@gmail.com on 8 Feb 2011 at 4:14

GoogleCodeExporter commented 9 years ago
Talked with kimchy (of elasticsearch). This is caused by 0.14.x having an 
automatic field type detection for IP addresses, which causes this problem.

0.15.x will have this disabled.

In the meantime, folks should use 0.13.x *or* manually configure their 
indexes when this happens to set the offending fields to string, not ip 
address. A sketch of that workaround is below.
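
A rough sketch of the manual fix (the index name "logstash", type "logs", and 
the offending field "source_host" are all assumptions; use whatever field 
elasticsearch auto-detected as an IP). The mapping has to be in place before 
the conflicting documents are indexed, since an existing field's type can't 
be changed:

    # Create the index up front with the offending field pinned to string
    curl -XPUT 'http://localhost:9200/logstash' -d '{
      "mappings" : {
        "logs" : {
          "properties" : {
            "source_host" : { "type" : "string" }
          }
        }
      }
    }'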

Marking fixed since there is a current workaround and a future solution coming 
soon.

Original comment by jls.semi...@gmail.com on 10 Feb 2011 at 7:29

GoogleCodeExporter commented 9 years ago
Just for info, I see this problem with elasticsearch 0.13.1, and also 0.15.0.  
Haven't tried going back to 0.13.0 or 0.12 yet (logstash v 0.2.20110206003556).

Original comment by chrisma...@gmail.com on 27 Feb 2011 at 9:39

GoogleCodeExporter commented 9 years ago
I'm having this problem on the current stable version, 0.15.2.

Original comment by wiley.cr...@gmail.com on 18 Apr 2011 at 8:41

GoogleCodeExporter commented 9 years ago
elasticsearch 0.16 also has it; I'm hitting it while trying out the logstash first-steps example...

Original comment by maximili...@gmail.com on 9 Jun 2011 at 3:02

GoogleCodeExporter commented 9 years ago
'ignore_unmapped' should be set to true in the sort clause, e.g.:

"sort" : [
    { "rating" : { "order" : "desc", "ignore_unmapped" : true } },
    { "price" : { "order" : "asc", "missing" : "_last", "ignore_unmapped" : true } }
]
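
Applied to this issue, a sketch of the same idea for the '@timestamp' sort 
(the index name "logstash" is an assumption):

    # Sort on @timestamp but don't fail if the field isn't mapped yet
    curl -XPOST 'http://localhost:9200/logstash/_search' -d '{
      "sort" : [
        { "@timestamp" : { "order" : "desc", "ignore_unmapped" : true } }
      ],
      "query" : { "match_all" : {} }
    }'

With ignore_unmapped set, elasticsearch skips the sort on an index that has 
no '@timestamp' mapping instead of throwing the parse exception above.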

Original comment by a1yadu on 11 Jun 2013 at 6:42