@jrick1977 right now the aggs and the query are executed separately. We could investigate what the cost would be to incorporate it into parent/child queries, but it's not high on our priority list at the moment. It'd be best to open a dedicated issue for it so there will be more open discussion about it (and it will also help us keep track of user-requested features ;))
@all if you have any special feature requests for aggs, please open dedicated issues for them as well.
thx!
@revendless-team the `terms` agg works with a single valued key, so you can't have an object representing it (though it might be an interesting feature to look at). The aggs you're trying are not supported (we don't support `fields` settings)... you should really check out the docs to see what's supported: http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/search-aggregations.html

What you could do for now is combine all three fields into one term using scripts:
```json
{
  "aggs": {
    "level0": {
      "terms": {
        "script": "'[' + doc['address.district.id'].value + ']' + '[' + doc['address.district.title'].value + ']' + '[' + doc['address.district.url'].value + ']'"
      },
      "aggs": {
        "level1": {
          "terms": {
            "script": "'[' + doc['address.city.id'].value + ']' + '[' + doc['address.city.title'].value + ']' + '[' + doc['address.city.url'].value + ']'"
          },
          "aggs": {
            "level2": {
              "terms": {
                "script": "'[' + doc['address.country.id'].value + ']' + '[' + doc['address.country.title'].value + ']' + '[' + doc['address.country.url'].value + ']'"
              }
            }
          }
        }
      }
    }
  }
}
```
so the key represents the whole term "record", like `[id][title][url]`... then your client can process the terms. As a response you should get something like:
```json
{
  "aggregations": {
    "level0": {
      "buckets": [
        {
          "key": "[3][Brooklyn][/brooklyn]",
          "doc_count": 1,
          "level1": {
            "buckets": [
              {
                "key": "[2][New York][/brooklyn-new-york]",
                "doc_count": 1,
                "level2": {
                  "buckets": [
                    {
                      "key": "[1][USA][/brooklyn-new-york-usa]",
                      "doc_count": 1
                    }
                  ]
                }
              }
            ]
          }
        }
      ]
    }
  }
}
```
This looks absolutely fantastic! Very excited for 1.0.0. Are there any plans to include quartiles in the stats or extended stats aggregations?
@bobbyrenwick yes, we do have plans to support metric quantiles, though not as part of the stats/extended stats aggs but as a separate standalone agg
@uboness that's great news!
Hi all, I recently tried the below query in Elasticsearch Beta2 and couldn't get it to work. Does anyone know what's wrong with the code? Or is ordering aggregations by a sum not available yet in Beta2? Thanks!
Code that doesn't work:

```json
{
  "size": 0,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "and": {
          "filters": [
            { "range": { "date_time": { "from": "2013-12-12T00:00:00Z", "to": "2013-12-12T23:59:59Z" } } }
          ]
        }
      }
    }
  },
  "aggs": {
    "XXXXXXXXX": {
      "terms": {
        "field": "XXXXXXXXX.untouched",
        "size": 5,
        "order": { "YYYYYY": "desc" }
      },
      "aggs": {
        "YYYYYY": { "sum": { "field": "impression" } }
      }
    }
  }
}
```

This, curiously, works fine (same query, but ordered by `_count`):

```json
{
  "size": 0,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "and": {
          "filters": [
            { "range": { "date_time": { "from": "2013-12-12T00:00:00Z", "to": "2013-12-12T23:59:59Z" } } }
          ]
        }
      }
    }
  },
  "aggs": {
    "XXXXXXXXX": {
      "terms": {
        "field": "XXXXXXXXX.untouched",
        "size": 5,
        "order": { "_count": "desc" }
      },
      "aggs": {
        "YYYYYY": { "sum": { "field": "impression" } }
      }
    }
  }
}
```
@TrevRUNDSP yeah... we had a bug when sorting by sub aggs... it's fixed in 1.0.0.RC1
great, thanks!
When will v1.0.0 be out for production?
looks great
@rbnacharya given the RC2 release yesterday and the statement in the blog post, the RC is pretty much production ready. We don't intend to change it anymore unless bugs are reported. The GA release will follow pretty soon as well, in the next weeks.
will be waiting for it.
@revendless-team @uboness +1 for multi value keys in aggregations.
@uboness the hierarchical aggregation feature looks very useful, yet our documents can be associated with more than one node in the tree, for example:
{
"drugName" : "My drug",
"indicationTree" : [
{
"indicationLevel1" : "Injury",
"indicationLevel2" : "Healing",
"indicationLevel3" : "Wound healing"
},
{
"indicationLevel1" : "Injury",
"indicationLevel2" : "Fractures",
"indicationLevel3" : "Limb fracture"
}
]
}
I tried this out but it is not working. The six nodes of the two paths are aggregated as if they were all part of the same path. Is there any way to get the hierarchical aggregation to work correctly in this case?
please see #5324
Can I use some histogram for:

```json
{
  "aggs": {
    "agg1": {
      "range": {
        "field": "f1",
        "ranges": [
          { "from": 0, "to": 4 },
          { "from": 0, "to": 8 },
          { "from": 0, "to": 12 },
          { "from": 0 }
        ]
      }
    }
  }
}
```

I actually want to categorize employee tenure as 0-1 yr, 0-2 yr, 0-5 yr, 0-10 yr.
You can use the `filters` aggregation.
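A minimal sketch of that suggestion, assuming a numeric `tenure_years` field (field and bucket names here are illustrative):

```json
{
  "aggs": {
    "tenure": {
      "filters": {
        "filters": {
          "0-1y": { "range": { "tenure_years": { "gte": 0, "lt": 1 } } },
          "0-2y": { "range": { "tenure_years": { "gte": 0, "lt": 2 } } },
          "0-5y": { "range": { "tenure_years": { "gte": 0, "lt": 5 } } },
          "0-10y": { "range": { "tenure_years": { "gte": 0, "lt": 10 } } }
        }
      }
    }
  }
}
```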
That works in v1.4. I am using version 1.2.2 of Elasticsearch.
Can I use a script in a histogram? Where I can do something like `script: "time() - employee_date_of_joining"` with `interval: 1`?
Can you please provide the code for the above Elasticsearch queries using NEST in C#? Or any link showing how to form dynamic queries with multiple filters and terms.
In the below query, if "in_stock_products" returns a "doc_count" of 0, "avg_price" returns a value of "null".
How can one prevent a null value aggregation?
Query:

```json
POST _search
{
  "aggs": {
    "in_stock_products": {
      "filter": {
        "range": { "stock": { "gt": 0 } }
      },
      "aggs": {
        "avg_price": {
          "avg": { "field": "price" }
        }
      }
    }
  }
}
```

Result:

```json
"in_stock_products": {
  "doc_count": 0,
  "avg_price": { "value": null }
}
```
Thanks.
Great article! Question: how would you validate that entered parameters are valid? For example, say I have a MinAmount and a MaxAmount field. Is there a way for Elasticsearch to check that MinAmount <= MaxAmount? I don't know if ES has a built-in feature for something like this.
You should ask those sorts of questions on the mailing list - https://groups.google.com/forum/#!forum/elasticsearch
How would you go about building an aggregation for a time-shifted value? E.g., the current number of status 500 errors is 495 at 17:07, and I want to find what that number was at 17:07 yesterday.
Our use case is to design product filters on a search page (2nd page) like Amazon or any other e-commerce shop. Please see the screenshot. Products should be filtered based on these scenarios: inner filters like TV, Fridge should be combined with an "OR" condition, and outer filters like Brand, Category with "AND". Do we need to write nested aggregations? Please share a sample query in JSON. Your help is appreciated.
The query should be like: Select (Category = "TV" OR "Fridge") AND (Brands = LG OR Samsung)
@Elastic Team/Anyone - please share sample queries or a URL you have already shared for a similar discussion.
- Category: TV, Fridge
- SubCategory: LCD, LED
- Brands: LG, Samsung
_NOTE: at this point we're focusing more on the functional design aspect rather than performance. Once we get this nailed down, we'll see how far we can push and optimize._
## Background
The new aggregations module is due in the Elasticsearch 1.0 release, and aims to serve as the next generation replacement for the functionality we currently refer to as "faceting". Facets currently provide a great way to aggregate data within a document set context. This context is defined by the executed query in combination with the different levels of filters that are defined (filtered queries, top level filters, and facet level filters). Although powerful as is, the current facets implementation was not designed from the ground up to support complex aggregations, and is thus limited. The main problem with the current implementation stems from the fact that facets are hard coded to work on one level, and that the different types of facets (which account for the different types of aggregations we support) cannot be mixed and matched dynamically at query time. It is not possible to compose facets out of other facets, and the user is effectively bound to the top level aggregations that we defined and nothing more than that.
The goal with the new aggregations module is to break the barriers the current facet implementation put in place. The new name ("Aggregations") also indicates the intention here - a generic yet extremely powerful framework for defining aggregations - any type of aggregation. The idea is to have each aggregation defined as a "standalone" aggregation that can perform its task within any context (as a top level aggregation, or embedded within other aggregations that can potentially narrow its computation scope). We would like to take all the knowledge and experience we've gained over the years working with facets and apply it when building the new framework.
Before we dive into the meaty part, it's important to set some key concepts and terminology first.
## Key Concepts & Terminology
An aggregation is a unit of aggregated information. For example, a `terms` aggregation holds a list of objects (buckets), each holding information about a unique term, while an `avg` aggregation just holds the avg number aggregated over all values of a specific field (or fields) within a well-defined set of documents. There are two types of aggregators/aggregations: *bucket* aggregators, which build sets of documents according to some criteria, and *calc* aggregators, which compute values over the documents in their context. Both are described in detail below.
## Structuring Aggregations
The following snippet captures the basic structure of aggregations:
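(Angle-bracketed names are placeholders; a named aggregation may optionally embed sub-aggregations and may be followed by sibling aggregations.)

```json
"aggregations" : {
    "<aggregation_name>" : {
        "<aggregation_type>" : {
            <aggregation_body>
        },
        "aggregations" : { "<sub_aggregation_name>" : { ... } }
    },
    "<aggregation_name_2>" : { ... }
}
```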
The `aggregations` object (can also be `aggs` for short) in the JSON holds the aggregations you'd like to be computed. Each aggregation is associated with a logical name that the user defines (e.g. if the aggregation computes the average price, it'll make sense to name it `avg_price`). These logical names also uniquely identify the aggregations you define (you'll use the same names/keys to identify the aggregations in the response). Each aggregation has a specific type (`<aggregation_type>` in the above snippet), which is typically the first key within the named aggregation body. Each type of aggregation defines its own body, depending on the nature of the aggregation (e.g. the `avg` aggregation will define the field on which the avg will be calculated). At the same level as the aggregation type definition, one can optionally define a set of additional aggregations, though this only makes sense if the aggregation you defined is a bucketing aggregation. In this scenario, the aggregations you define on the bucketing aggregation level will be computed for all the buckets built by that bucketing aggregation. For example, if you define a set of aggregations under the `range` aggregation, these aggregations will be computed for each of the range buckets that are defined. In this manner, you can mix & match bucketing and calculating aggregations any way you'd like, creating any set of complex hierarchies by embedding aggregations (of type bucket or calc) within other bucket aggregations. To better grasp how they can all work together, please refer to the examples section below.
## Calc Aggregators
This section provides an overview of all calc aggregations available to date.
All the calc aggregators we have today belong to the same family, which we like to call `stats`. All the aggregators in this family are based on values that can either come from the field data or from a script that the user defines. These aggregators operate on the following context: { D, FV }, where D is the set of documents from which the field values are extracted, and FV is the set of values that should be aggregated. The aggregations take all those field values and calculate statistical values. Some only calculate one value - they're called *single value stats aggregators*, while others generate a set of values - these are called *multi-value stats aggregators*.
Here are all the currently available stats aggregators:
### Avg
Single Value Aggregator - will return the average over all field values in the aggregation context, or whatever values the script generates.
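A sketch of an `avg` request (here and in the examples that follow, field names, aggregation names and result values are illustrative):

```json
{
  "aggs": {
    "avg_price": {
      "avg": { "field": "price" }
    }
  }
}
```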
_NOTE: when `field` and `script` are both specified, the script will be called for every value of the field in the context, and within the script you can access this value using the reserved variable `_value`._

Output:
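```json
{
  "aggregations": {
    "avg_price": { "value": 56.3 }
  }
}
```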
### Min
Single Value Aggregator - will return the minimum value among all field values in the aggregation context, or whatever values the script generates.
Output:
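```json
{
  "aggregations": {
    "min_price": { "value": 10.0 }
  }
}
```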
### Max
Single Value Aggregator - will return the maximum value among all field values in the aggregation context, or whatever values the script generates.
Output:
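```json
{
  "aggregations": {
    "max_price": { "value": 200.0 }
  }
}
```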
### Sum
Single Value Aggregator - will return the sum of all field values in the aggregation context, or whatever values the script generates.
Output:
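```json
{
  "aggregations": {
    "sum_of_prices": { "value": 650.0 }
  }
}
```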
### Count
Single Value Aggregator - will return the number of field values in the aggregation context, or whatever values the script generates.
Output:
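```json
{
  "aggregations": {
    "prices_count": { "value": 12 }
  }
}
```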
### Stats
Multi Value Aggregator - will return the following stats, aggregated over the field values in the aggregation context, or whatever values the script generates: `min`, `max`, `sum`, `count` and `avg`. A request sketch:
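```json
{
  "aggs": {
    "price_stats": {
      "stats": { "field": "price" }
    }
  }
}
```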
Output:
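```json
{
  "aggregations": {
    "price_stats": {
      "min": 10.0,
      "max": 200.0,
      "sum": 650.0,
      "count": 12,
      "avg": 54.17
    }
  }
}
```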
### Extended Stats
Multi Value Aggregator - an extended version of the Stats aggregation above, where in addition to its aggregated statistics the following will also be aggregated: `sum_of_squares`, `variance` and `std_deviation`.
Output:
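```json
{
  "aggregations": {
    "grades_stats": {
      "min": 72.0,
      "max": 99.0,
      "sum": 510.0,
      "count": 6,
      "avg": 85.0,
      "sum_of_squares": 43880.0,
      "variance": 88.33,
      "std_deviation": 9.4
    }
  }
}
```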
## Bucket Aggregators
Bucket aggregators don't calculate values over fields like the calc aggregators do; instead, they create buckets of documents. Each bucket defines a criterion (depending on the aggregation type) that determines whether or not a document in the current context "falls" into it. In other words, the buckets effectively define document sets (a.k.a. docsets) on which the sub-aggregations run. There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some define a fixed number of multiple buckets, and others dynamically create the buckets while evaluating the docs.
The following describes the currently supported bucket aggregators.
### Global
Defines a single bucket of all the documents within the search execution context. This context is defined by the indices and the document types you're searching on, but is not influenced by the search query itself.
Note, global aggregators can only be placed as top level aggregators (it makes no sense to embed a global aggregator within another bucket aggregator).
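A sketch - the query narrows the context to shirts, while `all_products` ignores it and aggregates over all documents:

```json
{
  "query": { "match": { "title": "shirt" } },
  "aggs": {
    "all_products": {
      "global": {},
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```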
Output
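```json
{
  "aggregations": {
    "all_products": {
      "doc_count": 100,
      "avg_price": { "value": 56.3 }
    }
  }
}
```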
### Filter
Defines a single bucket of all the documents in the current docset context which match a specified filter. Often this will be used to narrow down the current aggregation context to a specific set of documents.
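A sketch that narrows the aggregation context to in-stock products:

```json
{
  "aggs": {
    "in_stock_products": {
      "filter": {
        "range": { "stock": { "gt": 0 } }
      },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```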
Output
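```json
{
  "aggregations": {
    "in_stock_products": {
      "doc_count": 80,
      "avg_price": { "value": 56.3 }
    }
  }
}
```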
### Missing
A field data based single bucket aggregator that creates a bucket of all documents in the current docset context that are missing a field value. This aggregator will often be used in conjunction with other field data bucket aggregators (such as ranges) to return information for all the documents that could not be placed in any of the other buckets due to missing field data values. (The examples below show how well the range and the missing aggregators play together.)
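A sketch:

```json
{
  "aggs": {
    "products_without_a_price": {
      "missing": { "field": "price" }
    }
  }
}
```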
Output
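```json
{
  "aggregations": {
    "products_without_a_price": {
      "doc_count": 10
    }
  }
}
```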
### Terms
A field data based multi-bucket aggregator where buckets are dynamically built - one per unique value (term) of a specific field. For each such bucket the document count will be aggregated (accounting for all the documents in the current docset context that have that term for the specified field). This aggregator is very similar to how the terms facet works, except that it is an aggregator just like any other aggregator, meaning it can be embedded in other bucket aggregators and it can also hold any type of sub-aggregators itself.
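A sketch:

```json
{
  "aggs": {
    "genders": {
      "terms": { "field": "gender" }
    }
  }
}
```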
Output
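```json
{
  "aggregations": {
    "genders": {
      "terms": [
        { "term": "male", "doc_count": 10 },
        { "term": "female", "doc_count": 10 }
      ]
    }
  }
}
```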
TODO: do we want to get rid of the "terms" level in the response and directly put the terms array under the aggregation name? (we do that in range aggregation)
#### Options
**About order**

One can define the order in which the term buckets will be sorted and therefore returned in the response. There are 4 fixed/pre-defined order types and one more dynamic one:
- Order by term (alphabetically), ascending/descending: `"order" : { "_term" : "asc" }`
- Order by count (numerically), ascending/descending: `"order" : { "_count" : "desc" }`
- Order by a direct embedded calc aggregation, ascending/descending. For a single value calc aggregation: `"order" : { "avg_height" : "desc" }`
- Or, for a multi-value calc aggregation: `"order" : { "height_stats.avg" : "desc" }`
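For example, sorting the gender buckets by their average height (a sketch):

```json
{
  "aggs": {
    "genders": {
      "terms": {
        "field": "gender",
        "order": { "avg_height": "desc" }
      },
      "aggs": {
        "avg_height": { "avg": { "field": "height" } }
      }
    }
  }
}
```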
### Range
A field data bucket aggregation that enables the user to define a field on which the bucketing will work, and a set of ranges. The aggregator will check each field data value in the current docset context against each bucket range, and "bucket" the relevant documents & values if they match. Note that here, not only are we bucketing by document, we're also bucketing by value. For example, let's say we're bucketing on a multi-value field, and document D has values [1, 2, 3, 4, 5] for the field. In addition, there is a range bucket [ x < 4 ]. When evaluating document D, it seems to fall right into this range bucket, but it does so due to field values [1, 2, 3], not because of values [4, 5]. Now… if this bucket also has sub-aggregators associated with it (say, a sum aggregator), the system will make sure to only aggregate values [1, 2, 3], excluding [4, 5] (as 4 and 5, as values, don't really belong to this bucket). This is quite different from the other bucket aggregators we've seen until now, which mainly focused on whether the document falls in the bucket or not. Here we also keep track of the values belonging to each bucket.
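A sketch over an illustrative numeric `age` field:

```json
{
  "aggs": {
    "age_ranges": {
      "range": {
        "field": "age",
        "ranges": [
          { "to": 25 },
          { "from": 25, "to": 50 },
          { "from": 50 }
        ]
      }
    }
  }
}
```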
Output
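```json
{
  "aggregations": {
    "age_ranges": [
      { "to": 25, "doc_count": 12 },
      { "from": 25, "to": 50, "doc_count": 30 },
      { "from": 50, "doc_count": 8 }
    ]
  }
}
```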
Of course, you normally don't want to store the age as a field, but store the birthdate instead. We can use scripts to generate the age:
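```json
{
  "aggs": {
    "age_ranges": {
      "range": {
        "script": "DateTime.now().year - doc['birthdate'].date.year",
        "ranges": [
          { "to": 25 },
          { "from": 25, "to": 50 },
          { "from": 50 }
        ]
      },
      "aggs": {
        "min_age": { "min": {} }
      }
    }
  }
}
```

(The script body above is an illustrative sketch; the exact syntax depends on the configured scripting language.)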
As with all other aggregations, leaving out the `field` from a calc aggregator will fall back on the field by which the range bucketing is done.
Output
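```json
{
  "aggregations": {
    "age_ranges": [
      { "to": 25, "doc_count": 12, "min_age": { "value": 18.0 } },
      { "from": 25, "to": 50, "doc_count": 30, "min_age": { "value": 25.0 } },
      { "from": 50, "doc_count": 8, "min_age": { "value": 51.0 } }
    ]
  }
}
```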
Furthermore, you can also define a value script which will serve as a transformation to the field data value:
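```json
{
  "aggs": {
    "grade_ranges": {
      "range": {
        "field": "grade",
        "script": "_value * 1.2",
        "ranges": [
          { "to": 50 },
          { "from": 50 }
        ]
      },
      "aggs": {
        "min": { "min": {} },
        "min_count": { "min": { "field": "count" } }
      }
    }
  }
}
```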
Output
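```json
{
  "aggregations": {
    "grade_ranges": [
      { "to": 50, "doc_count": 3, "min": { "value": 12.0 }, "min_count": { "value": 1.0 } },
      { "from": 50, "doc_count": 5, "min": { "value": 50.4 }, "min_count": { "value": 3.0 } }
    ]
  }
}
```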
Notice, the `min` aggregation above acts on the actual values that were used for the bucketing (after the transformation by the script), while the `min_count` aggregation acts on the values of the `count` field that fall within their bucket.
### Date Range
A range aggregation dedicated to date values. The main difference between this date range aggregation and the normal range aggregation is that the `from` and `to` values can be expressed in Date Math expressions, and it is also possible to specify a date format by which the `from` and `to` JSON fields will be returned in the response. In the sketch below (the field name and the 10-month boundary are illustrative), we create two range buckets: the first "buckets" all documents dated prior to 10 months ago, and the second "buckets" all documents dated from 10 months ago onward.
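```json
{
  "aggs": {
    "range": {
      "date_range": {
        "field": "date",
        "format": "MM-yyyy",
        "ranges": [
          { "to": "now-10M/M" },
          { "from": "now-10M/M" }
        ]
      }
    }
  }
}
```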
### IP Range
Just like the dedicated date range aggregation, there is also a dedicated range aggregation for IPv4 typed fields, sketched below (the `ip` field and the addresses are illustrative):
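```json
{
  "aggs": {
    "ip_ranges": {
      "ip_range": {
        "field": "ip",
        "ranges": [
          { "to": "10.0.0.5" },
          { "from": "10.0.0.5" }
        ]
      }
    }
  }
}
```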
Output:
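```json
{
  "aggregations": {
    "ip_ranges": [
      { "to": "10.0.0.5", "doc_count": 4 },
      { "from": "10.0.0.5", "doc_count": 6 }
    ]
  }
}
```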
IP ranges can also be defined as CIDR masks:
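```json
{
  "aggs": {
    "ip_ranges": {
      "ip_range": {
        "field": "ip",
        "ranges": [
          { "mask": "10.0.0.0/25" },
          { "mask": "10.0.0.127/25" }
        ]
      }
    }
  }
}
```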
Output:
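```json
{
  "aggregations": {
    "ip_ranges": [
      { "key": "10.0.0.0/25", "doc_count": 7 },
      { "key": "10.0.0.127/25", "doc_count": 13 }
    ]
  }
}
```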
### Histogram
An aggregation that can be applied to numeric fields, and that dynamically builds fixed size (a.k.a. interval) buckets over all the values of the document fields in the docset context. For example, if the documents have a numeric field that holds a price, we can ask this aggregator to dynamically build buckets with interval `5` (in the case of `price` it may represent $5). When the aggregation executes, the price field of every document within the aggregation context will be evaluated and rounded down to its closest bucket - for example, if the price is `32` and the bucket size is `5`, the rounding will yield `30`, and thus the document will "fall" into the bucket that is associated with the key `30`. To make this more formal, here is the rounding function that is used:

```
bucket_key = value - value % interval
```

A basic histogram aggregation on a single numeric field `value` (which may be a single or multi valued field) is sketched below; a histogram aggregation can also run over multiple fields.
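```json
{
  "aggs": {
    "values_histo": {
      "histogram": {
        "field": "value",
        "interval": 5
      }
    }
  }
}
```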
The output of the histogram is an array of buckets, where each bucket holds its key and the number of documents that fall in it. This array can be sorted based on different attributes, in ascending or descending order:
- `_key` - the buckets will be sorted by their key
- `_count` - the buckets will be sorted by the number of documents that fall in them
- `aggName` - buckets may hold other aggregations that are applied to the documents that fall in them; it is possible to sort the buckets based on the direct single-valued calc aggregations they hold
- `aggName` & `valueName` - it is also possible to sort buckets based on the direct multi-valued calc aggregations they hold

Sorting by bucket key descending looks like `"order" : { "_key" : "desc" }`, and sorting by document count ascending like `"order" : { "_count" : "asc" }`. One can also add a sum aggregation (a single valued calc aggregation) to the buckets and sort by it, or add a stats aggregation (a multi-valued calc aggregation) to the buckets and sort by, e.g., its avg (`"order" : { "value_stats.avg" : "asc" }`). The sum case is sketched below.
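```json
{
  "aggs": {
    "values_histo": {
      "histogram": {
        "field": "value",
        "interval": 5,
        "order": { "value_sum": "desc" }
      },
      "aggs": {
        "value_sum": { "sum": {} }
      }
    }
  }
}
```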
Using value scripts to "preprocess" the values before the bucketing is supported, and it's also possible to use document level scripts to compute the value by which the documents will be "bucketed". A value-script sketch:
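```json
{
  "aggs": {
    "price_histo": {
      "histogram": {
        "field": "price",
        "script": "_value * 1.2",
        "interval": 5
      }
    }
  }
}
```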
Output:
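```json
{
  "aggregations": {
    "price_histo": [
      { "key": 0, "doc_count": 2 },
      { "key": 5, "doc_count": 7 },
      { "key": 10, "doc_count": 4 }
    ]
  }
}
```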
### Date Histogram
Date histogram is a similar aggregation to the normal histogram (as described above), except it can only work on date fields. Since dates are indexed internally as long values, it's possible to use the normal histogram on dates as well, but the problem stems from the fact that time-based intervals are not fixed (think of leap years and the number of days in a month). For this reason, we need special support for time-based data. From a functionality perspective, this histogram supports the same features as the normal histogram. The main difference is that the interval can be specified by time expressions.
Building month-long bucket intervals:
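```json
{
  "aggs": {
    "dates_histo": {
      "date_histogram": {
        "field": "date",
        "interval": "month"
      }
    }
  }
}
```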
or based on 1.5 months:
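```json
"interval" : "1.5M"
```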
Other available expressions for the interval: `year`, `quarter`, `week`, `day`, `hour`, `minute`, `second`.
Since internally, dates are represented as 64bit numbers, these numbers are returned as the bucket keys (each key representing a date). For this reason, it is also possible to define a date format, which will result in returning the dates as formatted strings next to the numeric key values:
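```json
{
  "aggs": {
    "dates_histo": {
      "date_histogram": {
        "field": "date",
        "interval": "month",
        "format": "yyyy-MM-dd"
      }
    }
  }
}
```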
Output:
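```json
{
  "aggregations": {
    "dates_histo": [
      { "key": 1328054400000, "key_as_string": "2012-02-01", "doc_count": 2 },
      { "key": 1330560000000, "key_as_string": "2012-03-01", "doc_count": 3 }
    ]
  }
}
```

(The numeric keys and the name of the formatted-string field are illustrative here.)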
Timezones are also supported, enabling the user to define by which timezone they'd like to bucket the documents (this support is very similar to the TZ support in the DateHistogram facet).
Similar to the current date histogram facet, `pre_offset` & `post_offset` are also supported, for offsets applied pre-rounding and post-rounding. The values are time values with a possible `-` sign. For example, to offset a week rounding to start on Sunday instead of Monday, one can pass a `pre_offset` of `-1d` to decrease a day before doing the (Monday-based) week rounding, and then have `post_offset` set to `-1d` to actually set the return value to be Sunday, and not Monday.

Like with the normal histogram, both document level scripts and value scripts are supported. It is possible to control the order of the buckets that are returned. And of course, one can nest other aggregations within the buckets.
Both the normal `histogram` and the `date_histogram` now support computing/returning empty buckets. This can be controlled by setting the `compute_empty_buckets` parameter to `true` (defaults to `false`).

### Geo Distance
An aggregation that works on `geo_point` fields. Conceptually, it works very similarly to the range aggregation. The user can define a point of `origin` and a set of distance range buckets. The aggregation evaluates the distance of each document from the `origin` point and determines the bucket each document belongs to based on the ranges (a document belongs to a bucket if the distance between the document and the `origin` falls within the distance range of the bucket).
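A request sketch (field name, origin, distances and counts are illustrative):

```json
{
  "aggs": {
    "rings": {
      "geo_distance": {
        "field": "location",
        "origin": "52.3760, 4.894",
        "ranges": [
          { "to": 100 },
          { "from": 100, "to": 300 },
          { "from": 300 }
        ]
      }
    }
  }
}
```

Output

```json
{
  "aggregations": {
    "rings": [
      { "to": 100, "doc_count": 3 },
      { "from": 100, "to": 300, "doc_count": 1 },
      { "from": 300, "doc_count": 2 }
    ]
  }
}
```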
The specified `field` must be of type `geo_point` (which can only be set explicitly in the mappings). It can also hold an array of `geo_point` fields, in which case all will be taken into account during aggregation. The `origin` point can accept all formats `geo_point` supports:
- `{ "lat" : 52.3760, "lon" : 4.894 }` - this is the safest format, as it is the most explicit about the `lat` & `lon` values
- `"52.3760, 4.894"` - where the first number is the `lat` and the second is the `lon`
- `[4.894, 52.3760]` - which is based on the GeoJSON standard, where the first number is the `lon` and the second one is the `lat`

By default, the distance unit is `km`, but it can also accept: `mi` (miles), `in` (inches), `yd` (yards), `m` (meters), `cm` (centimeters), `mm` (millimeters).
(millimeters).There are two distance calculation modes:
arc
(the default) andplane
. Thearc
calculation is the most accurate one but also the more expensive one in terms of performance. Theplane
is faster but less accurate. Consider usingplane
when your search context is narrow smaller areas (like cities or even countries).plane
may return higher error mergins for searches across very large areans (e.g. cross atlantic search).Nested
A special single bucket aggregation which enables aggregating nested documents:
assuming the following mapping:
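(A sketch; the `product`/`resellers` names are illustrative.)

```json
{
  "product": {
    "properties": {
      "resellers": {
        "type": "nested",
        "properties": {
          "name": { "type": "string" },
          "price": { "type": "double" }
        }
      }
    }
  }
}
```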
Here's how a nested aggregation can be defined:
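```json
{
  "aggs": {
    "resellers": {
      "nested": { "path": "resellers" },
      "aggs": {
        "min_price": { "min": { "field": "resellers.price" } }
      }
    }
  }
}
```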
As you can see above, the nested aggregation requires the path of the nested documents within the top level documents. Then one can define any type of aggregation over these nested documents.
Output:
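```json
{
  "aggregations": {
    "resellers": {
      "min_price": { "value": 350.0 }
    }
  }
}
```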
## Examples
### Filter + Range + Missing + Stats
Analyse the online product catalog web access logs. The following aggregation will only aggregate the logs from yesterday (the filter aggregation), providing information for different price ranges (the range aggregation), where per price range we'll return the price stats on that range and the total page views for the documents in each range. We're also interested in finding all the bloopers - all those products that for some reason don't have prices associated with them, yet are still exposed to the user and being accessed and viewed.
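A sketch (field names are illustrative):

```json
{
  "aggs": {
    "yesterday": {
      "filter": {
        "range": { "timestamp": { "from": "now-1d/d", "to": "now/d" } }
      },
      "aggs": {
        "price_ranges": {
          "range": {
            "field": "price",
            "ranges": [
              { "to": 50 },
              { "from": 50, "to": 100 },
              { "from": 100 }
            ]
          },
          "aggs": {
            "price_stats": { "stats": {} },
            "total_page_views": { "sum": { "field": "page_views" } }
          }
        },
        "missing_price": {
          "missing": { "field": "price" }
        }
      }
    }
  }
}
```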
### Aggregating Hierarchical Data
Quite often you'd like to get aggregations on locations in a hierarchical manner. For example, show all countries and how many documents fall within each country, and for each country show a breakdown by city. Here's a simple way to do it using hierarchical terms aggregations (field names are illustrative):
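```json
{
  "aggs": {
    "countries": {
      "terms": { "field": "country" },
      "aggs": {
        "cities": {
          "terms": { "field": "city" }
        }
      }
    }
  }
}
```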