Open jduo opened 1 month ago
I have this almost hooked up. I loaded the students table which has name, gpa, and grad_year fields.
When I issue this PPL query, it seems to use the schema from the implied ProjectOperator instead of the schema from the TRENDLINE command, even though I overrode TrendlineOperator#schema() to build a schema based only on the computations list:
{ "query" : "source=students | TRENDLINE SMA(1, gpa) as foo " }
I get the following JSON result of null arrays:
{ "schema": [ { "name": "grad_year", "type": "long" }, { "name": "name", "type": "string" }, { "name": "gpa", "type": "float" } ], "datarows": [ [ null, null, null ], [ null, null, null ], [ null, null, null ] ], "total": 3, "size": 3 }
However, if I change the PPL to use an alias that happens to have the same name as the original field:
{ "query" : "source=students | TRENDLINE SMA(1, gpa) as gpa " }
I get data back correctly for one of the array elements in each row.
Is it correct that ProjectOperator does not use the schema from its input?
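For illustration only, here is a minimal sketch of what "build a schema based on the computations list" means in my description above. The names are hypothetical stand-ins, not the plugin's actual classes: the operator reports only the computation aliases, so a downstream projection that still asks for grad_year/name/gpa would find no matching columns, which would be consistent with the all-null rows above.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-ins for the plugin's computation and column types.
record TrendlineComputation(String sourceField, String alias) {}
record Column(String name, String type) {}

final class TrendlineSchemaSketch {
  // Builds the output schema purely from the TRENDLINE computations,
  // ignoring the child operator's columns entirely.
  static List<Column> schema(List<TrendlineComputation> computations) {
    return computations.stream()
        .map(c -> new Column(c.alias(), "double")) // SMA results assumed numeric here
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // TRENDLINE SMA(1, gpa) as foo -> a single "foo" column under this scheme
    System.out.println(schema(List.of(new TrendlineComputation("gpa", "foo"))));
  }
}
```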
I would expect the only field out of this schema to be the one computation in trendline ("foo"), rather than all 3 fields in the real index, but perhaps I'm mistaken here.
@vamsi-amazon @penghuo can you please verify?
Possible design for trendline output schema:
- If a field in the input is not used in any trendline computation, it shows up unaltered.
- If a field is used in a trendline computation and the computation alias matches the field name, the field is replaced with the trendline computation.
- If a field is used in a trendline computation and the computation alias differs from the field name, the computation shows up as a new field in the result.
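To make these rules concrete (using a hypothetical helper, not the plugin's API): for source=students with columns (name, gpa, grad_year), `TRENDLINE SMA(1, gpa) as gpa` would keep the three columns with gpa replaced by the moving average, while `TRENDLINE SMA(1, gpa) as foo` would produce (name, gpa, grad_year, foo). A sketch of the column rule, with the new field appended at the end for simplicity:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed output-schema rule; not the plugin's API.
final class TrendlineOutputSchemaSketch {
  // aliasBySource maps a source field to its computation alias, e.g. "gpa" -> "foo".
  static List<String> outputColumns(List<String> inputColumns, Map<String, String> aliasBySource) {
    List<String> out = new ArrayList<>();
    List<String> appended = new ArrayList<>();
    for (String column : inputColumns) {
      String alias = aliasBySource.get(column);
      if (alias == null || alias.equals(column)) {
        // Rule 1: untouched fields pass through unaltered.
        // Rule 2: alias == field name, so the field is replaced in place.
        out.add(column);
      } else {
        // Rule 3: the original field stays and the alias is added as a new field
        // (appended at the end here; the exact position is an open detail).
        out.add(column);
        appended.add(alias);
      }
    }
    out.addAll(appended);
    return out;
  }

  public static void main(String[] args) {
    List<String> input = List.of("name", "gpa", "grad_year");
    // TRENDLINE SMA(1, gpa) as foo -> [name, gpa, grad_year, foo]
    System.out.println(outputColumns(input, Map.of("gpa", "foo")));
    // TRENDLINE SMA(1, gpa) as gpa -> [name, gpa, grad_year]
    System.out.println(outputColumns(input, Map.of("gpa", "gpa")));
  }
}
```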
Requesting reviews from @LantaoJin @MaxKsyunz. Thanks.
@jduo did you manage to review the Spark trendline PR?
@jduo yes, I think it makes sense... @penghuo @dai-chen?
@YANG-DB, I used the PPL parser code from the Spark PR. The schema semantics seem to be the same AFAIK, but I haven't tried the Spark one out. The same goes for the handling of results when there aren't enough samples (returning NULL). @kt-eliatra?
The majority of the implementation is done. There's some more work left to support datetime types. Only simple moving average is implemented, not weighted moving average.
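For context, a rough sketch of the simple moving average semantics discussed here, not the operator's actual code, assuming (per the NULL handling mentioned above) that rows without enough preceding samples produce null:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of SMA semantics as discussed above; not the actual operator code.
final class SimpleMovingAverageSketch {
  static List<Double> sma(List<Double> values, int window) {
    List<Double> out = new ArrayList<>();
    double runningSum = 0.0;
    for (int i = 0; i < values.size(); i++) {
      runningSum += values.get(i);
      if (i >= window) {
        runningSum -= values.get(i - window); // drop the value that left the window
      }
      if (i + 1 < window) {
        out.add(null); // not enough samples yet -> NULL, per the behavior mentioned above
      } else {
        out.add(runningSum / window);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    // SMA(2) over gpa-like values: the first row has no full window yet.
    System.out.println(sma(List.of(3.0, 4.0, 2.0), 2)); // [null, 3.5, 3.0]
  }
}
```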
Datetime support has been added so this is effectively code complete (only supporting simple moving average for this iteration).
@MaxKsyunz @Yury-Fridlyand
This functionality hasn't been tested with pagination.
There are syntax changes in the Spark version of this command that haven't been reflected here (opensearch-spark/#833):
- Alias has been made optional.
- There is now an optional sort specification.
I think we could support the latter here by having the Analyzer return a LogicalSort on top of the LogicalTrendline if the sort clause is added.
It's not clear to me in the Spark version what the field name should be if an alias is not specified. @YANG-DB @salyh @kt-eliatra
Any insight on what could trigger bwc errors here?
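A rough sketch of the "LogicalSort on top of the LogicalTrendline" idea mentioned above, using simplified stand-in types rather than the actual core classes; this only shows the wiring, not the real Analyzer API:

```java
import java.util.List;

// Simplified stand-ins; the real Analyzer / logical plan classes have richer APIs.
interface LogicalPlan {}
record LogicalTrendlineSketch(LogicalPlan child, List<String> computations) implements LogicalPlan {}
record LogicalSortSketch(LogicalPlan child, List<String> sortFields) implements LogicalPlan {}

final class TrendlineAnalyzerSketch {
  // If a sort clause is present, wrap the trendline node in a sort node as described above;
  // otherwise return the trendline node directly.
  static LogicalPlan analyzeTrendline(LogicalPlan child,
                                      List<String> computations,
                                      List<String> sortFields) {
    LogicalPlan trendline = new LogicalTrendlineSketch(child, computations);
    return sortFields.isEmpty() ? trendline : new LogicalSortSketch(trendline, sortFields);
  }

  public static void main(String[] args) {
    LogicalPlan scan = new LogicalPlan() {}; // stand-in for the source relation
    System.out.println(analyzeTrendline(scan, List.of("SMA(2, gpa) as foo"), List.of("gpa")));
  }
}
```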
https://github.com/opensearch-project/opensearch-spark/pull/833#discussion_r1823003234
@salyh @YANG-DB, I have added the sort functionality and made alias optional.
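For reference, assuming the syntax mirrors the Spark PR (this exact form is my assumption, not confirmed here), a query using the optional sort and omitting the alias might look like: { "query" : "source=students | TRENDLINE sort gpa SMA(2, gpa)" }.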
Rebasing on upstream/main to see if the bwc test failures go away.
@salyh, @YANG-DB, @LantaoJin would you be able to review this PR? As the Spark PR is finished, it might make sense to do this while it is still fresh.
Attention: Patch coverage is 99.04306% with 2 lines in your changes missing coverage. Please review.
Project coverage is 94.54%. Comparing base (5716cab) to head (8d7bc7c). Report is 3 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...pensearch/sql/ppl/parser/AstExpressionBuilder.java | 81.81% | 1 Missing and 1 partial :warning: |
Description
Adds the trendline command
Related Issues
Resolves #3013
#3011
Check List
- Commits are signed per the DCO using --signoff.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here.