Closed · adamjq closed this 1 month ago
@noCharger Could you please take a look at whether this can be supported via the LTR plugin? Thanks.
Thanks @noCharger. If you have any tips on how to profile latency for feature logging, that would be great to hear too.
@adamjq By convention, I believe the profile API will include latency in the `profile` field rather than the `hits` field. If we want to record the logging, can we create a logging schema that includes both necessary and optional fields?
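As a rough illustration, such a schema might look like the sketch below; the required/optional split and the field names (including `time_in_nanos`) are assumptions, not an agreed format:

```json
{
  "required": ["name", "value"],
  "optional": ["time_in_nanos"],
  "example": {
    "name": "title_match",
    "value": 2.31,
    "time_in_nanos": 41230
  }
}
```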
@noCharger @adamjq
The value of this issue is not fully clear to me. The components of the `sltr` query that contribute to latency are the feature-related subqueries and the model execution (i.e. prediction).
A regular query using `sltr` with `"profile": true` will include the latencies of the full re-ranking (RankerQuery) as well as the latencies of the subqueries. A query like
```json
{
  "profile": true,
  "query": {
    "query_string": {
      "query": "title:rambo"
    }
  },
  "rescore": {
    "query": {
      "rescore_query": {
        "sltr": {
          "params": {
            "keywords": "rambo"
          },
          "model": "my_ranklib_model"
        }
      }
    }
  }
}
```
will return
```json
{
  ...
  "profile": {
    "shards": [
      {
        "id": "[4uNofbnkSXKIqATg_DXrSw][movies][0]",
        "inbound_network_time_in_millis": 0,
        "outbound_network_time_in_millis": 0,
        "searches": [
          {
            "query": [
              {
                "type": "TermQuery",
                ...
              },
              {
                "type": "RankerQuery",
                "description": "rankerquery:",
                "time_in_nanos": 475327,
                "breakdown": {
                  "set_min_competitive_score_count": 0,
                  "match_count": 0,
                  "shallow_advance_count": 0,
                  "next_doc": 0,
                  "score_count": 3,
                  "compute_max_score_count": 0,
                  "advance": 10260,
                  "advance_count": 3,
                  "score": 88935,
                  "shallow_advance": 0,
                  "create_weight_count": 1,
                  "build_scorer": 121347,
                  "set_min_competitive_score": 0,
                  "match": 0,
                  "next_doc_count": 0,
                  "compute_max_score": 0,
                  "build_scorer_count": 2,
                  "create_weight": 254785
                },
                "children": [
                  {
                    "type": "TermQuery",
                    "description": "title:rambo",
                    "time_in_nanos": 330599,
                    "breakdown": {
                      "set_min_competitive_score_count": 0,
                      "match_count": 0,
                      "shallow_advance_count": 0,
                      "next_doc": 0,
                      "score_count": 3,
                      "compute_max_score_count": 0,
                      "advance": 3924,
                      "advance_count": 3,
                      "score": 4137,
                      "shallow_advance": 0,
                      "create_weight_count": 1,
                      "build_scorer": 99301,
                      "set_min_competitive_score": 0,
                      "match": 0,
                      "next_doc_count": 0,
                      "compute_max_score": 0,
                      "build_scorer_count": 2,
                      "create_weight": 223237
                    }
                  }
                ]
              }
            ],
            "rewrite_time": 48422,
            "collector": [
              {
                "name": "SimpleTopScoreDocCollector",
                "reason": "search_top_hits",
                "time_in_nanos": 55063
              }
            ]
          }
        ],
        "aggregations": []
      }
    ]
  }
}
```
The latency of the model should mostly depend on the model complexity (e.g. the number of trees) and the size of the feature vector, so I think this should be treated as an attribute of the model rather than as an attribute of individual features. The model latency should then be roughly the RankerQuery latency minus the sum of the latencies of the child queries. We could consider measuring the model latency directly to get more precise values and to simplify its evaluation, but I have not yet verified whether or how a plugin can modify the profile output.
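Using the numbers from the profile output above, that estimate works out as follows (attributing the entire remainder to the model is an approximation, since it also includes the RankerQuery's own weight creation and scorer setup):

```
  RankerQuery time_in_nanos                  475327
- TermQuery "title:rambo" child time         330599
---------------------------------------------------
≈ model execution + RankerQuery overhead     144728 ns (~0.14 ms)
```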
Thanks for the reply @JohannesDaniel. I was hoping to get better insight into the relative latency that each feature adds during feature logging, to understand whether some features contribute more to latency than others, rather than the latency of the model during inference. I think we can mark this as not needed, though, if I can get that from the profile API.
Is your feature request related to a problem?
As a developer, I want to be able to debug the latency each feature adds to a query during feature logging.
This would help identify features which have an outsized impact on latency.
What solution would you like?
Note: the queries below are taken from a POC repo I created here.
Add a field similar to `"time_in_nanos"` to the feature logs in the SLTR query response when `"profile": true`, e.g.:

Request:

Response:
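As a sketch of the proposed shape, a logged feature entry might look like the following; the `_ltrlog` wrapper reflects the LTR plugin's logging output, while the feature names and the placement of `time_in_nanos` are hypothetical:

```json
{
  "fields": {
    "_ltrlog": [
      {
        "log_entry1": [
          { "name": "title_match",  "value": 2.31, "time_in_nanos": 41230 },
          { "name": "phrase_match", "value": 0.87, "time_in_nanos": 193877 }
        ]
      }
    ]
  }
}
```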
What alternatives have you considered?
This feature could also be implemented via an SLTR query parameter instead of through the `profile` API setting.

Do you have any additional context?
Please let me know if there are already alternative ways to debug the latency each feature adds to a query during feature logging using the plugin, as I couldn't find any references in the official documentation.