Closed johnsardine closed 2 years ago
That sounds like a good idea! And definitely open to contributions!
I think it's similar to `track`, so clients could send results back to Flagr with additional evaluation context together with the previous evaluation result.
I would propose `evalContext.entityContext`:
https://checkr.github.io/flagr/api_docs/#operation/postEvaluation
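As a minimal sketch of what that could look like, here is how a client might build a `postEvaluation` request body with an `entityContext`. The field names follow the linked API docs, but treat this as illustrative rather than a definitive client implementation:

```python
import json

def build_eval_request(entity_id, flag_key, entity_context):
    """Build a Flagr postEvaluation request body.

    entityContext carries arbitrary key/value pairs that Flagr's
    segment constraints can match against.
    """
    return {
        "entityID": entity_id,
        "flagKey": flag_key,
        "entityContext": entity_context,  # e.g. user attributes
    }

body = build_eval_request("user-123", "new_checkout_flow", {"plan": "pro"})
print(json.dumps(body))
```

The same `entityContext` that was used for evaluation could then be echoed back by the client in a later tracking call, which is the idea being proposed here.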
Let me know if you want to start working on it, or I will find some time later.
@zhouzhuojie Thank you for your prompt reply. I thought about it a bit more with my team, and maybe you reached the same conclusion on your side: the actual collection of the results is probably not a priority for this project.
Imagine you collect metrics using Mixpanel: you can easily collect the experiment results there. Otherwise you'd have to build several analysis mechanisms into Flagr itself.
Thank you for looking into this, but I'll drop this request unless you see a need for it.
It's not a very high priority, and Flagr already collects server-side evaluation logs.
I would imagine this can be useful for the following scenarios:

- `track` is the most accurate way of logging that impression.
- `track` can help to add that context to the logs.

I will keep this issue open in case someone (including me) wants to implement it.
Flagr can decide which variation to use for an experiment, but it does not receive the data generated by the experiment.
For example: if you're evaluating the time to complete a task under an A/B variant, you'd want to log how much time variant A took versus variant B.
Does Flagr intend to collect these results in the near future? Is it open to contributions?
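For illustration, the kind of measurement being described could look like the sketch below: time a task and tag the record with the variant returned by a (hypothetical) Flagr evaluation, so an external analytics system can compare variants. The function and field names here are assumptions for the example, not Flagr APIs:

```python
import time

def timed_task(variant_key, task):
    """Run `task` and return its duration tagged with the variant.

    Illustrative only: the resulting record would be shipped to an
    analytics backend, not to Flagr itself.
    """
    start = time.monotonic()
    task()
    elapsed = time.monotonic() - start
    return {"variant": variant_key, "seconds": elapsed}

record = timed_task("A", lambda: None)
```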