open-telemetry / oteps

OpenTelemetry Enhancement Proposals
https://opentelemetry.io
Apache License 2.0

Columnar encoding for the OpenTelemetry protocol #171

Closed lquerel closed 1 year ago

lquerel commented 3 years ago

Ongoing experiments, research, and draft specification for a multi-variate columnar encoding of OTLP.

At the request of @jmacd

linux-foundation-easycla[bot] commented 3 years ago

CLA Signed

The committers listed above are authorized under a signed CLA.

tigrannajaryan commented 3 years ago

Closing/reopening to see if EasyCLA passes.

lquerel commented 3 years ago

Closing/reopening to see if EasyCLA passes.

@tigrannajaryan I was able to fix the CLA issue by adding my professional email to my GitHub account.

lquerel commented 3 years ago

@tigrannajaryan and @yurishkuro is there anything I can do to complete/clarify this OTEP?

tigrannajaryan commented 3 years ago

@tigrannajaryan and @yurishkuro is there anything I can do to complete/clarify this OTEP?

@lquerel I may have missed it but I don't see answers to these questions:

Apache Arrow data is represented as an opaque byte array in the BatchEvent protobuf message. Is the intent here that it is essentially unprocessable in intermediaries such as the OpenTelemetry Collector and will simply be forwarded as is? If that is not the intent, then how can this data be processed?

Given that the OpenTelemetry Collector is fundamentally built on the current row-oriented data model, and all processing capabilities in the Collector work in terms of that model, how can the telemetry received in BatchEvent be processed? Do we expect that columnar data will be converted to row-oriented data when it hits the Collector, processed in row format, then converted back to columnar data to exit the Collector? If this is the intent, what's the performance impact?
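For context, the message shape under discussion is roughly the following. This is a hypothetical Go rendering, not the actual generated protobuf code; the SchemaID and RecordCount fields are assumptions made for illustration.

```go
// Hypothetical Go rendering of a BatchEvent-style message: the Arrow data
// is an opaque byte array, so a generic intermediary sees only bytes plus
// whatever routing metadata travels alongside them.
package wire

type BatchEvent struct {
	SchemaID     string // assumed: identifies the Arrow schema of the payload
	RecordCount  int64  // assumed: number of rows in the encoded batch
	ArrowPayload []byte // serialized Arrow data, opaque to intermediaries
}
```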

lquerel commented 3 years ago

@tigrannajaryan and @yurishkuro is there anything I can do to complete/clarify this OTEP?

@lquerel I may have missed it but I don't see answers to these questions:

Apache Arrow data is represented as an opaque byte array in the BatchEvent protobuf message. Is the intent here that it is essentially unprocessable in intermediaries such as the OpenTelemetry Collector and will simply be forwarded as is? If that is not the intent, then how can this data be processed?

Given that the OpenTelemetry Collector is fundamentally built on the current row-oriented data model, and all processing capabilities in the Collector work in terms of that model, how can the telemetry received in BatchEvent be processed? Do we expect that columnar data will be converted to row-oriented data when it hits the Collector, processed in row format, then converted back to columnar data to exit the Collector? If this is the intent, what's the performance impact?

@tigrannajaryan I had added yesterday in the section "OpenTelemetry entities to Arrow mapping" some elements to answer your questions and I submitted today some additional clarifications. Please let me know if this is what you expected or if I missed something.

tigrannajaryan commented 3 years ago

I spent a bit more time thinking about this proposal.

I think it can be a very efficient representation for specialized cases. It is not clear whether it can serve as a general-purpose telemetry protocol (the role that OTLP serves today), primarily because it is not clear how intermediary nodes like the Collector can process data in this format efficiently while also letting users define such transformations in a user-friendly way, as current Collector processors do.

I think the best way forward for this proposal would be to:

carlosalberto commented 3 years ago

I second defining this as a new OTLP format (e.g. otlp-arrow), and having the usual prototypes in a few languages.

anuraaga commented 3 years ago

Is my understanding correct that this format's intent isn't RPC but to save for analytics processing? So we'd expect collector to still accept OTLP but save this format to S3?

lquerel commented 3 years ago

@tigrannajaryan, @jmacd thanks for your feedback. I will work on them as soon as possible. I have some other urgent tasks to complete first at F5.

lquerel commented 3 years ago

Is my understanding correct that this format's intent isn't RPC but to save for analytics processing? So we'd expect collector to still accept OTLP but save this format to S3?

@anuraaga, no, the intent of this format is to optimize data transfer and in-memory data processing based on a columnar representation. Apache Arrow buffers can be easily converted into the Parquet format for storage.

gramidt commented 3 years ago

Is my understanding correct that this format's intent isn't RPC but to save for analytics processing? So we'd expect collector to still accept OTLP but save this format to S3?

@anuraaga - While this format can help for analytics processing, it serves a much greater purpose:

1) Enables multivariate time-series data. This is critical for systems where metric values are related and wouldn't make sense as individual univariate values (e.g., firewall metrics, multi-axis devices, browser events). The protocol today only supports univariate metrics (with multiple dimensions: attributes, resources); a sketch of the difference follows this list.

2) OTLP as it stands is not performant/cost-efficient enough for capturing all data from high-throughput systems (e.g., edge proxies, load balancers). This proposal will not only enable capturing data from high-throughput systems, but will also significantly reduce the operating cost of processing and receiving telemetry (egress, ingress, compute, etc.).
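To make point 1 concrete, here is a minimal sketch of the data-model difference. The type names (UnivariatePoint, MultivariatePoint) are invented for illustration and are not part of the proposal.

```go
// Package multivariate sketches the difference between OTLP's current
// univariate metric points and a hypothetical multivariate point.
package multivariate

import "time"

// UnivariatePoint mirrors how OTLP models metrics today: one value per
// point, so N related measurements repeat the timestamp and attributes N times.
type UnivariatePoint struct {
	Name       string
	Timestamp  time.Time
	Attributes map[string]string
	Value      float64
}

// MultivariatePoint stores the timestamp and attributes once, shared by all
// related values (e.g. a firewall reporting bytes_in, bytes_out, and drops
// sampled at the same instant).
type MultivariatePoint struct {
	Timestamp  time.Time
	Attributes map[string]string
	Values     map[string]float64 // metric name -> value
}
```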

jmacd commented 3 years ago

OTLP as it stands is not performant/cost efficient enough for capturing all data

Lightstep will corroborate this statement. Customers are looking to Sampling today as an approach to lowering data collection costs for OpenTelemetry trace data. The proposal in this OTEP suggests that users could pay more than an order of magnitude less to collect the same amount of Span data. Where we see this being used to the customer's advantage: OTel collectors would receive ordinary OTLP and then--in an exporter--perform Columnar compression for sending outside their network, i.e., to the vendor across an expensive network link. OTel collector would not need a new pipeline, only a new exporter.

It would be unusual for an exporter to exist with no corresponding receiver or pipeline. Receiving column-compressed metric event data is not very different from receiving Statsd events; they're just more compressed. The receiver will have to perform multivariate-to-univariate metric aggregation in order to inject ordinary OTLP into an OTel Collector metrics pipeline. As a user this is not what I'm looking for, but it's something we could build if there is interest.
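As an illustration of that aggregation step, a minimal sketch of the multivariate-to-univariate expansion (types and function names are invented; a real receiver would emit the Collector's pdata types):

```go
// Package convert sketches the multivariate-to-univariate expansion a
// receiver could perform to inject ordinary OTLP into a metrics pipeline.
package convert

import "time"

type UnivariatePoint struct {
	Name       string
	Timestamp  time.Time
	Attributes map[string]string
	Value      float64
}

type MultivariatePoint struct {
	Timestamp  time.Time
	Attributes map[string]string
	Values     map[string]float64
}

// ToUnivariate expands one multivariate point into the N univariate points
// today's pipelines expect, duplicating the shared timestamp and attributes
// into every resulting point (exactly the overhead the columnar format avoids).
func ToUnivariate(mv MultivariatePoint) []UnivariatePoint {
	out := make([]UnivariatePoint, 0, len(mv.Values))
	for name, v := range mv.Values {
		out = append(out, UnivariatePoint{
			Name:       name,
			Timestamp:  mv.Timestamp,
			Attributes: mv.Attributes,
			Value:      v,
		})
	}
	return out
}
```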

gramidt commented 3 years ago

It would be unusual for an exporter to exist with no corresponding receiver or pipeline. Receiving column-compressed metric event data is not very different from receiving Statsd events; they're just more compressed. The receiver will have to perform multivariate-to-univariate metric aggregation in order to inject ordinary OTLP into an OTel Collector metrics pipeline. As a user this is not what I'm looking for, but it's something we could build if there is interest.

+1 - I believe both a new receiver (multivariate to univariate) and a new straight event pipeline will be needed. There are users (I am not at liberty to mention their names) that are looking forward to this enhancement so they can collect and process multivariate time-series. These users currently have multiple environments set up where they use collectors in varying configurations to buffer, filter, and export telemetry.

carlosalberto commented 3 years ago

Where we see this being used to the customer's advantage: OTel collectors would receive ordinary OTLP and then--in an exporter--perform Columnar compression for sending outside their network, i.e., to the vendor across an expensive network link. OTel collector would not need a new pipeline, only a new exporter.

Trying to re-take the conversation - I agree 100% with the previous paragraph. Once we can verify and tune that it works as expected, we could extend this to the rest of the SDKs later on.

gramidt commented 3 years ago

Where we see this being used to the customer's advantage: OTel collectors would receive ordinary OTLP and then--in an exporter--perform Columnar compression for sending outside their network, i.e., to the vendor across an expensive network link. OTel collector would not need a new pipeline, only a new exporter.

Trying to re-take the conversation - I agree 100% with the previous paragraph. Once we can verify and tune that it works as expected, we could extend this to the rest of the SDKs later on.

While an exporter is a good start, a pipeline with end-to-end multivariate support will also be needed for the Collector. There are Fortune 50 companies that use the Collector today, and they want to be able to process multivariate time-series using their existing tooling.

jsuereth commented 3 years ago

One thing I haven't seen in this proposal (and I'm curious what the plan is going forward) is what happens when you aggregate data in columnar format, given that timestamps are shared.

  1. Do you ever envision a columnar representations joining Traces Metrics + Logs in one "event"/"batch"?
  2. Metrics in OTLP today are arbitrarily aggregated and exported at an interval (vs. actually being multivariate or sampled at an important event). If we support columnar metrics, should these be synchronously co-exported in place, or does the current practice of aggregating (e.g. latency) make sense? Would it make more sense to, e.g., export a Span + related metric points in one columnar export?
  3. Logs vs. Events are already called out. Didn't catch the answer to that one.
lquerel commented 3 years ago

@tigrannajaryan @jmacd @jsuereth @carlosalberto @gramidt @yurishkuro Sorry guys for the delay. Between my PTOs and some urgent tasks in my day-to-day job, I have not been able to make any progress in the last few weeks. Fortunately, this will change next week.

lquerel commented 3 years ago

[Diagram: OTEL - Page 6 (2)]

@tigrannajaryan @jmacd @carlosalberto @gramidt @jsuereth @anuraaga @yurishkuro I've tried to summarize in a one diagram the direction I'm taking based on the various feedback. Let me know what you think.

Unless there is strong disagreement, I'd like to continue integrating logs and, more importantly, traces into the benchmark to clarify whether this approach could be global or more focused on the multivariate time-series scenario.

I also think we could phase this project by implementing the multivariate time-series first followed by traces and logs (if the benchmarks confirm my assumption).

Thank you for your patience.

lquerel commented 3 years ago

@jsuereth

One thing I haven't seen in this proposal (and I'm curious what the plan for this is going forward) is when you aggregate data in columnar format given the timestamps are shared.

  1. Do you ever envision a columnar representations joining Traces Metrics + Logs in one "event"/"batch"?

No, I don't think so. In order to be efficient, batches must be homogeneous, so by design a batch will be a collection of events sharing the same schema. However, when it makes sense, we could represent complex events, such as a span event containing related metrics, in a single schema (as you suggested in the next section) and consequently build homogeneous batches of such complex events. A sketch of this schema-based grouping follows.
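A minimal sketch of that grouping, assuming string attributes only (all names invented for illustration; a real implementation would also account for attribute value types):

```go
// Package batching sketches how an exporter might keep batches homogeneous:
// events are grouped by a key derived from their attribute names, so every
// batch shares one schema.
package batching

import (
	"sort"
	"strings"
)

// Event stands in for any OTLP entity with open-ended attributes.
type Event struct {
	Attributes map[string]string
}

// schemaKey derives a stable key from the sorted set of attribute names.
func schemaKey(e Event) string {
	names := make([]string, 0, len(e.Attributes))
	for name := range e.Attributes {
		names = append(names, name)
	}
	sort.Strings(names)
	return strings.Join(names, "|")
}

// GroupBySchema returns one homogeneous batch per schema key; each batch
// could then be encoded as a single Arrow RecordBatch.
func GroupBySchema(events []Event) map[string][]Event {
	batches := make(map[string][]Event)
	for _, e := range events {
		key := schemaKey(e)
		batches[key] = append(batches[key], e)
	}
	return batches
}
```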

  2. Metrics in OTLP today are arbitrarily aggregated and exported at an interval (vs. actually being multivariate or sampled at an important event). If we support columnar metrics, should these be synchronously co-exported in place, or does the current practice of aggregating (e.g. latency) make sense? Would it make more sense to, e.g., export a Span + related metric points in one columnar export?

I'm not sure I understood the questions correctly, so please rephrase if the following answers are not what you expected.

Regarding multivariate time-series aggregation, I don't think there is a fundamental difference from the univariate scenario. We should still be able to aggregate multivariate events sharing the same attributes/labels while respecting the aggregation rules of metrics according to their nature (gauge, counter, ...).

Regarding span + related metrics, this seems to me a valid multivariate time-series scenario. We have a set of metrics related to a set of complex attributes representing a span. This representation should avoid data duplication and should also improve processing speed/complexity by eliminating the need to recombine spans with their corresponding univariate metrics. We might need to separate these complex events from the standard metrics/logs/spans for compatibility reasons. In OTLP Arrow, metrics, logs, and spans can be converted into their counterparts in OTLP, sometimes with 1:N conversions (e.g. multivariate to N univariate). Things will probably be more challenging for complex events like the one you described. Separating them into a "not OTLP compatible" category will make that clear.

tigrannajaryan commented 3 years ago

@lquerel Sorry for delayed response. Thanks for posting the diagram. It makes the intent clear.

I believe what we see here is a need to introduce a new signal type for metrics (metrics-arrow or metrics-columnar, whatever we call it) in the Collector, and possibly also for logs and traces. This then necessitates new processors that work with the new signal types. This is potentially a very large amount of work, especially if we want to be close to feature parity with the processors that exist today.

We likely also need to support converting regular data to columnar and back so that existing receivers and exporters for non-OTLP formats are not left out of the ecosystem, and likely also signal-type conversion in pipelines to make it easier to use together with non-columnar types.

Then we are also looking at implementing columnar exporters in Otel SDKs, likely along with other changes needed to support the columnar data type.

I have a feeling that while it is technically feasible, it is unlikely that we as a community are currently able to lift such a major undertaking in the near future.

I do see the benefits and understand how it can lead to significant operational cost savings but at the moment I am very doubtful that this can happen soon simply because I don't see the engineering resources available that can turn this idea into reality.

It is certainly possible to start small (it is not an all-or-nothing endeavour), but some critical amount of support is needed to get value out of it, which is still a non-trivial amount of work, and even then I don't know if we will be able to lift off.

Sorry if this sounds negative. I like the proposal on the technical merit, but given the limited resources we have at Otel I personally don't see a good way forward right now. I hope I am wrong and there is a way to find engineers to work on this.

lquerel commented 3 years ago

@tigrannajaryan Thank you for your detailed comment. I am aware of the amount of work involved in implementing this OTEP. I am in the process of checking with my company to see if I can invest some of my time in it. I should have an answer by the end of next week. I will most likely start with the multivariate time series implementation as this is the most important part for F5.

I am also creating a trace-based benchmarking tool to allow other companies to test this approach on their own data (OTLP --> OTLP-Arrow, and Columnar SDK --> OTLP-Arrow [this one is not done yet]). @jmacd's team will probably test it soon. See this git repo --> https://github.com/lquerel/oltp-arrow

carlosalberto commented 3 years ago

It is certainly possible to start small (it is not an all-or-nothing endeavour), but some critical amount of support is needed to get value out of it, which is still a non-trivial amount of work, and even then I don't know if we will be able to lift off.

I agree with the feeling, and hence I would like to state again that working on this feature for Collector export would be a good, small, yet foundational step (the Collector pipeline would stay the same, but additional OTLP-arrow exporters would be created in parallel).

In Lightstep we are interested in this and we may be able to provide some cycles as well.

lquerel commented 3 years ago

@carlosalberto IMO, an OTLP-Arrow exporter will be relatively easy to implement for traces and logs. But I'm still not sure that we can generalize the transformation of a set of related univariate time-series into a multivariate time-series without the collaboration of the time-series producer.

lquerel commented 2 years ago

Just a few words to confirm that I will be able to spend several weeks (this quarter) implementing an OTLP-to-OTLP-Arrow receiver, a basic columnar-oriented processor, and an OTLP-Arrow exporter.

gramidt commented 2 years ago

@bogdandrutu @tigrannajaryan @jmacd @jsuereth - We're working on getting further support from a small team within F5 to work on this; however, support from other member organizations would help expedite the progress. Based on the data, the ROI for anyone who receives OTel telemetry would be significant.

I would like to help establish a working group to get this completed. Are any of you able to help recruit resources given the ROI for your organizations?

tigrannajaryan commented 2 years ago

Are any of you able to help recruit resources given the ROI for your organizations?

Unfortunately I don't think I can at the moment.

lquerel commented 2 years ago

@tigrannajaryan @jmacd @carlosalberto @gramidt @jsuereth @anuraaga @yurishkuro Hi all, I'm organizing a meeting next Thursday for a demo of an end-to-end implementation of the OTLP Arrow protocol. Please select your best slots in the following doodle poll and feel free to send this link to anyone interested. Thanks

https://doodle.com/meeting/participate/id/PdRPn3Ea

tigrannajaryan commented 2 years ago

@tigrannajaryan @jmacd @carlosalberto @gramidt @jsuereth @anuraaga @yurishkuro Hi all, I'm organizing a meeting next Thursday for a demo of an end-to-end implementation of the OTLP Arrow protocol. Please select your best slots in the following doodle poll and feel free to send this link to anyone interested. Thanks

https://doodle.com/meeting/participate/id/PdRPn3Ea

Done. It would be useful to add some more slots on another day to make it easier to find a time that works for everyone.

lquerel commented 2 years ago

@tigrannajaryan added Friday morning

lquerel commented 2 years ago

According to the poll results, this Thursday 11AM-12PM PST is the best time to do this demo. Please feel free to forward the Zoom link below to anyone who might be interested in this presentation/demo. Thanks - Laurent

Zoom meeting link: https://f5networks.zoom.us/j/92928467912?pwd=UFZ5VjhOVG9LeWhtTUNVcFVLUXA1Zz09

annanay25 commented 2 years ago

Hi @lquerel, this work looks really interesting.

Is it possible to record the call and publish the recording for those who can't make it?

lquerel commented 2 years ago

Hi Annanay,

I will record the meeting and will publish it.

Thanks

Laurent


lquerel commented 2 years ago

Link to the slides: https://docs.google.com/presentation/d/1u65vz7v4FpNto-t2NGx6D61C9uvk9f4mgKxCMIZKxe8/edit?usp=sharing

lquerel commented 2 years ago

Link to the recording: https://youtu.be/9dGGjREaggY

tigrannajaryan commented 2 years ago

@lquerel thanks for the demo yesterday. A few comments/questions (and I am happy to help you navigate some of these):

Non-string Attributes

Is it possible to have non-string attributes? The schema here shows that attributes are DataType::Utf8. Are other types possible?

Non-primitive Attributes

Is it possible to have non-primitive attribute values, e.g. arrays or maps, including nested?

Non-uniform Attributes

What happens if different records have different types of values for the same attribute name? This is atypical, but may technically happen and is allowed by OTLP today.

Dictionary

Do you store attribute values in the dictionary or only attribute names? If you store the values as well, the shared dictionary state can potentially grow unbounded very quickly when high-cardinality data is recorded. Is it possible to limit the dictionary size somehow? Is it possible to have a dictionary-less mode of operation for use cases where memory usage from shared state is a concern?

Can you seed the dictionary with Otel semantic conventions, so that these never need to be transferred on the wire?
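One way the size limit could work, sketched as a hypothetical bounded dictionary where a full dictionary makes the sender fall back to inline values; seeding with semantic-convention strings would just mean pre-populating the map:

```go
// Package dict sketches a size-capped string dictionary: values are interned
// until the cap is reached; after that, Intern returns ok=false and the
// caller sends the value inline instead of a dictionary index, keeping the
// shared state bounded even for high-cardinality data.
package dict

type BoundedDict struct {
	indexes map[string]int
	max     int
}

func NewBoundedDict(max int) *BoundedDict {
	return &BoundedDict{indexes: make(map[string]int), max: max}
}

// Intern returns the dictionary index for v, adding it if there is room.
func (d *BoundedDict) Intern(v string) (idx int, ok bool) {
	if idx, ok = d.indexes[v]; ok {
		return idx, true
	}
	if len(d.indexes) >= d.max {
		return 0, false // dictionary full: caller falls back to the inline value
	}
	idx = len(d.indexes)
	d.indexes[v] = idx
	return idx, true
}
```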

gRPC Streams

gRPC streams are not load-balancer friendly. They use persistent connections, and load balancers don't rebalance them. Over time this can result in a significant imbalance in target load. We have observed this with the OpenCensus protocol, which used gRPC streams.

One possible approach is to forcibly reconnect periodically.
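In a Go gRPC server, this periodic forced reconnect can be expressed with the standard keepalive options; the durations below are arbitrary examples:

```go
// Package server shows a gRPC server that periodically forces clients to
// reconnect. MaxConnectionAge closes connections after a maximum lifetime
// (with a grace period for in-flight RPCs), giving load balancers a chance
// to rebalance long-lived streams.
package server

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func NewServer() *grpc.Server {
	return grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionAge:      5 * time.Minute,  // force periodic reconnects
			MaxConnectionAgeGrace: 30 * time.Second, // let in-flight RPCs finish
		}),
	)
}
```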

Benchmarks

It would be useful to run benchmarks with very small batch sizes and also with real-world data, such as data from the Collector's hostmetricsreceiver or from the Otel Helm Chart on Kubernetes.

Naming

If the new protocol is not going to be interoperable with existing versions of OTLP receivers, then I don't think we can call it OTLP. That would be misleading and a source of confusion. If there is a way to make old and new versions of senders and receivers interoperate seamlessly, then it is fine to use the name. You may want to look into having the Arrow stream as an optionally attempted mode, with a fallback to the old mode if the receiver doesn't support it.
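A sketch of what such an optionally attempted mode could look like on the sender side. ArrowClient and ClassicClient are hypothetical stand-ins, and the sketch assumes an old receiver answers the unknown service with a gRPC Unimplemented status:

```go
// Package export sketches "try Arrow first, fall back to classic OTLP".
package export

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ArrowClient and ClassicClient are hypothetical stand-ins for the
// stream-oriented Arrow service and the existing OTLP service.
type ArrowClient interface {
	SendBatch(ctx context.Context, payload []byte) error
}
type ClassicClient interface {
	Export(ctx context.Context, payload []byte) error
}

// Export attempts the Arrow path first and falls back to classic OTLP when
// the receiver does not implement the new service.
func Export(ctx context.Context, arrow ArrowClient, classic ClassicClient, payload []byte) error {
	err := arrow.SendBatch(ctx, payload)
	if err == nil {
		return nil
	}
	if status.Code(err) == codes.Unimplemented {
		return classic.Export(ctx, payload) // old receiver: seamless fallback
	}
	return err
}
```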

Schema

The word "Schema" already has a meaning in OpenTelemetry and means something very different. It may be useful to disambiguate this in your proposal to avoid confusion.

Collector

I touched on this in the meeting, but I want to emphasize it one more time. There is a very large ecosystem around the current Collector already. It can be very useful to build on top of this ecosystem instead of trying to build a new one. There are over 120 components in the Collector, and re-implementing them would be a huge undertaking (even if not all processors are needed). There is a large number of current Collector contributors who don't necessarily know Rust. It is going to be very difficult to generate similar momentum around a new Collector.

I would highly advise looking into ways to make the new protocol work with the current Collector. Perhaps a first interesting step would be to implement the protocol as a Collector receiver and exporter. If it demonstrates a much lower wire size / network cost, that is sufficient to get the ball rolling, even if it means there is a performance impact because of the need to convert to/from the pdata internal memory representation.

jpkrohling commented 2 years ago

I just finished watching the presentation; thank you for delivering and recording it. I found it highly interesting, and I understand the choice of Rust given the ecosystem around Apache Arrow. But I share the same concerns as @tigrannajaryan: we cannot expect our current collector ecosystem to be rewritten in Rust, especially given that we are already late in delivering a v1.

That all said, I am excited about a columnar-friendly alternative to OTLP, potentially with a different name as @tigrannajaryan mentioned earlier.

lquerel commented 2 years ago

@tigrannajaryan Thank you for your very detailed feedback.

In general, I understand the need to leverage this large ecosystem and will definitely give your last suggestion some serious thought. I wasn't aware of the restriction regarding integration with C APIs. This obviously complicates my initial plan and makes integration with libraries like DataFusion problematic as well. I hope this vacation season will be a good opportunity to think about my options.

Answers to your questions:

tigrannajaryan commented 2 years ago
  • Naming: I'd like to better understand your proposal here (i.e. You may want to look into having the Arrow stream...).

@lquerel the current OTLP version is v0.11.0. Let's assume we introduce OTLP v0.12.0, which adds support for Arrow streams. Can we make senders of v0.12.0 interoperable with receivers of v0.11.0? If we can make that happen then I think we can safely call this protocol OTLP, otherwise it would be confusing. I think such interoperability is possible to achieve:

yurishkuro commented 2 years ago

@lquerel thanks for the demo. As I mentioned during the meeting, I am very interested in seeing how this could work from the SDK side. The backend pipeline you demoed has a strong dependency on fixed schemas for different data streams, yet "schemas" as such play no role in the OpenTelemetry APIs today; all data recording (traces and metrics) is done via an open-ended Map<String, Any>. Even though, in practice, any given service would normally have only a small number of "shapes" for its spans (and a larger but still fixed number of shapes for metrics), there is no mechanism for distinguishing which event is recorded with which shape. One naive way would be for the exporter to group the events by their schemas, which could be pretty expensive computationally (e.g. sorting attributes and computing a hash).

Or did you envision Arrow encoding only in the backend pipelines and not from the SDKs?
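For illustration, that naive grouping might derive a per-event shape key roughly as follows (a sketch, not a proposal; the cost mentioned above comes from the sort and hash running on every event):

```go
// Package shape sketches a per-event schema fingerprint: sort the attribute
// names, then hash name/type pairs so events with the same "shape" share a key.
package shape

import (
	"fmt"
	"hash/fnv"
	"sort"
)

func ShapeHash(attrs map[string]any) uint64 {
	names := make([]string, 0, len(attrs))
	for k := range attrs {
		names = append(names, k)
	}
	sort.Strings(names)

	h := fnv.New64a()
	for _, k := range names {
		// Include the dynamic type so "port"=80 and "port"="80" map to
		// different shapes.
		fmt.Fprintf(h, "%s:%T;", k, attrs[k])
	}
	return h.Sum64()
}
```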

lquerel commented 2 years ago

@tigrannajaryan yes, I think it is perfectly achievable, and I will check its feasibility soon. I like your approach because it fits perfectly into the current ecosystem while allowing gateways that natively support end-to-end Arrow streams to target bandwidth-optimized scenarios (see the Lightstep use cases described by @jmacd).

@yurishkuro I think both scenarios will coexist (at least at first).

The choice to use this option in the SDK would be left to the user.

Concerning the high variability of shapes for traces, I would like to study a concrete case a little more closely. Would you have a simple and reproducible scenario? Do you think the Synthetic Load Generator Utility described in the performance page is a good candidate for producing this type of trace?

tigrannajaryan commented 2 years ago

IIRC, the Synthetic Load Generator Utility produces fairly static data, so it is not a good example of variability.

lquerel commented 2 years ago

@tigrannajaryan I confirm that we can detect the presence of a new stream-oriented service and fall back to the existing services if it is not implemented by the receiver.

My plan is:

  1. Support a configuration with a Go Collector sending metrics/logs/traces to the Rust Collector in order to complete the transformation logic between the current version of the protocol and the new one.
  2. Support a configuration with a Rust Collector sending metrics/logs/traces to the Go Collector to support the reverse transformation.
  3. Check traces scenarios based on @yurishkuro feedback.
  4. Update the OTEP according to these modifications
  5. Enter the review process, refine the OTEP, and obtain approval.
  6. Update the Go OTLP receiver to expose the stream oriented service.
  7. Update the Go OTLP exporter to transform and send stream oriented events.

The last two steps could be done with the help of the community.

Looking forward to your feedback.

jgehrcke commented 2 years ago

Hello! Great to see this here. I have not followed the conversation in all detail but skimmed over the current Motivation and Explanation sections.

I would like to add a motivational aspect. Encapsulated binary Arrow messages have the advantage that their layout can be known/read prior to reading the bulk of the data. That is, a rate-limiting decision can be made on the receiving end after reading only a few bytes from a (TCP) connection. That could be a huge win for larger-scale environments compared to the current/legacy Prometheus (protobuf) way of communicating larger chunks of metric/log data (there, the receiving end typically reads and deserializes the complete protobuf message before it can make a rate-limiting decision).
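To make this concrete, here is a minimal sketch, independent of any particular Arrow library, of reading only the 8-byte encapsulated-message prefix of an Arrow IPC stream (continuation marker plus metadata length) before committing to read the rest:

```go
// Package peek reads the fixed prefix of an Arrow IPC encapsulated message:
// a 0xFFFFFFFF continuation marker followed by the little-endian length of
// the flatbuffer metadata. The metadata (schema, body length) can then be
// inspected before the bulk of the data is read, which is where a
// rate-limiting decision could be made.
package peek

import (
	"encoding/binary"
	"fmt"
	"io"
)

// ArrowMessagePrefix returns the metadata length announced by the next IPC
// message on r, after reading only 8 bytes from the connection.
func ArrowMessagePrefix(r io.Reader) (metadataLen uint32, err error) {
	var prefix [8]byte
	if _, err = io.ReadFull(r, prefix[:]); err != nil {
		return 0, err
	}
	if binary.LittleEndian.Uint32(prefix[0:4]) != 0xFFFFFFFF {
		return 0, fmt.Errorf("missing Arrow IPC continuation marker")
	}
	return binary.LittleEndian.Uint32(prefix[4:8]), nil
}
```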

Happy to elaborate/discuss.

lquerel commented 2 years ago

@jgehrcke I will add this to the motivation section. In the last iteration of the Arrow-based protocol, I separated the declaration of the Arrow Schema from the RecordBatch, so it's even slightly better for this use case. The schema is only sent once per stream, and delta dictionaries are also supported.

mdisibio commented 2 years ago

Hi, this proposal is very interesting and I have been following along for some time. Has there been any recent development? We are pursuing columnar trace storage in Grafana Tempo and would like to converge on something compatible if possible.

In particular, the open questions around attribute shape and how to translate the map<string,any> to a columnar form effectively.

Arrow integration has many benefits for processing, but since we are focused on the disk format, it is not immediately required. Happy to discuss details and thinking if there is interest.

lquerel commented 2 years ago

Hi @mdisibio. Short answer: yes, I'm still working on this proposal and its implementation (Rust and Go), but not at the speed I'd like.

I worked on the representation of attributes, which in fact form a kind of hierarchical structure. I ended up with the following Arrow representation:

I also changed the way OTLP entities are represented in an Arrow RecordBatch a bit, to get rid of some duplication; this makes sense in the context of a telemetry protocol but is not necessarily the best option for a disk format.

I am interested in discussing this with you in more detail. We definitely have everything to gain by finding the best tradeoff between disk and wire format to optimize the overall performance. Please contact me via email: l.querel@f5.com or laurent.querel@gmail.com.

lquerel commented 2 years ago

Hi everyone. This new update is a much more complete version of the specification, including:

New version of OTEP 0156 here.

The initial POC (Rust implementation) served as a validation of the specification and of the benchmarks. However, a Go library converting OTLP entities into OTLP Arrow entities is in progress. This library should allow the community to implement the receiver and exporter described in the specification.

@tigrannajaryan, @jmacd, @yurishkuro, @jsuereth, @gramidt, @carlosalberto, @jpkrohling, @pirgeo, @mdisibio, @jgehrcke, @annanay25, @anuraaga - I look forward to your feedback. Thanks in advance.

tigrannajaryan commented 2 years ago

@lquerel thanks for updating the OTEP. I am travelling and mostly AFK, but will take a look next week. If you have any Go code that implements this it would be also great to see it.

lquerel commented 2 years ago

@tigrannajaryan "If you have any Go code that implements this it would be also great to see it."

This is definitely the plan. I hope to make this public this summer.