jeremydvoss opened 1 year ago
From what I understand this is a request for a TC review of the Python Logging implementation. I will bring this up in the next TC meeting.
@open-telemetry/python-maintainers can you please also confirm the review request?
Yes, @tigrannajaryan this is our request, thanks!
@ocelotl thanks. I will confirm in the next TC meeting and will post back here who will do the review.
Update: No TC meeting this week due to Kubecon, will be discussed next week.
I will be doing the review.
Trace Context Injection works as expected, I can see a span and a log record in its context:
{
    "name": "roll",
    "context": {
        "trace_id": "0x8b346ee9af8718bf73a4318d073cbbb1",
        "span_id": "0x7d0581e1f3f51bea",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2023-11-22T17:01:36.027543Z",
    "end_time": "2023-11-22T17:01:36.027925Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "roll.value": "5"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.21.0",
            "service.name": "dice-server",
            "telemetry.auto.version": "0.42b0"
        },
        "schema_url": ""
    }
}
{
    "body": "Anonymous player is rolling the dice: 5",
    "severity_number": "<SeverityNumber.WARN: 13>",
    "severity_text": "WARNING",
    "attributes": {
        "otelSpanID": "7d0581e1f3f51bea",
        "otelTraceID": "8b346ee9af8718bf73a4318d073cbbb1",
        "otelTraceSampled": true,
        "otelServiceName": "dice-server"
    },
    "dropped_attributes": 0,
    "timestamp": "2023-11-22T17:01:36.027744Z",
    "trace_id": "0x8b346ee9af8718bf73a4318d073cbbb1",
    "span_id": "0x7d0581e1f3f51bea",
    "trace_flags": 1,
    "resource": "BoundedAttributes({'telemetry.sdk.language': 'python', 'telemetry.sdk.name': 'opentelemetry', 'telemetry.sdk.version': '1.21.0', 'service.name': 'dice-server', 'telemetry.auto.version': '0.42b0'}, maxlen=None)"
}
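For reference, a minimal sketch of the kind of setup that produces correlated console output like the above (the service name, logger name, and console exporters here are assumptions for illustration; the otel* attributes in the record above come from the logging instrumentation, which is not reproduced in this sketch):

import logging

from opentelemetry import trace
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

resource = Resource.create({"service.name": "dice-server"})  # assumed service name

# Spans go to the console exporter
trace.set_tracer_provider(TracerProvider(resource=resource))
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

# Log records go to the console exporter, bridged from stdlib logging
logger_provider = LoggerProvider(resource=resource)
set_logger_provider(logger_provider)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(ConsoleLogExporter()))
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))

tracer = trace.get_tracer("dice")
with tracer.start_as_current_span("roll") as span:
    span.set_attribute("roll.value", "5")
    # Emitted inside the span, so the log record carries its trace_id/span_id
    logging.getLogger("dice").warning("Anonymous player is rolling the dice: 5")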
The OTLP file exporter is not yet implemented according to the compatibility matrix, so I am skipping this step.
LoggerProvider.ForceFlush verified, works as expected. Code used:
logger2.warning("Jail zesty vixen who grabbed pay from quack.")

# Trace context correlation
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("foo"):
    # Do something
    logger2.error("Hyderabad, we have a major problem.")

logger_provider.force_flush()
time.sleep(10)
logger_provider.shutdown()
Logs are printed immediately when logger_provider.force_flush() is executed. Without force_flush() they are delayed until the batch processor decides to export them.
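For completeness, a sketch of the provider wiring the snippet above assumes (the console exporter, the logger name, and the 30-second schedule delay are my assumptions, chosen to make the batching delay visible):

import logging

from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter

logger_provider = LoggerProvider()
# With a long schedule delay, records queue up until force_flush() or shutdown()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(ConsoleLogExporter(), schedule_delay_millis=30000)
)

logger2 = logging.getLogger("flush-demo")  # assumed logger name
logger2.addHandler(LoggingHandler(logger_provider=logger_provider))

logger2.warning("queued until flushed")
logger_provider.force_flush()  # exports immediately instead of waiting for the batch interval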
@ocelotl @open-telemetry/python-maintainers I started the review and will be filing issues like this as I go: https://github.com/open-telemetry/opentelemetry-python/issues/created_by/tigrannajaryan
Let me know if you want me to ping you on each issue.
Hi: keen to get logging to stable here so please let me know if I can help out in any way
@garry-cairns Thank you. It would be great if you can help fix any of the non-compliance issues that I filed so far: https://github.com/open-telemetry/opentelemetry-python/issues/created_by/tigrannajaryan
@open-telemetry/python-maintainers FYI, I am waiting for you to respond/resolve issues I opened so far so that I can continue the review.
@tigrannajaryan
It is on our timeline to address these issues; we are just facing other priorities at the moment. @garry-cairns Feel free to pick up any of the tasks if you want to expedite the process.
@jeremydvoss fyi
Yep I've commented on 3545 and 3546 to say I don't think they reproduce, and I've sent a PR I believe will close out 3060 if you can take a look when you have a moment.
@lzchen @jeremydvoss is this still an ongoing issue?
@trask
@jeremydvoss will be working on bringing the Logging SDK to stability. The issues that @tigrannajaryan brought up as part of the review have not been fully addressed/resolved yet, but I believe the review itself is already finished. Does that mean this issue can be closed?
@tigrannajaryan are you planning to do a final review once the issues you reported have been resolved, or is the review portion done from your perspective? thanks
No, I have not finished the review. I had to stop after reporting the issues I found, and it took a while before they were addressed. Unfortunately I don't have time to finish the review now; I have other work to do. We likely need someone else to take over. At the very least, if I were to continue the review I would go over the fixed issues to make sure everything works according to the spec.
@tigrannajaryan
Thanks for all your effort and hard work in reviewing. We now have more resourcing on our end to commit to addressing the issues you have already filed. We will be on the lookout for someone else from the TC to possibly take over and will work with them extensively to push this to fruition. @jeremydvoss will likely be the PoC from the Python SIG side.
@lzchen I will add this to the next TC meeting agenda to find a reviewer.
We discussed in the TC meeting. I don't have time right now but should be able to help if folks can wait for a month (I should have bandwidth Nov. 18th - end of year 2024).
Thanks @reyang for the update!
I'm going to work on this, ETA ~mid-Nov
I have been investigating whether Python's logging signal is ready to be stabilized. Could we get a TC member to look through the code and confirm that it is in line with the logs spec and semantic conventions?
[UPDATE from @tigrannajaryan] Checklist to verify, borrowed from https://github.com/open-telemetry/community/issues/1537 (checkmark means reviewed/verified against spec definition):