Closed leireroman closed 6 days ago
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @jeremydvoss @lzchen.
Thanks for your question @leireroman, the team will take a look and respond as soon as possible.
@leireroman
If you'd like to force flush your events telemetry, you can do so by using:

```python
from opentelemetry._logs import get_logger_provider

...
configure_azure_monitor(connection_string=connection_string)
track_event(event_name, message_dict)
...
# Flush any events still buffered by the batch processor before the process exits
get_logger_provider().force_flush()
```
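For context on why the flush is needed: the distro exports events through a batch processor that buffers records and exports them periodically on a background thread, so events queued just before a Databricks task's process exits can be dropped. The stdlib-only sketch below mimics that behavior; it is illustrative only, not the Azure SDK, and the `BatchSender` class and its methods are hypothetical names:

```python
import queue
import threading

class BatchSender:
    """Buffers events and exports them in batches, mimicking a batch processor."""
    def __init__(self, export_interval=5.0):
        self._queue = queue.Queue()
        self.exported = []           # stands in for the telemetry backend
        self._timer = threading.Timer(export_interval, self._export)
        self._timer.daemon = True
        self._timer.start()

    def track_event(self, name, properties):
        self._queue.put((name, properties))   # buffered, NOT sent yet

    def _export(self):
        while not self._queue.empty():
            self.exported.append(self._queue.get())

    def force_flush(self):
        self._timer.cancel()         # don't wait for the export interval
        self._export()               # drain the buffer synchronously

sender = BatchSender()
sender.track_event("task_finished", {"task": "parallel_1"})
# Nothing exported yet: the batch interval has not elapsed. If the process
# exited here, the event would be lost -- which matches the reported symptom.
assert sender.exported == []
sender.force_flush()                 # flush before the job/notebook exits
assert sender.exported == [("task_finished", {"task": "parallel_1"})]
```

This is why calling `force_flush()` at the end of each task makes event delivery deterministic instead of depending on whether the export interval happened to elapse before the task finished.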
This solved my problem. Thanks!
I have a job in Databricks that runs multiple tasks (some in parallel, others sequentially), and currently I'm sending telemetry using OpenCensus. As support for OpenCensus ends on 30 September, I'm transitioning to the Azure Monitor OpenTelemetry Python Distro.
In a functions notebook, I have the following helper functions. Then, in each task of the job, I call the function send_custom_event(event_name, message_dict), so that the telemetry of each task is sent to the customEvents table in Application Insights.

The issue I'm facing is that not all events are sent. Sometimes I receive the event for the first task but not for some of the parallel tasks; other times I receive neither the first task's event nor those of some of the parallel tasks.
Why is this happening? Is there a way to call flush() to force the events to be sent? That option was available in OpenCensus, and with it events were delivered reliably.
Thanks in advance