Perhaps https://github.com/googleapis/google-cloud-python/issues/7831 is related?
Thank you @dlstadther for raising the issue! I will try to reproduce it by running the program on repeat, but it seems really hard to reproduce deterministically. Do you have any local log info from when the program got stuck? That would make it easier to pinpoint what went wrong.
Definitely sounds like https://github.com/googleapis/google-cloud-python/issues/7831#issuecomment-555021505
At one time we had some default client-side timeouts to make the client more resilient to this sort of thing, but it's really difficult to pick a default that works for all APIs in BigQuery. Maybe a default timeout just for jobs.get could solve this one?
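For illustration (not part of the original thread): a caller can already bound an individual jobs.get request by passing a per-request timeout to Client.get_job, and the suggestion above would amount to applying a similar default inside the client itself. The job ID and timeout value below are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# A per-request timeout (seconds) makes a stuck HTTP response raise an
# exception instead of blocking this call forever.
job = client.get_job("your-existing-job-id", timeout=60)
print(job.state)
```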
@Linchin, we have application info-level logging, but nothing specific to the BigQuery client. A log statement occurs just before job.result() (telling us the custom job_id that will be submitted), and we can see in the BigQuery console that the job_id exists and the query completed, but the application log statement immediately following job.result() does not get emitted.
I don't have hard numbers easily accessible regarding frequency or percentage of occurrence, but it is seemingly rare.
@tswast, is there anything we can do to see your suggestion implemented in an upcoming release version?
Would your proposal of a default client-side timeout behave similarly to an HTTP request timeout (how long to wait for a response) regardless of query completion state, or like the user specifying job.result(timeout=...), where the query result is expected to complete within a given duration? The former would be preferred for a general default.
> Would your proposal of a default client-side timeout behave similarly to an HTTP request timeout (how long to wait for a response) regardless of query completion state, or like the user specifying job.result(timeout=...), where the query result is expected to complete within a given duration? The former would be preferred for a general default.
My proposal is for an HTTP request timeout.
Query jobs could last several days if they are multi-job scripts or BQML jobs, so it wouldn't make sense to me to apply a default there.
That said, if you do know your query will complete in a certain amount of time, we do turn the overall timeout into an HTTP request timeout, so it would prevent things from getting stuck if you have an idea of how long the query should take.
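A minimal sketch of that workaround, assuming the caller can estimate an upper bound for the query; the 300-second value is only an example, not a recommendation from the thread.

```python
import concurrent.futures

from google.cloud import bigquery

client = bigquery.Client()
job = client.query("SELECT 1")

try:
    # The overall timeout is also applied to the underlying HTTP requests,
    # so a stuck jobs.get call raises instead of hanging indefinitely.
    rows = job.result(timeout=300)
except concurrent.futures.TimeoutError:
    # Handle the overrun (cancel the job, retry, alert, etc.).
    job.cancel()
```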
> is there anything we can do to see your suggestion implemented in an upcoming release version?
I'm actively working with my teammates to get this implemented.
Thank you @tswast and @Linchin for promptly working to address this issue and already releasing a new public version which includes the fix!
We will be upgrading our environments and monitoring to ensure this issue is gone. Thanks!
(This issue was first raised with a Google employee, who recommended this bug report also be submitted here.)
Expected Behavior

QueryJob.result() always returns when a submitted query completes.

Issue
At random and unexplainable frequencies, QueryJob.result() runs indefinitely even after the submitted job_id is shown as completed in BigQuery's Job History.

Anecdotal observation which motivated this outreach: a job completed in BigQuery, yet its job.result() call hung and required manual termination over 6 days later, on 2024-05-06.

We are implementing process-level timeouts to prevent this specific issue from running indefinitely, but this is a band-aid solution to a bug in the google-cloud-bigquery Python package.
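As a rough sketch of the kind of process-level timeout mentioned above (not the reporter's actual code): the query runs in a child process that the parent terminates if it exceeds a deadline. Names such as run_query and PROCESS_TIMEOUT_SECS are illustrative.

```python
import multiprocessing

from google.cloud import bigquery

PROCESS_TIMEOUT_SECS = 3600  # assumed upper bound for the whole query


def run_query(sql: str) -> None:
    # Build the client inside the child process so nothing needs pickling.
    client = bigquery.Client()
    job = client.query(sql)
    job.result()  # the call that occasionally hangs


if __name__ == "__main__":
    proc = multiprocessing.Process(target=run_query, args=("SELECT 1",))
    proc.start()
    proc.join(PROCESS_TIMEOUT_SECS)
    if proc.is_alive():
        # Presume the child is stuck in job.result(); kill it and let the
        # parent decide whether to retry or alert.
        proc.terminate()
        proc.join()
```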
Environment details
google-cloud-bigquery version: 3.21.0

Steps to reproduce
We are unable to deterministically reproduce this issue.
Code example
Simplified code example depicting the objects and methods used to submit the query and wait for its result:
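The original snippet was truncated; the following is a reconstruction of the pattern described in this thread (a custom job_id, a log line before job.result(), and a log line after it that is sometimes never emitted). Identifiers such as custom_job_id are illustrative.

```python
import logging
import uuid

from google.cloud import bigquery

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

client = bigquery.Client()
custom_job_id = f"my-app-{uuid.uuid4()}"

logger.info("Submitting query with job_id %s", custom_job_id)
job = client.query("SELECT 1", job_id=custom_job_id)

# The BigQuery console shows this job_id as completed, yet this call
# occasionally never returns.
rows = job.result()

logger.info("Query %s finished", custom_job_id)  # sometimes never emitted
```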