polleyg opened 7 years ago

Since porting to 2.1.0, Dataflow is leaving datasets/tables behind in BigQuery when the pipeline is cancelled or when it fails. We were on 1.8.0/1.9.0 previous to this, and we never saw this before. We skipped 2.0.0, so we're unsure which version it was actually introduced in.

I cancelled a job (2017-10-08_18_35_30-13495977675828673253), and it left behind a dataset and table in BigQuery.
I haven't seen anything like this in 2.0.0; we run batch jobs on a daily basis and have restarted our streaming pipelines a few times now.
Is this in streaming, batch, or both? What is the delta between job cancellation time and table creation/update time? Is there a reproducible case?
Only seen it in batch so far, and cannot reproduce yet.
Still happening in 2.2.0 templated batch jobs on our side. We're currently managing it with cleanup scripts, but it's a PITA. I was wondering if it might be an idea to put an expiry on those datasets to clean them up automatically? I guess determining the right expiry might be difficult depending on how long a batch can run for, but a day seems safe. At least that would limit the number of leftover temp datasets to 24 if one were running a job every hour.
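For anyone scripting around this in the meantime: BigQuery supports a default table expiration on a dataset, so every table created in it is deleted automatically after the TTL. Here's a minimal sketch using the google-cloud-bigquery Java client (the dataset name is a placeholder, and note this expires the tables, not the dataset itself):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.DatasetInfo;

public class CreateTempDataset {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    // "my_temp_dataset" is a placeholder; every table created in it will be
    // deleted automatically 24 hours after its creation time.
    DatasetInfo info =
        DatasetInfo.newBuilder("my_temp_dataset")
            .setDefaultTableLifetime(24L * 3600 * 1000) // milliseconds
            .build();
    bigquery.create(info);
  }
}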
I was just thinking about this today because it happened yet again. Agreed, auto-expiry on the datasets makes sense.
So I did a little investigation, and it does look like that's actually implemented... not sure why it's still happening, though:
LOG.info("Creating temporary dataset {} for query results", tableToExtract.getDatasetId());
tableService.createDataset(
tableToExtract.getProjectId(),
tableToExtract.getDatasetId(),
location,
"Temporary tables for query results of job " + bqOptions.getJobName(),
// Set a TTL of 1 day on the temporary tables, which ought to be enough in all cases:
// the temporary tables are used only to immediately extract them into files.
// They are normally cleaned up, but in case of job failure the cleanup step may not run,
// and then they'll get deleted after the TTL.
24 * 3600 * 1000L /* 1 day */);
I think I'll try to do a bit more debugging of my own... P.S. Is this the correct forum to be discussing this?
dev@beam.apache.org is a good place, and you can also open a tracking issue on https://issues.apache.org/jira/projects/BEAM so people can follow the bug.
I am also facing this issue. When a job failed, I observed that the table got deleted after 1 day (presumably via the TTL above), but the dataset still exists. Can we have an option to clean up the temp dataset and tables immediately if the job fails?
The method cleanupTempResource(options.as(BigQueryOptions.class)) is responsible for cleaning up the temp dataset and tables, but it is only executed when the job succeeds, as part of the public List<BoundedSource<T>> split(long desiredBundleSizeBytes, PipelineOptions options) method call. On failure we also need to clean up, ideally controlled by a pipeline option. Does anyone have a better idea? One workaround is a best-effort sweep like the sketch below.
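An application-level workaround is to run a sweep whether or not the job succeeded, e.g. from a finally block around pipeline.run().waitUntilFinish(). This is only a sketch: it assumes the leftover datasets carry a temp_dataset_ name prefix (check what your SDK version actually generates) and it uses the google-cloud-bigquery client directly rather than anything in Beam:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class TempDatasetSweeper {
  // Assumed name prefix for Beam's temporary query datasets; verify against your SDK version.
  private static final String TEMP_PREFIX = "temp_dataset_";
  private static final long MAX_AGE_MILLIS = 24L * 3600 * 1000; // 1 day

  public static void sweep(String projectId) {
    BigQuery bigquery =
        BigQueryOptions.newBuilder().setProjectId(projectId).build().getService();
    for (Dataset dataset : bigquery.listDatasets(projectId).iterateAll()) {
      String name = dataset.getDatasetId().getDataset();
      if (!name.startsWith(TEMP_PREFIX)) {
        continue;
      }
      // listDatasets returns partial metadata, so reload to get the creation time.
      Dataset full = bigquery.getDataset(dataset.getDatasetId());
      if (full != null
          && full.getCreationTime() != null
          && System.currentTimeMillis() - full.getCreationTime() > MAX_AGE_MILLIS) {
        // deleteContents() also removes any tables still inside the dataset.
        bigquery.delete(full.getDatasetId(), BigQuery.DatasetDeleteOption.deleteContents());
      }
    }
  }
}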
I have this as well, Python SDK version 2.27 running on Google Dataflow. See attached.
It would be nice if the tables would at least expire automatically, or if the temp dataset name were configurable. Or something else.
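On the configurability point: unless I'm misreading the newer SDKs, the Java SDK lets you point query reads at a dataset you control via withQueryTempDataset (and the Python ReadFromBigQuery transform appears to take a similar temp_dataset argument). Combined with a default table expiration on that dataset, leftovers clean themselves up. A sketch, assuming a recent Beam Java SDK and a placeholder query:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class QueryWithTempDataset {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    // "my_temp_dataset" must already exist in the query's location; ideally
    // give it a default table expiration so any leftovers self-destruct.
    p.apply(
        BigQueryIO.readTableRows()
            .fromQuery("SELECT 1 AS x") // placeholder query
            .usingStandardSql()
            .withQueryTempDataset("my_temp_dataset"));
    p.run().waitUntilFinish();
  }
}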
Check out https://beam.apache.org/community/contact-us/ for ways to reach the Beam community with bug reports and questions.