runt18 / google-bigquery

Automatically exported from code.google.com/p/google-bigquery

Data Duplication on Load #500

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Call the load job API in parallel to load 9 local files into a BigQuery table 
(WRITE_APPEND)
2. Sometimes (not always), the table ends up with more rows than were loaded

What is the expected output? What do you see instead?

No data duplication

What version of the product are you using? On what operating system?

https://github.com/embulk/embulk-output-bigquery v0.3.0
google-api-ruby-client 0.9.4

Please provide any additional information below.

My load strategy is as follows (a rough sketch in code appears after the list):
1. Create a BigQuery temporary table named like LOAD_TEMP_xxxx first
2. Load local files into the temporary table in parallel with WRITE_APPEND
3. Issue a copy job to copy the temporary table into a target table.
4. Delete the temporary table
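
For reference, here is a minimal sketch of this four-step strategy written directly 
against google-api-ruby-client 0.9.x. The project, dataset, and table names are 
hypothetical, authentication is omitted, and real code would have to poll each load 
job until it reaches state DONE before issuing the copy:

    require 'google/apis/bigquery_v2'

    client  = Google::Apis::BigqueryV2::BigqueryService.new
    project = 'my-project'    # hypothetical
    dataset = 'my_dataset'    # hypothetical
    temp    = "LOAD_TEMP_#{Time.now.strftime('%Y%m%d_%H%M%S')}"
    target  = 'target_table'  # hypothetical

    # 1. create the temporary table with the target schema
    schema = Google::Apis::BigqueryV2::TableSchema.new(fields: [
      Google::Apis::BigqueryV2::TableFieldSchema.new(name: 'id', type: 'INTEGER')])
    client.insert_table(project, dataset, Google::Apis::BigqueryV2::Table.new(
      table_reference: Google::Apis::BigqueryV2::TableReference.new(
        project_id: project, dataset_id: dataset, table_id: temp),
      schema: schema))

    # 2. load local files into the temporary table in parallel with WRITE_APPEND
    #    (the full load-job body is shown further down the thread)
    threads = Dir.glob('data_*.jsonl').map do |path|
      Thread.new do
        body = { configuration: { load: {
          destination_table: { project_id: project, dataset_id: dataset, table_id: temp },
          write_disposition: 'WRITE_APPEND', source_format: 'NEWLINE_DELIMITED_JSON' } } }
        client.insert_job(project, body,
                          upload_source: path, content_type: 'application/octet-stream')
      end
    end
    threads.each(&:join)

    # 3. copy the temporary table into the target table
    client.insert_job(project, { configuration: { copy: {
      source_table:      { project_id: project, dataset_id: dataset, table_id: temp },
      destination_table: { project_id: project, dataset_id: dataset, table_id: target },
      write_disposition: 'WRITE_APPEND' } } })

    # 4. delete the temporary table
    client.delete_table(project, dataset, temp)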

Load job IDs
[job_-PV7F_Ykog2K51qxCIGLhnrzppI]
[job_42jMMhYPB7jSjijD6ANC2GwJMAQ]
[job_4JYQ-mEqFOCQre4Q45tkOV0C2tM]
[job_Quhqjcf1DuBQTYYXhWgfPXKjrek]
[job_WsuvUwiqR4oiqTj00Rix4-o_cks]
[job_YUrlgAAvw5LlAQn44Nqfeh6yYsU]
[job_gsxnUOnt0ckpWSkH6OoCjJji9rs]
[job_udRZJ9LAq4DZTU17gLOjgYH8Zvg]
[job_wskXdKapOCWyopEYX5ltCY5nGuc]

Copy job ID
[job_QcxlyoZ7kyAryvr4pMLtNPK40SA]

Response.statistics for Load Job IDs
[job_-PV7F_Ykog2K51qxCIGLhnrzppI] 
response.statistics:{:creation_time=>"1460517459320", 
:end_time=>"1460517466386", :start_time=>"1460517461783", 
:load=>{:output_bytes=>"1498575", :output_rows=>"6422", :input_files=>"1", 
:input_file_bytes=>"2502173"}}
[job_udRZJ9LAq4DZTU17gLOjgYH8Zvg] 
response.statistics:{:creation_time=>"1460517461612", 
:end_time=>"1460517468288", :start_time=>"1460517462329", 
:load=>{:output_bytes=>"1508380", :output_rows=>"6472", :input_files=>"1", 
:input_file_bytes=>"2519920"}}
[job_WsuvUwiqR4oiqTj00Rix4-o_cks] 
response.statistics:{:creation_time=>"1460517459634", 
:end_time=>"1460517467173", :start_time=>"1460517462437", 
:load=>{:output_bytes=>"1478179", :output_rows=>"6333", :input_files=>"1", 
:input_file_bytes=>"2468225"}}
[job_Quhqjcf1DuBQTYYXhWgfPXKjrek] 
response.statistics:{:creation_time=>"1460517550644", 
:end_time=>"1460517606856", :start_time=>"1460517587136", 
:load=>{:output_bytes=>"1468293", :output_rows=>"6288", :input_files=>"1", 
:input_file_bytes=>"207690"}}
[job_wskXdKapOCWyopEYX5ltCY5nGuc] 
response.statistics:{:creation_time=>"1460517459556", 
:end_time=>"1460517607503", :start_time=>"1460517591185", 
:load=>{:output_bytes=>"1500259", :output_rows=>"6426", :input_files=>"1", 
:input_file_bytes=>"210917"}}
[job_YUrlgAAvw5LlAQn44Nqfeh6yYsU] 
response.statistics:{:creation_time=>"1460517459460", 
:end_time=>"1460517606479", :start_time=>"1460517589730", 
:load=>{:output_bytes=>"1501680", :output_rows=>"6421", :input_files=>"1", 
:input_file_bytes=>"210137"}}
[job_gsxnUOnt0ckpWSkH6OoCjJji9rs] 
response.statistics:{:creation_time=>"1460517460098", 
:end_time=>"1460517606344", :start_time=>"1460517594260", 
:load=>{:output_bytes=>"1488999", :output_rows=>"6377", :input_files=>"1", 
:input_file_bytes=>"208597"}}
[job_4JYQ-mEqFOCQre4Q45tkOV0C2tM] 
response.statistics:{:creation_time=>"1460517462051", 
:end_time=>"1460517605459", :start_time=>"1460517594880", 
:load=>{:output_bytes=>"51060", :output_rows=>"219", :input_files=>"1", 
:input_file_bytes=>"8079"}}
[job_42jMMhYPB7jSjijD6ANC2GwJMAQ] 
response.statistics:{:creation_time=>"1460517459633", 
:end_time=>"1460517620484", :start_time=>"1460517598299", 
:load=>{:output_bytes=>"1497567", :output_rows=>"6426", :input_files=>"1", 
:input_file_bytes=>"211656"}}
[job_QcxlyoZ7kyAryvr4pMLtNPK40SA] 
response.statistics:{:creation_time=>"1460517623579", 
:end_time=>"1460517629572", :start_time=>"1460517623910"}

The total of output_rows across the load jobs' response.statistics is 51384, but 
fetching the actual row count with the get_table API returns 57672. Looking at the 
table, the data appears to be duplicated.

2016-04-13 03:20:23.264 +0000 [INFO] (0001:transaction): embulk-output-bigquery: transaction_report: {"num_input_rows":51384,"num_response_rows":51384,"num_output_rows":57672,"num_rejected_rows":-6288}

I first suspected a retry inside google-api-ruby-client, but with its DEBUG logging 
enabled there was no retry issued.

Original issue reported on code.google.com by seo.naot...@dena.jp on 13 Apr 2016 at 4:21

GoogleCodeExporter commented 8 years ago
The number of rows in the table currently seems to be 51384 (both via SQL and via 
the API). Could you please provide a bit more information on the API call you are 
performing that shows 57672 rows?

Original comment by epa...@google.com on 13 Apr 2016 at 5:00

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
> The number of rows in the table currently seems to be 51384 (both via SQL as 
well as via the API)

My colleague re-ran the load and removed the old table, since correct data was 
required for data analysis. That should be why.
I asked him to copy and keep the bad table next time (we encounter this 
phenomenon about 3 times per day).

> Could you please provide a bit more information on the API call that you are 
performing that shows 57672 rows?

The body of the load job API call is like this: 
https://github.com/embulk/embulk-output-bigquery/blob/7698ef320a78dd93ec79a512d518daec8865d3bc/lib/embulk/output/bigquery/bigquery_client.rb#L152-L170
        body = {
          configuration: {
            load: {
              destination_table: {
                project_id: @project,
                dataset_id: @dataset,
                table_id: table,
              },
              schema: {
                fields: fields,
              },
              write_disposition: 'WRITE_APPEND',
              source_format:         'NEWLINE_DELIMITED_JSON',
              max_bad_records:       0,
              field_delimiter:       nil,
              encoding:              'UTF-8',
              ignore_unknown_values: false,
              allow_quoted_newlines: false,
            }
          }
        }
        opts = {
          upload_source: path,
          content_type: "application/octet-stream",
        }

and this load job is issued in parallel against one temporary table (a sketch of 
issuing and polling one such job follows).
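
For completeness, each of these jobs is issued and then polled along these lines 
(a sketch, not the plugin's exact code; the polling interval is an arbitrary choice):

    # issue one load job for one local file
    opts = { upload_source: path, content_type: 'application/octet-stream' }
    job  = client.insert_job(@project, body, opts)

    # jobs.insert returns before the load completes, so poll jobs.get
    job_id = job.job_reference.job_id
    loop do
      job = client.get_job(@project, job_id)
      break if job.status.state == 'DONE'
      sleep 10
    end
    raise job.status.error_result.message if job.status.error_result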

Original comment by seo.naot...@dena.jp on 13 Apr 2016 at 5:58

GoogleCodeExporter commented 8 years ago
The get_table API call is just this: 
https://github.com/embulk/embulk-output-bigquery/blob/7698ef320a78dd93ec79a512d518daec8865d3bc/lib/embulk/output/bigquery/bigquery_client.rb#L380. 
Nothing special.
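
That call boils down to reading the num_rows field of the table metadata, roughly 
(a sketch using the same variable names as the snippet above):

    # fetch the table metadata and read the row count reported by BigQuery
    result   = client.get_table(@project, @dataset, table)
    num_rows = result.num_rows.to_i  # 57672 in the failing run vs. 51384 loaded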

Original comment by seo.naot...@dena.jp on 13 Apr 2016 at 6:01

GoogleCodeExporter commented 8 years ago
I found some information in the history of the table. I will provide an update 
with more details on the morning of 04/13 PDT.

Original comment by epa...@google.com on 13 Apr 2016 at 6:41

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
It occurred again. In this case, the correct number of rows was 3, but the table 
ended up with 6.

Load job IDs:
job_nbtQdZBoT0PzW9cW46NVohP40H0 (only 1 job because number of inputs was small)

Copy job ID:
job_gg2CvFl9fEH7XrMhJ3fZPp7cl4E

Response.statistics for Load Job IDs

[job_nbtQdZBoT0PzW9cW46NVohP40H0] 
response.statistics:{:creation_time=>"1460535126422", 
:start_time=>"1460535144895", :load=>{:output_bytes=>"838", :output_rows=>"3", 
:input_files=>"1", :input_file_bytes=>"412"}, :end_time=>"1460535155398"}

Report from embulk-output-bigquery:

{"num_input_rows":3,"num_response_rows":3,"num_output_rows":6,"num_rejected_rows
":-3}

Original comment by seo.naot...@dena.jp on 13 Apr 2016 at 8:41

GoogleCodeExporter commented 8 years ago

Original comment by wes...@google.com on 13 Apr 2016 at 3:40

GoogleCodeExporter commented 8 years ago
Looking at the first example (with 9 load jobs):
The job loading 6288 rows (job_Quhqjcf1DuBQTYYXhWgfPXKjrek) was submitted twice 
(possibly due to a retry?), which accounts for the discrepancy (57672 - 51384 = 
6288). The two job ids are:
job_EWDq-z0DQ9Ho5FpkeSvbm3_Iz0Q
job_Quhqjcf1DuBQTYYXhWgfPXKjrek
If you would like loads to be idempotent, you can supply a job_id to the load 
job.
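
For illustration, a minimal sketch of what supplying a client-generated job_id 
could look like with the hash body shown earlier in the thread (the naming scheme 
is just an example, not the plugin's actual behaviour):

    require 'digest/md5'

    # Derive the job id deterministically from the destination and input file so
    # that a retry of the same load re-uses the same id. A second jobs.insert with
    # an already-used id is rejected instead of silently creating a duplicate
    # load job. (A per-run component would normally be mixed in so that
    # intentional re-runs still get fresh ids.)
    job_id = "embulk_load_#{Digest::MD5.hexdigest("#{@dataset}.#{table}.#{path}")}"
    body[:job_reference] = { project_id: @project, job_id: job_id }
    client.insert_job(@project, body,
                      upload_source: path, content_type: 'application/octet-stream')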

Original comment by epa...@google.com on 13 Apr 2016 at 5:46

GoogleCodeExporter commented 8 years ago
Detailed advice about managing job retry can be found here:
https://cloud.google.com/bigquery/docs/managing_jobs_datasets_projects#managingjobs

Original comment by jcon...@google.com on 13 Apr 2016 at 5:57

GoogleCodeExporter commented 8 years ago
Hmm, looking at the DEBUG log of google-api-ruby-client, I could not find 
job_EWDq-z0DQ9Ho5FpkeSvbm3_Iz0Q.
But thank you for the information; I will try generating and supplying a job_id 
myself.

Original comment by seo.naot...@dena.jp on 13 Apr 2016 at 6:33

GoogleCodeExporter commented 8 years ago

Original comment by epa...@google.com on 13 Apr 2016 at 7:15