spotify / spark-bigquery

Google BigQuery support for Spark, SQL, and DataFrames
Apache License 2.0

Error: java.io.IOException: Too many tables and views for query: Max: 1000 #36

Closed: yu-iskw closed this 7 years ago

yu-iskw commented 7 years ago

Hi all,

I have been getting the error below since May 4, 2017. It seems that inserting data from Spark into a BigQuery temporary table fails. I suspect the error was caused by some recent change on the BigQuery side.
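
For context, the failing call is essentially the plain bigQuerySelect from this library. Here is a minimal sketch of the pattern; the project ID, bucket, dataset, column, and the TABLE_DATE_RANGE query are placeholders, not my actual job:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import com.spotify.spark.bigquery._

    object Repro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("bq-repro"))
        val sqlContext = new SQLContext(sc)

        // Billing project and GCS staging bucket, as in this repo's README.
        sqlContext.setBigQueryProjectId("my-project")        // placeholder
        sqlContext.setBigQueryGcsBucket("my-staging-bucket") // placeholder

        // bigQuerySelect submits a query job whose result is written to a
        // temporary table (spark_bigquery_<timestamp>_..., as in the log below).
        // If the legacy-SQL query expands to more than 1,000 tables, e.g. a
        // TABLE_DATE_RANGE over several years of daily tables, BigQuery
        // rejects the job with "Too many tables and views for query: Max: 1000".
        val df = sqlContext.bigQuerySelect(
          """SELECT uuid
            |FROM TABLE_DATE_RANGE([my-project:logs.events_],
            |                      TIMESTAMP('2014-01-01'),
            |                      TIMESTAMP('2017-05-04'))""".stripMargin)
        df.show()
      }
    }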

I will also report the error on the BigQuery issue tracker later.

My environment is as follows:

Best,

Error Message

17/05/04 13:00:30 INFO com.spotify.spark.bigquery.BigQueryClient: Destination table: {datasetId=XXXXXXXXXXX, projectId=XXXXXXXX, tableId=spark_bigquery_20170504130030_1067308100}
Exception in thread "main" java.util.concurrent.ExecutionException: java.io.IOException: Too many tables and views for query: Max: 1000
    at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
    at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
    at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:132)
    at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2381)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
    at com.spotify.spark.bigquery.BigQueryClient.query(BigQueryClient.scala:105)
    at com.spotify.spark.bigquery.package$BigQuerySQLContext.bigQuerySelect(package.scala:93)
    at com.mercari.spark.sql.SparkBigQueryHelper.selectBigQueryTable(SparkBigQueryHelper.scala:110)
    at com.mercari.spark.batch.UserProfilesTableCreator.fetchUUID(UserProfilesTableCreator.scala:148)
    at com.mercari.spark.batch.UserProfilesTableCreator.fetch(UserProfilesTableCreator.scala:45)
    at com.mercari.spark.batch.UserProfilesTableCreator.run(UserProfilesTableCreator.scala:24)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:28)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:34)
    at com.mercari.spark.batch.AbstractBatch.runWithRetry(AbstractBatch.scala:25)
    at com.mercari.spark.batch.UserProfilesTableCreator$.main(UserProfilesTableCreator.scala:239)
    at com.mercari.spark.batch.UserProfilesTableCreator.main(UserProfilesTableCreator.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Too many tables and views for query: Max: 1000
    at com.google.cloud.hadoop.io.bigquery.BigQueryUtils.waitForJobCompletion(BigQueryUtils.java:95)
    at com.spotify.spark.bigquery.BigQueryClient.com$spotify$spark$bigquery$BigQueryClient$$waitForJob(BigQueryClient.scala:134)
    at com.spotify.spark.bigquery.BigQueryClient$$anon$1.load(BigQueryClient.scala:90)
    at com.spotify.spark.bigquery.BigQueryClient$$anon$1.load(BigQueryClient.scala:79)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
    ... 44 more
17/05/04 13:00:31 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/05/04 13:00:31 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
yu-iskw commented 7 years ago

I found that the error is caused entirely by BigQuery's own specifications: a single query may reference at most 1,000 tables and views. It seems they changed this limit recently.
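
For anyone hitting the same limit, the workaround I would try is to keep each query under the 1,000-table cap and union the per-chunk results on the Spark side. A rough sketch, not a tested implementation; the dataset, column, and the 900-day chunk size are placeholders:

    import java.text.SimpleDateFormat
    import java.util.Calendar
    import org.apache.spark.sql.{DataFrame, SQLContext}
    import com.spotify.spark.bigquery._

    // Query at most `chunkDays` daily tables per BigQuery job (< 1,000 each),
    // then union the per-chunk DataFrames in Spark.
    def selectInChunks(sqlContext: SQLContext,
                       startDay: String, endDay: String,
                       chunkDays: Int = 900): DataFrame = {
      val fmt = new SimpleDateFormat("yyyy-MM-dd")
      val cal = Calendar.getInstance()
      cal.setTime(fmt.parse(startDay))
      val end = fmt.parse(endDay)

      var frames = List.empty[DataFrame]
      while (!cal.getTime.after(end)) {
        val chunkStart = fmt.format(cal.getTime)
        cal.add(Calendar.DAY_OF_MONTH, chunkDays - 1)
        if (cal.getTime.after(end)) cal.setTime(end)
        val chunkEnd = fmt.format(cal.getTime)
        frames ::= sqlContext.bigQuerySelect(
          s"""SELECT uuid
             |FROM TABLE_DATE_RANGE([my-project:logs.events_],
             |                      TIMESTAMP('$chunkStart'),
             |                      TIMESTAMP('$chunkEnd'))""".stripMargin)
        cal.add(Calendar.DAY_OF_MONTH, 1) // next chunk starts the day after
      }
      frames.reduce(_ unionAll _) // Spark 1.x API; use union(...) on Spark 2.x
    }

Each chunk still runs as a separate BigQuery job with its own temporary table, so this costs more jobs, but no single query touches more than the allowed number of tables.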