rchynoweth opened this issue 2 months ago
What happens if you omit the first count check? I'm trying to narrow down whether the issue comes from `spark.read` being called twice.
Interesting: it appears the last count returned is still incorrect even if I omit the first count. To summarize, I did the following: read the table without counting, updated the rows in BigQuery, then re-read the table and counted, and the count was still stale.
However, after a couple of minutes the same read returned the proper count as a DataFrame.
Hi @rchynoweth, I've encountered a similar issue. Since spark-bigquery-connector 0.34.0, the `enableReadSessionCaching` property defaults to `true` (and the corresponding `readSessionCacheDurationMins` defaults to 5 minutes). I'm not 100% sure, but in my case, if you read the same table again within 5 minutes (even if it changed in the meantime), Spark serves the data from the cached read session instead of re-reading it from BQ. I tried to clear the cache, but without success. Simply set this property to `false` when starting your Spark session and you should get the correct count. That said, an option to refresh the table should be added.
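A minimal PySpark sketch of this workaround, assuming the session-level `spark.datasource.bigquery.` option prefix described in the connector README; the table name is a placeholder:

```python
from pyspark.sql import SparkSession

# Disable the BigQuery read-session cache for the whole session.
# (Assumes connector options can be set at the Spark-conf level with the
# "spark.datasource.bigquery." prefix, per the connector README. On
# Databricks, set this in the cluster's Spark config instead of building
# a new session.)
spark = (
    SparkSession.builder
    .config("spark.datasource.bigquery.enableReadSessionCaching", "false")
    .getOrCreate()
)

# Subsequent reads should now hit BigQuery directly instead of a cached
# read session. "mydataset.test_read_write_table" is a placeholder name.
df = (
    spark.read.format("bigquery")
    .option("table", "mydataset.test_read_write_table")
    .load()
)
print(df.count())
```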
@MichalBogoryja do you have to set it in the Spark session settings rather than in the DataFrame read options? Setting it on the DataFrame read doesn't work for me.
This has been fixed and will be available in the next release. In the meantime, you can test it using the nightly build, e.g. `gs://spark-lib-nightly-snapshots/spark-3.5-bigquery-nightly-snapshot.jar`.
I am having an issue getting accurate counts when reading/writing to BigQuery from Databricks after installing the connector.
- Connector version: spark-3.5-bigquery-0.39.1.jar
- Apache Spark 3.5.0
- Scala 2.12
- Databricks 14.3 LTS
Code to replicate:
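(The original snippet wasn't captured here; below is a sketch reconstructing the steps from the description. The table name comes from the query mentioned below; the appended schema and the direct write method are assumptions.)

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

table = "mydataset.test_read_write_table"

# Initial read and count.
df = spark.read.format("bigquery").option("table", table).load()
print(df.count())

# Append a few rows to the same table. The single-column schema is an
# assumption for the sake of the sketch.
extra = spark.createDataFrame([(1,), (2,), (3,)], ["id"])
(
    extra.write.format("bigquery")
    .option("table", table)
    .option("writeMethod", "direct")
    .mode("append")
    .save()
)

# Re-read and count: the result still reflects the pre-write row count.
df2 = spark.read.format("bigquery").option("table", table).load()
print(df2.count())
```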
When I check the BQ table the rows are updated, but the change is not reflected in my DataFrame. If I use the `query` option instead of the `table` option and run `select count(1) from mydataset.test_read_write_table`, the counts are accurate. This looks like a caching problem, so I tried setting the `cacheExpirationTimeInMinutes` option to 0, but that seems to have no effect. However, if I set it to a positive integer, the count does become correct once that interval has elapsed.
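For reference, a sketch of the query-based read that returned the correct count; per the connector docs the `query` option requires `viewsEnabled` and a `materializationDataset`:

```python
# The query option materializes the result, bypassing the stale table read.
spark.conf.set("viewsEnabled", "true")
spark.conf.set("materializationDataset", "mydataset")  # placeholder dataset

counts = (
    spark.read.format("bigquery")
    .option("query", "select count(1) from mydataset.test_read_write_table")
    .load()
)
counts.show()
```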