What would you like to happen?
The Storage API BigQuery sink already handles rows that do not conform to the table schema and forwards them to the failed-rows PCollection. We should handle the remaining failure cases: rows that are too large, and rows that violate any other BigQuery constraint.
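For context, a minimal sketch (against the Beam Java SDK as of this writing) of how a pipeline taps the sink's failed-rows output via `WriteResult.getFailedStorageApiInserts()`; the project/table name and the sample row are placeholders:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageApiInsertError;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class FailedRowsSketch {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();

    // Placeholder input; a real pipeline would read rows from a source.
    PCollection<TableRow> rows =
        pipeline.apply(
            Create.of(new TableRow().set("name", "example"))
                .withCoder(TableRowJsonCoder.of()));

    WriteResult result =
        rows.apply(
            BigQueryIO.writeTableRows()
                .to("my-project:my_dataset.my_table") // hypothetical table
                .withMethod(Method.STORAGE_WRITE_API)
                .withCreateDisposition(CreateDisposition.CREATE_NEVER));

    // Rows rejected by the sink are routed here instead of failing the bundle.
    // Today that covers schema mismatches; this issue asks for oversized rows
    // and other constraint violations to land here as well.
    PCollection<BigQueryStorageApiInsertError> failedRows =
        result.getFailedStorageApiInserts();

    // Example downstream handling: render each failure as
    // "error message: offending row".
    failedRows.apply(
        MapElements.into(TypeDescriptors.strings())
            .via(
                (BigQueryStorageApiInsertError err) ->
                    err.getErrorMessage() + ": " + err.getRow()));

    pipeline.run().waitUntilFinish();
  }
}
```

The point of routing these cases to the same dead-letter output is that user code like the above would not need to change: oversized and otherwise-invalid rows would simply appear in the existing failed-rows PCollection.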
Issue Priority
Priority: 2
Issue Component
Component: io-java-gcp