In the Google Cloud console, an error message on the Cloud Run page warned us that we were exceeding an API request limit. This was likely because we make a `get_table` request with every streaming insert, and there is a limit of 100 API requests per second per user per method (see Issue #96 in the Pitt-Google Broker repo).
This PR fixes the bug causing that error. Instead of making a `get_table` request, we specify the BigQuery `table_id` and table schema (`selected_fields`) and pass these parameters to `insert_rows()`. See the `insert_rows()` documentation for details.
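A minimal sketch of the pattern this PR adopts is below. The project, dataset, table name, and schema fields are hypothetical placeholders, not the broker's actual values; the key point is that declaring the schema once and passing it as `selected_fields` lets `insert_rows()` skip the per-insert schema lookup that generated a `get_table` call.

```python
from typing import Iterable, Mapping

# Hypothetical fully qualified table id ("project.dataset.table").
TABLE_ID = "my-project.my_dataset.classifications"

# Schema declared once as (name, type) pairs, so no get_table call is needed.
SCHEMA = [
    ("alertId", "INTEGER"),
    ("classifierName", "STRING"),
    ("probability", "FLOAT"),
]


def insert_classifications(rows: Iterable[Mapping]) -> list:
    """Stream rows to BigQuery without a per-insert get_table request."""
    # Deferred import; requires the google-cloud-bigquery package.
    from google.cloud import bigquery

    client = bigquery.Client()
    selected_fields = [bigquery.SchemaField(name, dtype) for name, dtype in SCHEMA]
    # Passing the table id string together with selected_fields means
    # insert_rows() already knows the schema and does not fetch the table.
    return client.insert_rows(TABLE_ID, rows, selected_fields=selected_fields)
```

The returned list contains any per-row insert errors; an empty list means all rows were accepted.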
These changes were tested by deploying a classifier instance with the trigger topic `elasticc-loop`. Classifications were successfully written to BigQuery, though note that the rate of alerts classified via `elasticc-loop` is not comparable to the classification rate expected during the elasticc2 challenge.