Create a .csv file with many invalid rows (e.g. a donations import file with 200+ rows, each with a non-existent payment account such as "BAD PAYMENT ACCOUNT" as its donation_payment_account value).
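A file like this can be generated with a short script. This is just a sketch: the exact columns a donations import page expects depend on your ActionKit configuration, so the email and donation_amount fields here are assumptions; only donation_payment_account is taken from the steps above.

```python
import csv

# Build a donations import file where every row references a payment
# account that does not exist, so every row should fail validation.
rows = [
    {
        'email': f'test{i}@example.com',          # assumed column
        'donation_amount': '5.00',                # assumed column
        'donation_payment_account': 'BAD PAYMENT ACCOUNT',
    }
    for i in range(250)
]

with open('upload.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```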
Launch a Python interpreter.
Import dependencies: from parsons.etl.table import Table and from parsons.action_kit import ActionKit
Create an ActionKit object ak with access to a sandbox ActionKit instance (e.g. ak = ActionKit(**credentials['action_kit_sandbox']))
Import the data from the .csv file: ak_table = Table.from_csv('path/to/upload.csv')
Set the import page: import_page='some-test-import-page'
Then, all on one line, run: res = ak.bulk_upload_table(ak_table, import_page=import_page); errors = ak.collect_upload_errors(res['results']); print(errors); print(len(errors))
The final commands are on a single line because the delay of a human entering terminal commands is long enough to hide the problem. Before this change, you will get fewer errors than the total number of invalid rows in the file. After the change, you will get as many errors as there are invalid rows.
Test steps:

from parsons.etl.table import Table
from parsons.action_kit import ActionKit

# ak, with access to a sandbox ActionKit instance
ak = ActionKit(**credentials['action_kit_sandbox'])
ak_table = Table.from_csv('path/to/upload.csv')
import_page = 'some-test-import-page'
res = ak.bulk_upload_table(ak_table, import_page=import_page); errors = ak.collect_upload_errors(res['results']); print(errors); print(len(errors))