This is a complete rewrite of the gem with tests, full API coverage, and error reporting (for invalid method arguments/data and invalid API responses). You can now add multiple batches per job, so you don't need the "wait" hacks the operation methods require in the current version. You can also specify the concurrency mode (Parallel is the default). There are a lot of features here, but the gem keeps its simplicity. I've updated the README with examples.
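As a rough sketch of how multiple batches per job and a concurrency mode fit together (the class and method names here are illustrative stand-ins, not the gem's actual API):

```ruby
# Illustrative model only -- Job, add_batch, and concurrency_mode are
# assumed names for this sketch, not the rewrite's real interface.
class Job
  attr_reader :batches, :concurrency_mode

  def initialize(operation, concurrency_mode: :parallel)
    unless [:parallel, :serial].include?(concurrency_mode)
      raise ArgumentError, "invalid concurrency mode: #{concurrency_mode}"
    end
    @operation = operation
    @concurrency_mode = concurrency_mode
    @batches = []
  end

  # Multiple batches can be queued on one job, so no "wait" hack
  # between operations is needed.
  def add_batch(records)
    @batches << records
    self
  end
end

job = Job.new(:insert, concurrency_mode: :serial)
job.add_batch([{ name: "Acme" }]).add_batch([{ name: "Globex" }])
puts job.batches.size # => 2
```

Returning `self` from `add_batch` lets batch additions chain, which keeps multi-batch job setup to a single expression.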
One of the neat features is strong typing for batch and query results. When requesting the results of a batch that performed a create, update, delete, or upsert action, you'll receive an array of BatchResult objects with helpers for determining whether each record completed, failed, etc. With Ruby's built-in Array methods, filtering out failed results is super easy. For the results of a query action, you get an array of hashes keyed by symbols matching the Salesforce field names! Finally, no more sifting through fields by index.
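For example, filtering failed records out of batch results can look something like this. The BatchResult here is a minimal stand-in, and the helper names (`success?`, `error?`) are assumptions for illustration, not necessarily the gem's exact method names:

```ruby
# Minimal stand-in for a BatchResult with assumed helper names.
BatchResult = Struct.new(:id, :success, :error_message) do
  def success?
    success
  end

  def error?
    !success
  end
end

results = [
  BatchResult.new("001A", true,  nil),
  BatchResult.new("001B", false, "REQUIRED_FIELD_MISSING"),
  BatchResult.new("001C", true,  nil)
]

# Plain Array methods handle the filtering.
failed = results.select(&:error?)
puts failed.map(&:error_message).inspect # => ["REQUIRED_FIELD_MISSING"]
```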
I'm not too familiar with querying; I expected a list of result ids if a query returns more than 10,000 records, but that doesn't seem to be the case in my testing. I've already coded for that use case, but I'm puzzled that it doesn't work that way. I guess it's a 1:1 relationship between batches and results; I expected multiple results per batch based on the XML response.
I've kept support for the original operation methods, but I feel they should be removed.
I'll continue to post changes to this as I still have RDocs to write.
I've reviewed all the forks and included their changes in this rewrite, so don't worry! You can specify a host (so you can hit test.salesforce.com if you want), and the keys for the add-batch call are retrieved from the first hash in the data set. I easily push 250,000 or more records a day using this gem; looping over every record to figure out the keys is a nice feature, but with that kind of data volume it isn't optimal.
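The key-detection tradeoff above fits in a couple of lines of plain Ruby (this is the general idea, not the gem's internal code): taking the keys from the first hash is a single lookup, while scanning every record for the union of keys is an extra full pass over the data.

```ruby
data = [
  { name: "Acme",   industry: "Mfg" },
  { name: "Globex", industry: "Tech" }
]

# Fast path: assume every record shares the first record's keys.
keys_from_first = data.first.keys

# Slow path: union of keys across all records -- handles ragged data,
# but costs a full pass over hundreds of thousands of records.
keys_from_all = data.flat_map(&:keys).uniq

puts keys_from_first.inspect # => [:name, :industry]
```

When every hash in the data set has the same shape, the two approaches agree, and the first-hash shortcut avoids the extra pass.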
A lot of great stuff in here. Let me know what you think.