Closed · godd9170 closed this issue 7 years ago
After some thought I've come to two potential conventions. I'm focusing mostly on the bulk one, as it's the more considerable problem to solve.

The first idea is to declare, in its own JSON object at the beginning of the payload, all the lookups that will ultimately exist in each of the rows:
```json
{
  "client_id" : "ABCDEFG1234567",
  "sf_object_id" : "Account",
  "lookups" : {
    "<object_name1>" : "<mapping_field1>",
    "<object_name2>" : "<mapping_field2>"
  },
  "rows" : [
    {
      "<field1>" : "<value>",
      "<field2>" : "<value>",
      "<mapping_field1>" : "<unique_value1>",
      "<mapping_field2>" : "<unique_value2>"
    },
    {
      "<field1>" : "<value>",
      "<field2>" : "<value>",
      "<mapping_field1>" : "<unique_value1>",
      "<mapping_field2>" : "<unique_value2>"
    }
  ]
}
```
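To make the tradeoff concrete, here is a minimal sketch of how a server might resolve lookups under this first convention. All names are hypothetical, and `query_ids` is a stand-in for a real Salesforce query; the point is that a shared lookup declaration means exactly one query per declared lookup object, no matter how many rows arrive.

```python
def resolve_lookups(payload, query_ids):
    """Resolve the declared lookups into SFDC Ids (Convention 1 sketch).

    `query_ids(object_name, field, values)` is a placeholder for a real
    Salesforce query; it returns {unique_value: sfdc_id}. Because every
    row shares the same lookup declaration, we issue one batched query
    per declared lookup object, regardless of row count.
    """
    # One batched query per declared lookup object.
    id_maps = {}
    for obj, field in payload["lookups"].items():
        values = {row[field] for row in payload["rows"] if field in row}
        id_maps[field] = query_ids(obj, field, values)
    # Swap each row's human-readable unique value for the resolved Id.
    resolved = []
    for row in payload["rows"]:
        resolved.append({
            key: id_maps[key][value] if key in id_maps else value
            for key, value in row.items()
        })
    return resolved
```

The query count here is bounded by the number of entries in `lookups`, which is what makes the algorithm simple.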
In the second convention, each row can look up as many objects, on as many different uniquely identified fields, as it pleases:
```json
{
  "client_id" : "ABCDEFG1234567",
  "sf_object_id" : "Account",
  "rows" : [
    {
      "<field1>" : "<value>",
      "<field2>" : "<value>",
      "<lookup_field>" : {
        "object" : "<sfdc_object>",
        "field" : "<field_name>",
        "unique_value": "<value>"
      }
    },
    {
      "<field1>" : "<value>",
      "<field2>" : "<value>",
      "<lookup_field>" : {
        "object" : "<sfdc_object>",
        "field" : "<field_name>",
        "unique_value": "<value>"
      }
    }
  ]
}
```
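For comparison, a sketch of resolution under the second convention (again with hypothetical names and a stand-in `query_ids`). The extra work is a first pass to discover which `(object, field)` pairs appear across the rows; queries can still be batched per pair, but the number of pairs is now data-dependent rather than declared up front.

```python
def resolve_inline_lookups(payload, query_ids):
    """Resolve per-row lookups (Convention 2 sketch), where each row
    embeds its own {object, field, unique_value} descriptor.

    `query_ids(object_name, field, values)` is a placeholder for a real
    Salesforce query returning {unique_value: sfdc_id}.
    """
    # Pass 1: gather the unique values wanted per (object, field) pair.
    wanted = {}
    for row in payload["rows"]:
        for value in row.values():
            if isinstance(value, dict) and "unique_value" in value:
                key = (value["object"], value["field"])
                wanted.setdefault(key, set()).add(value["unique_value"])
    # One batched query per distinct (object, field) pair.
    id_maps = {k: query_ids(k[0], k[1], vals) for k, vals in wanted.items()}
    # Pass 2: substitute the resolved Ids back into the rows.
    resolved = []
    for row in payload["rows"]:
        new_row = {}
        for field, value in row.items():
            if isinstance(value, dict) and "unique_value" in value:
                key = (value["object"], value["field"])
                new_row[field] = id_maps[key][value["unique_value"]]
            else:
                new_row[field] = value
        resolved.append(new_row)
    return resolved
```

The cost is one query per distinct `(object, field)` combination found in the data, which is the "multiple queries" concern noted in the tradeoff table below.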
As I see it, here are the tradeoffs:

Issue | Convention 1 | Convention 2 |
---|---|---|
Algorithm complexity | :white_check_mark: Much simpler, since we treat all rows as having the same lookups, with the same unique field as the identifier | :x: More strenuous, as any number of lookups against any number of unique fields means multiple queries to identify all the SFDC Ids |
Ease of use | :x: IMHO much more convoluted and difficult to understand | :white_check_mark: Allows the payload to be strictly data rows |
Flexibility | :x: In stark contrast to the first issue, the simplicity of the algorithm means less flexibility | :white_check_mark: Any combination of lookups to any combination of objects is very attractive, and since we're only loading into ONE object type, there is a realistic upper limit on how many lookup fields exist on that object |
A major component of this is also some alterations to the URL. I'm envisioning the following (which works with both of the proposed conventions above):

`api.saasli.com/{client_id}/bulk/{object}?config1=true&config2=false`

This means our payload is ONLY the data that's going to be dealt with: the URL dictates the method (bulk/single), the client performing the load, AND the object. Finally, any little configs that pop up as we develop (like the decision on whether to create new lookup records if none exist) can just be tacked on as optional query strings.
The above-mentioned URL is missing a value to indicate which field on the object represents the unique discerning field.

We're also going to need a place to specify the unique external id field on the object, so that we can upsert. I don't like the idea of tacking yet another value onto the path, but that's the only way I can ensure that it's included.
In those last two messages I think I'm talking about the same thing. The URL should look something like:

`api.saasli.com/{client_id}/bulk/{object}/{external_id_field}`

The `external_id_field` segment will represent the field that MUST be an external id field within Salesforce.
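A quick sketch of how that URL could be assembled, with the optional configs riding along as query strings. The `create_missing` flag is purely hypothetical, standing in for whatever configs emerge; only stdlib `urllib.parse` is used.

```python
from urllib.parse import urlencode

def bulk_url(client_id, sf_object, external_id_field, **configs):
    """Build the proposed bulk endpoint URL.

    Required pieces (client, object, external id field) travel as path
    segments; optional behaviour flags become query-string parameters.
    """
    base = (f"https://api.saasli.com/{client_id}"
            f"/bulk/{sf_object}/{external_id_field}")
    if configs:
        # Sort for a stable, cache-friendly query string.
        base += "?" + urlencode(sorted(configs.items()))
    return base
```

Keeping the required values in the path (rather than the query string) is what ensures the external id field can't be silently omitted.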
Our event endpoint is great, however it assumes we've got an object called `User_Usage_History__c` in every Salesforce org, and that's just not true. This issue is more of a forum to discuss the structure for such an endpoint. We'll make this API substantially more versatile if we can write data to ANY object type. The real difficulty is going to be associating this new record, via lookup relationships, to other arbitrary objects.