survos / platform-api-r

Sample programs demonstrating calls to the API

Method for parsing Criteria to API GET Endpoints #6

Open phillc73 opened 8 years ago

phillc73 commented 8 years ago

We need to discuss how you wish to handle passing criteria from R to the API GET endpoints.

As there are multiple possible criteria, all of which need to be passed in an individual criteria parameter, there is the potential for a big mess.

As I see it, there are three options:

  1. Allow the user to enter any criteria they wish, free-form. I would need to update each package function to check the entered "criteria" against the named options for each endpoint, e.g. max_per_page, project_code, page etc. Anything passed to the function from R which didn't match a named option would be treated as a criterion instead. There would be no further checking of the criteria's validity; it would be assumed the end user was entering valid data.
  2. Define a specific set of criteria to be used for each API endpoint. Think of your use cases, and then define the relevant criteria. Once we have the criteria defined for each API endpoint, or even just the most important ones to start with, I can update the package functions.
  3. Make the end user do everything in their R script. Just allow them to enter the relevant endpoint options and then filter the returned data by criteria, e.g. use the project_code option for the members endpoint, which returns all matching records for that project code, and then the end user filters the data by member_id.
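To make option 1 concrete, the argument handling could be sketched roughly like this in R. The function name and the list of named options below are illustrative, not the package's actual API: anything matching a named endpoint option is kept as an option, and everything else is passed through as an assumed-valid criterion.

```r
# Hypothetical sketch of option 1: split user-supplied arguments into
# recognised endpoint options and free-form criteria. The option names
# are the examples from this discussion, not a definitive list.
split_criteria <- function(..., named_options = c("max_per_page", "project_code", "page")) {
  args <- list(...)
  is_option <- names(args) %in% named_options
  list(
    options  = args[is_option],    # recognised endpoint options
    criteria = args[!is_option]    # everything else: assumed-valid criteria
  )
}

parsed <- split_criteria(project_code = "demo", page = 2, member_id = "abc123")
names(parsed$options)   # "project_code" "page"
names(parsed$criteria)  # "member_id"
```

The work in option 1 is then in wiring a splitter like this into every package function, which is why it is the most development effort.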

I think option 2 is probably a nice middle ground. Option 1 would be a lot of work, and option 3 is perhaps asking too much of your end users.

If you would like to define some prioritised specific criteria for various endpoints, I can move ahead with adding them to the package functions.

tacman commented 8 years ago

When you say option 1 is a lot of work, you mean for the R programmer, right? Not for the library, which would simply pass the data to the API and send it back, right?

That's probably fine. I'll be asking the R programmer to start working on the scripts today.

Tac


phillc73 commented 8 years ago

Option 1 is the most effort for library development, yes.

Option 3 is the most effort for the end user: they would have to do the most R coding to extract their required data. It is the least effort for development of the library.
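The member_id example from option 3 would look something like this on the end-user side (the data frame stands in for an API response; column values are made up): the package returns every record for the project code, and the user filters in R.

```r
# Hypothetical option 3 usage: the package call returns all records for a
# project code, and the end user filters by member_id themselves.
members <- data.frame(                      # stand-in for an API response
  member_id    = c("abc123", "def456"),
  project_code = c("demo", "demo"),
  stringsAsFactors = FALSE
)

wanted <- subset(members, member_id == "abc123")
wanted$member_id  # "abc123"
```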

Option 2 seems like a good middle ground: define common criteria covering 85% of use cases, which will probably be only 35% of the work of Option 1.
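Under option 2, each endpoint would carry a small whitelist of supported criteria, something like the sketch below. The endpoint and criterion names here are invented for illustration; the real lists would come from the prioritised use cases discussed above.

```r
# Hypothetical sketch of option 2: each endpoint declares the criteria it
# supports, and anything outside that set is rejected with a clear error.
endpoint_criteria <- list(
  members  = c("member_id", "status"),
  projects = c("project_code", "created_after")
)

validate_criteria <- function(endpoint, criteria) {
  allowed <- endpoint_criteria[[endpoint]]
  unknown <- setdiff(names(criteria), allowed)
  if (length(unknown) > 0) {
    stop("Unsupported criteria for '", endpoint, "': ",
         paste(unknown, collapse = ", "))
  }
  criteria
}

ok <- validate_criteria("members", list(member_id = "abc123"))
```

The advantage over option 1 is that typos and unsupported criteria fail fast in R with a useful message, instead of being forwarded to the API unchecked.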