billdeitrick / pypco

A Python client for the Planning Center Online API.
MIT License

Optional conversion of JSON from API methods to custom objects #51

Open 07pepa opened 2 years ago

07pepa commented 2 years ago

With a small modification to the API methods, a user-supplied (dependency-injected) callable could convert/validate the JSON returned from the API and potentially turn it into a desired class (which is much nicer to work with than raw JSON, and you don't have to do the conversion in your own code).

A great example of this is Pydantic.

I could supply a constructor to the interface.

Here is a small snippet:

from enum import Enum
from pydantic import BaseModel

class PcoType(Enum):
    schedule = "Schedule"
    time = "PlanTime"

class PCOData(BaseModel):
    type: PcoType
    id: int

pco.get('/people/v2/schedules', preprocess=PCOData.parse_obj)  # only an example; not the real API

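To make the idea above concrete, here is a self-contained sketch of what a dependency-injected `preprocess` hook could look like. The `get` function and its `preprocess` keyword are hypothetical (not pypco's current API), and a stdlib dataclass stands in for Pydantic so the example runs without extra dependencies; Pydantic's `parse_obj`/`model_validate` would slot in the same way.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Optional


class PcoType(Enum):
    schedule = "Schedule"
    time = "PlanTime"


@dataclass
class PCOData:
    type: PcoType
    id: int

    @classmethod
    def from_json(cls, raw: dict) -> "PCOData":
        # Validation/coercion step; Pydantic would do this automatically.
        return cls(type=PcoType(raw["type"]), id=int(raw["id"]))


def get(url: str, preprocess: Optional[Callable[[dict], Any]] = None) -> Any:
    """Hypothetical request method: fetch JSON, then hand it to the injected callable."""
    raw = {"type": "Schedule", "id": 42}  # stubbed response; real code would do HTTP
    return preprocess(raw) if preprocess else raw


result = get('/people/v2/schedules', preprocess=PCOData.from_json)
```

With no `preprocess` argument the caller still gets the raw dict, so the hook is purely opt-in.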
pastorhudson commented 2 years ago

@billdeitrick's first version of pypco did this. You can look at it in the v0 branch, I think; it's still there. However, what happened was that I kept asking for endpoints that were not in the API documentation, and it wouldn't work. Plus, every time the API changed we had to generate new constructor code. It was high maintenance. How would we avoid these pain points?

07pepa commented 2 years ago

This is not a big issue, but it would help others customize the API and shorten their code.

07pepa commented 2 years ago

I'm also willing to add this.

billdeitrick commented 2 years ago

@pastorhudson is right, the original version of pypco worked somewhat like this. We autogenerated model classes from the API documentation endpoints with this script.

Once API versioning was introduced, this approach was no longer maintainable. That would have meant that we needed to do a release every time PCO made a change, while also supporting all past API versions. Otherwise we would be the barrier to accessing new endpoints and features. For comparison, even the Ruby wrapper put out by the PCO team doesn't do this.

But, I'm not opposed to exploring options for how we could make this work. We could probably use the same approach of generating classes from the documentation to create the API models.

Here are some thoughts I've had as to how we could go about this:

  1. Include models for all API versions, and choose the version you want at initialization. I don't like this option at all.
  2. The pypco user is required to generate the model classes they need when they initialize their project. I don't much like this option either; getting started is too hard.
  3. My current preferred option: we make having model classes optional, and provide a generation tool that you can use if you want. You can then generate the model classes only if you want and pass them around via dependency injection where you want to use them. If you don't want to do this, you can completely ignore this functionality.

I like option 3 the best (and that sounds along the lines of what is being suggested above), but am open to other ideas too.
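Option 3 could be sketched roughly as follows (all names here are hypothetical, not an agreed design): the client accepts an optional mapping from JSON:API resource-type strings to model constructors, and falls back to returning plain dicts when none is given, so existing code is untouched.

```python
from typing import Any, Callable, Dict, Optional


class Client:
    """Sketch of an opt-in model layer. `models` maps resource-type strings
    (e.g. "Group") to constructors, such as generated Pydantic classes."""

    def __init__(self, models: Optional[Dict[str, Callable[[dict], Any]]] = None):
        self.models = models or {}

    def deserialize(self, resource: dict) -> Any:
        # If no model is registered for this type, behave exactly as today.
        ctor = self.models.get(resource.get("type"))
        return ctor(resource) if ctor else resource


# Existing users: Client() keeps returning plain dicts.
plain = Client().deserialize({"type": "Group", "id": "1"})

# Opt-in users: pass generated classes via dependency injection.
typed_client = Client(models={"Group": lambda r: ("Group", r["id"])})
typed = typed_client.deserialize({"type": "Group", "id": "1"})
```

The key property is that the default path is byte-for-byte the current behavior; models only take effect when explicitly injected.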

pastorhudson commented 2 years ago

I prefer option 3 as well. Mostly because I have thousands of lines of code in production using the current interface.

I find that I'm using includes extensively when dealing with the pco api. For instance here's a groups api call I use: /groups/v2/groups?include=group_type,location&order=name

So the main object returned is a group but we're going to have group_type objects and location objects in the include. These need to be matched with the group objects so we know which include goes with which group.

In v0 of pypco I would have a nice interface for the main object returned, but then I had to do all the manual work with the include json to match it up and use that data. It got really messy.

Currently pypco adds the correct include data to the correct object when using pco.iterate().

If you can correctly turn all the includes into the correct object classes and connect everything up, that would be a dream to use for lots of things, at least until we step outside the documented, supported use cases.
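For reference, the matching described above follows the JSON:API convention PCO uses: each resource's `relationships` entries point at `(type, id)` pairs, and the top-level `included` array is resolved by looking up that pair. A minimal sketch with invented sample data (real payloads carry more fields, and to-many relationships would be lists):

```python
def attach_includes(payload: dict) -> list:
    """Resolve each resource's relationship linkages against the top-level
    `included` array, keyed by (type, id) as JSON:API specifies."""
    lookup = {(inc["type"], inc["id"]): inc for inc in payload.get("included", [])}
    resources = []
    for res in payload["data"]:
        res = dict(res)  # shallow copy so the original payload is untouched
        for name, rel in res.get("relationships", {}).items():
            linkage = rel.get("data")
            if linkage:  # to-one relationship; a to-many linkage would be a list
                res[name] = lookup.get((linkage["type"], linkage["id"]))
        resources.append(res)
    return resources


# Shaped like a response to /groups/v2/groups?include=group_type
payload = {
    "data": [{
        "type": "Group", "id": "1",
        "relationships": {"group_type": {"data": {"type": "GroupType", "id": "7"}}},
    }],
    "included": [{"type": "GroupType", "id": "7", "attributes": {"name": "Small Group"}}],
}
groups = attach_includes(payload)
```

This is the bookkeeping that v0 left to the caller and that `pco.iterate()` now does internally.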

Just some thoughts.

07pepa commented 2 years ago

Option 3 works for me as well. Ideally there would be no hard dependency on any particular tool (though if needed it can be done, and I won't evangelize Pydantic further, even though I use it professionally).

pastorhudson commented 2 years ago

I've heard great things about pydantic. Python 3.10 has data classes. Would it be too soon to drop support for Python < 3.10? It probably makes sense to support whatever version is shipping with most Linux distributions as long as they keep pace with 3.x.

billdeitrick commented 2 years ago

I think we should be supporting versions of Python that are currently receiving active support, so >= 3.7 at the moment. Pydantic seems to have broad support in the Python community, so I think that would be a good choice.

Like @pastorhudson pointed out, there is some complexity here. Includes will be tricky, but I think that's solvable. Overall the key is that we don't want to break anyone using pypco as-is today, but provide an easy path for auto-generating and deserializing to custom types if desired.

Personally, I would like to see async support added first, and then this.