aliev / aioauth

Asynchronous OAuth 2.0 provider for Python 3
https://aliev.me/aioauth
MIT License
212 stars 19 forks

Project Restructure #28

Closed aliev closed 3 years ago

aliev commented 3 years ago

I have some ideas, but mostly thoughts, I'm still studying the library, so sorry if they don't make sense 😅 haha

I was thinking maybe providing some store implementations + an in-memory implementation (to play with; at the moment the in-memory one is part of the tests), kind of like fastapi-users does [1] [2]

Also, I was wondering if this should be aio-auth or asgi-auth; being ASGI means it should work with Starlette, FastAPI, and any other ASGI library (kind of like WSGI with Flask, Django, etc.). But this is mostly a thought rather than an idea, and I'm not sure of the impact, though ASGI servers do provide the request object [3]. Don't hold on to that thought, though; maybe it's better to provide interfaces for different frameworks, like it is documented now in the README. I still have to weigh the different approaches, and whether an ASGI approach is even possible (I think it would work for middlewares, but providing the endpoints may be super difficult, so maybe it's better to keep each framework implementation isolated).

Maybe the user should do: pip install aioauth[sqlalchemy,starlette] ???

For the docs I'd use mkdocs.

I'd also recommend using the pyproject.toml format, or separating the requirements into normal + development requirements (wheel and twine are not "test", so semantically it's a bit confusing).

I maintain commitizen, so if you were to use conventionalcommits + semver, you could get automatic releases + changelog [4] [5]

Thanks a lot for the package and I hope I can contribute more. If I come up with any real useful idea I'll open a ticket and we can discuss there 👍🏻

Originally posted by @Woile in https://github.com/aliev/aioauth/issues/27#issuecomment-809474440

aliev commented 3 years ago

@Woile I created this issue to continue the discussion here, because once I merge the PR we might lose the discussion :)

synchronizing commented 3 years ago

@Woile

I was thinking maybe providing some store implementations + an in-memory implementation (to play with; at the moment the in-memory one is part of the tests), kind of like fastapi-users does [1] [2]

I think this is a solid idea. We could either (a) default to MemoryDB, or (b) allow the user to enable the memory database via the settings.

Also, I was wondering if this should be aio-auth or asgi-auth; being ASGI means it should work with Starlette, FastAPI, and any other ASGI library (kind of like WSGI with Flask, Django, etc.). But this is mostly a thought rather than an idea.

I appreciate the thought, but I personally still side with 'aioauth' - primarily because there is no discrimination between server gateway interfaces as long as they are asynchronous. The way I see it is that 'asgi' is a subset of 'aio' libraries, and 'aioauth' doesn't care.

Don't hold on to that thought, though; maybe it's better to provide interfaces for different frameworks, like it is documented now in the README.

@aliev and I spoke briefly about this. The goal is to implement a few helper methods that will allow easier use of 'aioauth' with different frameworks.

I still have to weigh the different approaches, and whether an ASGI approach is even possible (I think it would work for middlewares, but providing the endpoints may be super difficult, so maybe it's better to keep each framework implementation isolated).

At the end of the day, I think the end user will have to implement their own needs regardless. Especially once OpenID Connect gets implemented, things like scopes and claims will need to be programmed manually by the end user.

For the docs I'd use mkdocs.

I'm working on the docs right now, but using Sphinx. I haven't used mkdocs in the past; any reasoning for this choice? I'm interested in learning more, and if it's good, I can switch #21 over from Sphinx to mkdocs.

I'd also recommend using the pyproject.toml format, or separating the requirements into normal + development requirements (wheel and twine are not "test", so semantically is a bit confusing).

We are currently doing that via the "requirements" folder, I believe? Unsure what you mean here, sorry.

I maintain commitizen, so if you were to use conventionalcommits + semver, you could get automatic releases + changelog [4] [5]

This sounds interesting. I haven't spoken with @aliev about this yet, but I was going to suggest pbr + GitHub Actions for automating PyPI releases and the like. Will check out the project you linked.

If I come up with any real useful idea I'll open a ticket and we can discuss there 👍🏻

Awesome man. Thanks for the previous PR. @aliev has set up a nice repo and project here, and like you, I also want to see this project flourish. Companies like Auth0, Okta, etc. are expensive, and having seen successful executions of open source OAuth/OpenID Connect projects in the synchronous world, I know it's doable in the async world too.

(Written via mobile. Excuse any spelling/grammar mistake.)

woile commented 3 years ago

We could either (a) default to MemoryDB, or (b) allow the user to enable the memory database via the settings.

I would go for (b). I wouldn't make it the default; that may give the wrong impression. Better to show that it's an in-memory adapter, and even throw a warning when it's used: `warning: do not use MemoryDB in production environments`.
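Something like this sketch, assuming a hypothetical `MemoryStorage` class — the class name and fields are illustrative, not aioauth's actual storage interface:

```python
import warnings


class MemoryStorage:
    """Hypothetical dict-backed in-memory adapter (illustration only)."""

    def __init__(self):
        # Loudly discourage production use, as suggested above.
        warnings.warn(
            "do not use MemoryStorage in production environments",
            UserWarning,
            stacklevel=2,
        )
        self.clients = {}  # client_id -> client record
        self.tokens = {}   # access_token -> token record


# Instantiating the adapter emits the warning once.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    storage = MemoryStorage()

print(str(caught[0].message))
```

This keeps the adapter available to users while making the "not for production" caveat impossible to miss.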

primarily because there is no discrimination between server gateway interfaces as long as they are asynchronous

Makes sense.

The goal is to implement a few helper methods that will allow easier use of 'aioauth' with different frameworks.

This could just be documentation (at least at the beginning), to reduce the maintenance burden

any reasoning for this choice?

I found mkdocs + mkdocs-material [1] much simpler than anything else. When writing documentation and contributing, I always find Markdown simpler. IIRC for Sphinx you create a folder with Python files + a Makefile or something like that. With mkdocs you just have a docs/ folder filled with md files. I think it's flexible enough without sacrificing simplicity. You can see Starlette's configuration and the result, and there's a GitHub Action to auto-deploy; I think you can copy-paste it, and if you have a docs/ folder with .md files + a basic mkdocs.yaml, it should just work.
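For reference, a minimal configuration along those lines might look like this (the site name and nav entries are just placeholders):

```yaml
# mkdocs.yml — minimal setup; docs/index.md becomes the landing page
site_name: aioauth
theme:
  name: material
nav:
  - Home: index.md
```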

Unsure what you mean here

I would rename requirements/test.txt -> requirements/dev.txt, but it's super minor. Or use pyproject.toml:
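A sketch of what that could look like with PEP 621 metadata — the dependency lists and extras below are illustrative, not the project's actual requirements:

```toml
[project]
name = "aioauth"
version = "0.1.0"
dependencies = []

[project.optional-dependencies]
# "dev" replaces the old requirements/test.txt split (wheel/twine live here)
dev = ["pytest", "pytest-asyncio", "wheel", "twine"]
# extras would enable: pip install aioauth[sqlalchemy,starlette]
sqlalchemy = ["sqlalchemy"]
starlette = ["starlette"]
```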

This sounds interesting

Check it out, I wrote a small article. I would say the only price is to stick to a commit standard, like: https://www.conventionalcommits.org/ and then you are ready to use commitizen, which will bump the version in any file present + create tag + changelog. There's a github action that also automates everything, you can see it here: https://github.com/commitizen-tools/commitizen-action
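For illustration, the commitizen configuration can also live in pyproject.toml (the version number and file path below are placeholders):

```toml
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.1.0"  # bumped automatically by `cz bump`
tag_format = "$version"
version_files = ["aioauth/__init__.py:__version__"]
```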

and I think like you I also want to see this project flourishing

Agreed. For my personal project it's a bit big (OAuth itself, not the library), but I'm evaluating using it anyway as a way to internalize OAuth, contribute, and help grow it. Still, I'm constrained by time, so I hope I can invest some of it here. I'd really like to have some simple, interoperable tools that provide identity + OAuth + OpenID Connect, which you can integrate in a progressive manner.

aliev commented 3 years ago

@Woile

I was thinking maybe providing some store implementations + an in-memory implementation (to play with; at the moment the in-memory one is part of the tests), kind of like fastapi-users does [1] [2]

fastapi-users offers a fixed model structure, which would take away the freedom of choice in our case. We do not have any requirements for a specific ORM or DB backend; each user of the library decides what is best for them to use, and we only offer the basic structure of the models, taken from the RFC. I suggest making good documentation with different user stories instead.

Having a simple example based on an "in-memory database" is great for a start, but I was not able to find a good in-memory database; for this reason, in the tests, we store everything in standard Python data structures.

woile commented 3 years ago

I suggest making good documentation with different user stories instead

This is perfectly fine, and I think it's the best way to start: you can monitor the package's growth and decide (if wanted) on a future strategy. For example, if many users were using sqlalchemy + databases from a gist, it could be formalized into either a new package or "adapters" inside this library to simplify devs' lives. But yes, it's not needed right away.

having a simple example based on "in memory database" is great for start, but I was not able to find good "in-memory database", for this reason, in tests, we store everything in standard data structures of Python.

Yes, the db in the tests is a valid in-memory db. I would refactor it a bit to use dictionaries instead of lists, which would make it simpler (and maybe faster); I didn't mean using another library, basic Python data types are fine. But making it available to users, instead of using it only for tests, would mean devs could try and play with the library in their projects right away.
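As a rough sketch of that refactor — the class and method names here are hypothetical, not the actual test helpers — dictionaries keyed by token give O(1) lookups instead of scanning lists:

```python
import asyncio
from typing import Any, Dict, Optional


class InMemoryDB:
    """Hypothetical dict-backed store built from basic Python data types."""

    def __init__(self) -> None:
        # Keyed by access token / authorization code for direct lookup.
        self.tokens: Dict[str, Dict[str, Any]] = {}
        self.authorization_codes: Dict[str, Dict[str, Any]] = {}

    async def save_token(self, access_token: str, record: Dict[str, Any]) -> None:
        self.tokens[access_token] = record

    async def get_token(self, access_token: str) -> Optional[Dict[str, Any]]:
        return self.tokens.get(access_token)


async def demo() -> Optional[Dict[str, Any]]:
    db = InMemoryDB()
    await db.save_token("abc123", {"client_id": "client", "scope": "read"})
    return await db.get_token("abc123")


record = asyncio.run(demo())
print(record)
```

Exposing something like this to users (rather than keeping it test-only) would let devs play with the library in their projects right away.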

aliev commented 3 years ago

Yes, the db in the tests is a valid in-memory db. I would refactor it a bit to use dictionaries instead of lists, which would make it simpler (and maybe faster); I didn't mean using another library, basic Python data types are fine. But making it available to users, instead of using it only for tests, would mean devs could try and play with the library in their projects right away.

That would be great! Unfortunately, I haven't had enough time to put the tests in order :)

synchronizing commented 3 years ago

(a) default to MemoryDB, or (b) allow the user to enable the memory database via the settings.

I would go for (b). I wouldn't make it the default; that may give the wrong impression. Better to show that it's an in-memory adapter, and even throw a warning when it's used: `warning: do not use MemoryDB in production environments`.

I agree with this, sounds solid.

The goal is to implement a few helper methods that will allow easier use of 'aioauth' with different frameworks.

This could just be documentation (at least at the beginning), to reduce the maintenance burden

That's the plan right now! Working on it when I have free time from work/school over at #21.

any reasoning for this choice?

I found mkdocs + mkdocs-material [1] much simpler than anything else. When writing documentation and contributing, I always find Markdown simpler. IIRC for Sphinx you create a folder with Python files + a Makefile or something like that. With mkdocs you just have a docs/ folder filled with md files. I think it's flexible enough without sacrificing simplicity.

I'll definitely be checking out mkdocs then. Any suggestion for auto-docs? After a bit of research I've come across a few, but I'm unsure which one is the most robust. All in-code docs are currently written in the Google docstring format.

Unsure what you mean here

I would rename requirements/test.txt -> requirements/dev.txt, but it's super minor.

We currently have the split for test and docs - we could potentially merge them into dev.

This sounds interesting

Check it out, I wrote a small article. I would say the only price is to stick to a commit standard, like: https://www.conventionalcommits.org/ and then you are ready to use commitizen, which will bump the version in any file present + create tag + changelog. There's a github action that also automates everything, you can see it here: https://github.com/commitizen-tools/commitizen-action

I read through the article and docs for the project, and frankly, I really like this. @aliev if you would support the change over to commitizen, I'm fully supportive.

and I think like you I also want to see this project flourishing

... but still I'm constrained by time, so I hope I can invest some time on it.

Same here. Working on the project when I have the time. Hoping to see this project flourish.

woile commented 3 years ago

Any suggestion for auto-docs? After a bit of research I've come across a few, but unsure which one is the most robust. All in-code docs are currently written in the Google Docstring format.

I think the best is https://github.com/tomchristie/mkautodoc, which basically allows you to write any valid Markdown inside the docstring. This is nice, and similar to what Rust does by default.
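In case it helps, a minimal mkautodoc usage looks roughly like this — the module path below is illustrative, and the `mkautodoc` extension has to be listed under `markdown_extensions` in mkdocs.yml for the directive to be processed:

```markdown
<!-- docs/api.md — the ::: directive renders the object's docstring -->
### ::: aioauth.server.AuthorizationServer
    :docstring:
    :members:
```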