zvictor / brainyduck

🐥 A micro "no-backend" framework 🤯 Quickly build powerful BaaS using only your graphql schemas
https://duck.brainy.sh
GNU Affero General Public License v3.0

obtain the expanded schema without importing it to fauna #1

Open zvictor opened 4 years ago

zvictor commented 4 years ago

TLDR for the @fauna team: Compiled suggestions on how to improve Fauna and fix this issue can be found here.


Currently, in order to generate TS types (faugra generate-types and faugra build-sdk), faugra makes compromises that won't be acceptable to a general audience.

The problem

Given a schema file, faugra uploads its content to fauna in order to obtain the expanded schema that fauna generates from it.

Without uploading the schema to the cloud, the TS types would be incomplete, lacking the content that fauna adds to it.

Current solution

Putting it all together, in order to generate the TS types, faugra needs to:

  1. prepare the schema (which potentially comes from a merge of multiple files)
  2. upload the basic schema (requires credentials and can have serious unintended consequences)
  3. download the expanded one (requires credentials)
  4. run graphql-codegen
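As a rough illustration, steps 2 and 3 boil down to two HTTP calls against Fauna's GraphQL API. The /import endpoint and its mode parameter are real; pulling the expanded schema back via introspection is an assumption about one way the download could work, and the helper functions below are hypothetical, not faugra's actual code:

```javascript
const IMPORT_URL = 'https://graphql.fauna.com/import';

// Step 2: upload the basic schema. Override mode wipes and replaces the
// previous schema, which is exactly the "serious unintended consequence".
function buildImportRequest(schema, secret, mode = 'override') {
  return {
    url: `${IMPORT_URL}?mode=${mode}`,
    options: {
      method: 'POST',
      headers: { Authorization: `Bearer ${secret}` },
      body: schema,
    },
  };
}

// Step 3: one way to pull the expanded schema back is a standard
// introspection query against the GraphQL endpoint (also needs the secret).
function buildIntrospectionRequest(secret) {
  return {
    url: 'https://graphql.fauna.com/graphql',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${secret}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ query: '{ __schema { types { name } } }' }),
    },
  };
}

console.log(buildImportRequest('type User { name: String! }', 'secret').url);
```

Both calls require a valid key, which is the crux of the problem described below.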

As modularisation is a core principle of faugra, we need to repeat this process for each file individually (we had to give up on that because of this issue). But if we do not reset the database before pushing a new schema in, fauna will merge the content of the files: the last schema uploaded will, in practice, extend the content of all schema files pushed before it. Therefore, importing the schema in override mode is a must.

Considerations

Considering the performance and side effects of steps 2 and 3, I believe the TS types can't be generated "on save", as I initially planned. And, all things considered, I wonder if anyone would actually bother going through the hassle of setting up a tool that requires credentials and messes with their data.

So, we need to find a way to kill steps 2 and 3: we need to programmatically add the missing content to the basic schema instead of publishing it to the cloud. Where do we start? :upside_down_face:

zvictor commented 4 years ago

If we could programmatically expand the schemas ourselves, we would not need to upload them to fauna.

Since fauna appears to be running a loosely compliant version of openCRUD, I believe we could write a script that expands any initial schema by adding the openCRUD components. That would give us pretty much the same result as uploading the schema to fauna (and we could then customize it to be fully compliant).
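To make the idea concrete, here is a minimal sketch of what such an expansion script could emit for a single type. The generated field names (findXByID, createX, updateX, deleteX) follow what Fauna's GraphQL API produces; the helper itself is hypothetical and, as noted below, a real implementation would be a much bigger project:

```javascript
// Given a user-defined type name, emit the CRUD operations that Fauna
// would normally add to the schema server-side.
function expandType(name) {
  return `
type Query {
  find${name}ByID(id: ID!): ${name}
}

type Mutation {
  create${name}(data: ${name}Input!): ${name}!
  update${name}(id: ID!, data: ${name}Input!): ${name}
  delete${name}(id: ID!): ${name}
}`;
}

console.log(expandType('User'));
```

The hard part is everything this sketch ignores: input types, relations, indexes, pagination wrappers, and directives such as @relation and @unique.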

Sadly, building such a script would be a big project in itself.

I managed to find only 2 packages that already do this, but both add a lot of flavour on top: @graphback/codegen-schema and prisma-generate-schema.

They seem to achieve what we need, but they are wrapped in so many extra complexities serving the specific needs of their frameworks that simplifying them would be as hard as starting from scratch.

Therefore, we sadly remain dependent on the fauna team's goodwill to help us move forward.

@erickpintor @lregnier @n400 is there any chance you could advocate inside fauna for open sourcing the part of the code that handles the schema expansion? It's virtually impossible to build an extension like faugra while having to rely on inaccessible internals.

n400 commented 4 years ago

Hi @zvictor, Leo and Erick are pretty swamped right now with the upcoming release. I've added this to our roadmap and will discuss it with the Product and Engineering teams to see how feasible it is (being tracked internally as ROAD-245). Feel free to email or DM me on Slack for an update. Sorry that the project has been blocked :/

zvictor commented 2 years ago

Hi @n400! It looks like you are not working with Fauna anymore, which is very sad to hear 🥹 Do you know who I could contact to help me fight for the OSS community inside Fauna?

n400 commented 2 years ago

Hi @zvictor! It’s good to hear from you. I would try reaching out to @rts-rob, Head of DA. He’s passionate about the OSS community and started Fauna Labs.

rts-rob commented 2 years ago

Thanks @n400 !

Hi @zvictor - looping in @Shadid12 who has done some related work in tooling, all available in Fauna Labs.

You can also find us in our forums and Discord server!

zvictor commented 2 years ago

Thanks @n400!

@rts-rob @Shadid12 would you be able to check and report on ROAD-245? Is it still an open issue or is the work done there?

zvictor commented 2 years ago

To be very clear and make it easier for the Fauna team to help us here, I would like to present 2 proposals of solutions.

A) Publishing and Maintaining a Package

It would be great to have the Fauna code open sourced, for obvious reasons, but that is apparently out of our reach. However, that does not mean relevant pieces of code could not be open sourced as separate packages, right?

Somewhere inside Fauna, the schema we upload gets expanded to include the basic CRUD methods, either while it's still just a GraphQL schema or later, once it is being processed and converted into Collections and so on. Regardless of when/where that happens, we need those parts split out and published as independent packages.

Libraries like faugra and its kind can only be built on top of an environment that is predictable and reliable. Otherwise, "magic features" like the creation of the basic CRUD methods become unreachable, breaking the composability principle.

Having a package maintained by Fauna that we can call from our code in order to expand the schema ourselves would be great!

B) Adding a New Import Mode

POST /import?mode=dry-run

A new mode that would take the schema, validate it, and return the expanded/final schema from Fauna. Currently we achieve that result by pulling and then pushing the schema from Fauna: https://github.com/zvictor/faugra/blob/6e6ac349605bedb64c8aef1ca8e0dea43f1de80b/commands/generate-types.js#L35-L37

Needless to say, that's a terrible solution 🥲

The dry-run endpoint/mode must be publicly accessible (i.e. with no need to present a secret key). Otherwise, users wouldn't feel comfortable handing the secret keys of (possibly) production databases to tools that are supposed to run things in dry mode. Plus, it would add extra work in CI/CD and any other environment that would need to keep an extra key just for dry-run operations.
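For comparison, this is what a client call against the proposed mode could look like. Everything here is hypothetical: neither the dry-run mode nor keyless access to /import exists in Fauna today.

```javascript
// Build a request for the proposed keyless dry-run import.
// Note the absence of an Authorization header: that is the whole point.
function buildDryRunRequest(schema) {
  return {
    url: 'https://graphql.fauna.com/import?mode=dry-run',
    options: {
      method: 'POST',
      body: schema,
    },
  };
}

console.log(buildDryRunRequest('type User { name: String! }').url);
```

The response body would simply be the expanded schema, with nothing persisted server-side.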

zvictor commented 2 years ago

Some reflections on the proposals:

zvictor commented 2 years ago

Hi @rts-rob and @Shadid12! Did you get a chance to check the status of ROAD-245, and maybe the other comments in this thread as well?

I have been working on this framework for the last 2 years and I am very happy with what I have accomplished so far. I am also confident that many will love it and that the Fauna community will benefit directly if it ever becomes popular.

I never dared to actually launch/promote this framework anywhere because the UX of it was not quite there yet, but I would like to finally get it to the next level soon: I plan on renaming and rebranding the whole project, working on #10, and then making a decent launch.

All of it has been waiting since April 2020 on this one issue right here. Until #1 is fixed, I don't believe the UX this tool provides will be of any use to almost anyone.

So I really hope I can hear back from you on what we can do to move forward together. 🤘

ghost commented 2 years ago

I think the realistic approach would be to extract the faunadb server from their docker image, decompile it, and write a tool to replicate the functionality.

The main issue I see with this approach is that we have to manage this, instead of fauna.

I don’t think there will be too much support on this from fauna directly.

edit: After going through the decompiled source, it looks like it would be a huge undertaking to convert this to typescript.

Running faugra through the FaunaDev docker image with the config set to bypass the cache (60 sec timeout), it works the same as in fauna, with proper validation.

As for fixing steps 2 and 3: the user would need docker installed and would run 1-2 commands.

Since the db's default key is known from the config, faugra could create a db with a unique hash, generate its secret, import the schema, download the expanded one, and delete that db.

zvictor commented 2 years ago

Thank you for the deep investigation and for sharing your thoughts with us, Daniel!

I don’t think there will be too much support on this from fauna directly.

Speaking from experience, we can't expect any level of support/collaboration from them in any direction we go. We are on our own, and waiting for things to improve on their side has proven to be a mistake on numerous occasions.

it looks like it would be a huge undertaking to convert this to typescript.

Likely, yes. Plus, we would need to keep track of changes in a complex system written in another language (Java?) and then port them to our system whenever it changes. That sounds very manual and prone to errors, unfortunately 🥹

Running faugra through FaunaDev docker with the config set to bypass the cache(60 sec timeout) it works the same as in fauna with proper validation. The user should have docker installed, run 1-2 commands.

It's a great solution when docker is available. The problem is that our ultimate goal is to run npx faugra build anywhere in the build chain, and requiring docker and image pulls would be a deal-breaker for most people.

zvictor commented 2 years ago

Faugra could create a db with a unique hash, generate its secret, import the schema, download the expanded one, and delete that db

Here is something promising. I would be happy to build that, but as a public service instead of through docker.


My proposal for an independent endpoint:

  1. We deploy a publicly accessible serverless endpoint (e.g. faugra.workers.dev) that holds a key/secret we own.
  2. For every request it receives containing a graphql schema, it creates a child DB and forwards the user's schema to the new DB.
  3. It pulls the expanded schema and deletes the created DB, returning the schema to the user.

Cons:

  1. Costs. Who will pay for it? I currently run all of Faugra's tests on my private Fauna account, which is okay so far because it only costs me a couple of cents every time I start working on a new feature (and the sponsor I have is actually just a placeholder, in case you are wondering 😆). The proposed service, though, would be a whole different story.
  2. Abuse risks. What protections can we put in place to prevent attacks and abuse? I really want to drop the need for a key to access the service, but the risks are real.

Pros

  1. UX. We will be able to provide the intended and ultimate experience, which would allow us to refactor the whole project and finally launch it! 🐣🎉
  2. Fake door testing. Building such a service independently would provide the data and analytics to show Fauna how much this feature is needed, allowing them to experiment with it before fully committing to maintaining it.

Questions

  1. What protections can we have in place to avoid attacks and abuse?
  2. How would we finance the service? @rts-rob Would Fauna provide us a sponsored key so that we can run this endpoint with it (and hopefully our automated tests as well)?

zvictor commented 2 years ago

I have just implemented in v0.0.52 the solution I proposed in my previous comment.

faugra build now sends the schema to a remote endpoint that then regurgitates it back with the expanded types.

How we will finance this service and keep it up, I honestly have no idea. 🤷‍♀️ For now, that's the best we have.