zvictor opened 4 years ago
If we could programmatically expand the schemas ourselves, we would not need to upload the schema to fauna.
Since fauna seems to be running a loosely compliant version of openCRUD, I believe we could run a script that expands any initial schema to include the openCRUD components. That would give us pretty much the same result as uploading the schema to fauna (and we could then customize it to be fully compliant).
Sadly, building such a script would be a big project in itself.
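To make the idea concrete, here is a toy sketch of such a script in plain Node, using the `graphql` package. The real openCRUD expansion also needs input types, filters, pagination and relations; this only illustrates that the expansion can happen locally, with no network access (the generated query names are made up):

```js
// Toy version of the expansion fauna performs on import: for every object
// type in the schema, derive basic find queries. Names are illustrative only.
const { parse, print } = require('graphql')

function expand(sdl) {
  const doc = parse(sdl)
  const typeNames = doc.definitions
    .filter((def) => def.kind === 'ObjectTypeDefinition')
    .map((def) => def.name.value)

  const queries = typeNames
    .map((name) => `  find${name}ByID(id: ID!): ${name}\n  findAll${name}: [${name}!]`)
    .join('\n')

  return `${print(doc)}\n\ntype Query {\n${queries}\n}`
}

console.log(expand('type User { name: String! }'))
// -> the original schema plus findUserByID / findAllUser query definitions
```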
I managed to find only two packages that already do this, but both add a lot of flavour on top: @graphback/codegen-schema and prisma-generate-schema.
They seem to achieve what we need, but they are wrapped in so many extra complexities serving the specific needs of their frameworks that simplifying them would be as hard as starting from scratch.
Therefore, we sadly remain dependent on the fauna team's goodwill to move forward.
@erickpintor @lregnier @n400 is there any chance you could advocate inside fauna for open sourcing the part of the code that handles the schema expansion? It's virtually impossible to build an extension like faugra while having to rely on inaccessible parts.
Hi @zvictor, Leo and Erick are pretty swamped right now with the upcoming release. I've added this to our roadmap and will discuss it with the Product and Engineering teams to see how feasible it is (being tracked internally as ROAD-245). Feel free to email or DM me on Slack for an update. Sorry that the project has been blocked :/
Hi @n400! It looks like you are not working with Fauna anymore, which is very sad to hear 🥹 Do you know who I could contact to help me fight for the OSS community inside Fauna?
Hi @zvictor! It’s good to hear from you. I would try reaching out to @rts-rob, Head of DA. He’s passionate about the OSS community and started Fauna Labs.
Thanks @n400!
Hi @zvictor - looping in @Shadid12 who has done some related work in tooling, all available in Fauna Labs.
You can also find us in our forums and Discord server!
Thanks @n400!
@rts-rob @Shadid12 would you be able to check and report on ROAD-245? Is it still an open issue or is the work done there?
To be very clear and make it easier for the Fauna team to help us here, I would like to present two proposed solutions.
It would be great to have the Fauna code open sourced, for obvious reasons, but that is apparently beyond our influence. However, that does not mean that relevant pieces of code could not be open sourced as separate packages, right?
Somewhere inside Fauna, the schema we upload gets expanded to include the basic CRUD methods, either while it's still just a GraphQL schema or later, once it's been processed and converted into Collections and so on. Regardless of when/where that happens, we need those parts split out and published as independent packages.
Libraries like faugra and its kind can only be built on an environment that is predictable and reliable. Otherwise, "magic features" like the creation of the basic CRUD methods become unreachable, breaking the composability principle.
Option A: having a package maintained by Fauna that we can call from our code in order to expand the schema ourselves would be great!
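Just to illustrate the ergonomics (the package name and API below are entirely made up), option A would boil down to something like:

```js
// Hypothetical package and API -- the point of option A is a pure, offline
// function: no secrets and no network involved.
const { expandSchema } = require('@fauna/schema-expansion') // hypothetical
const { readFileSync } = require('fs')

const expanded = expandSchema(readFileSync('User.gql', 'utf8'))
console.log(expanded) // same output the /import endpoint would have produced
```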
Option B: `POST /import?mode=dry-run`
A new mode that would take the schema, validate it, and return the expanded/final schema from Fauna. Currently we achieve that result by pushing the schema to Fauna and then pulling it back: https://github.com/zvictor/faugra/blob/6e6ac349605bedb64c8aef1ca8e0dea43f1de80b/commands/generate-types.js#L35-L37
Needless to say, that's a terrible solution 🥲
The dry-run endpoint/mode must be publicly accessible (i.e. with no need to present a secret key). Otherwise, users wouldn't feel comfortable handing the secret keys of (possibly) production databases to tools that are supposed to run things in dry mode. Plus, it would add extra work in CI/CD and any other environment that would need to keep an extra key just for dry-run operations.
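For reference, this is roughly how faugra would consume such a mode. The /import endpoint exists today; the `mode=dry-run` parameter and the keyless access are the proposal:

```js
// Proposed usage of the (hypothetical) dry-run mode. Note the absence of an
// Authorization header: validating without a secret is part of the proposal.
// Relies on the global fetch available in Node 18+.
const { readFile } = require('fs/promises')

async function expandViaDryRun(schemaPath) {
  const schema = await readFile(schemaPath, 'utf8')
  const res = await fetch('https://graphql.fauna.com/import?mode=dry-run', {
    method: 'POST',
    body: schema,
  })
  if (!res.ok) throw new Error(`schema rejected: ${await res.text()}`)
  return res.text() // the expanded/final schema, exactly as fauna would store it
}
```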
Some reflections on the proposals:
Option A is definitely my favorite because it does not require network access. Option B would break the rules of some CI/CD environments that, for security reasons, discourage the use of network in automated operations.
Solving this issue would allow us to easily fix #10. The UX would improve drastically.
Hi @rts-rob and @Shadid12! Did you have a chance to look into the situation of ROAD-245, and maybe review the other comments in this thread as well?
I have been working on this framework for the last two years and I am very happy with what I have accomplished with it so far. I am also confident that many will love it and that the Fauna community will benefit directly if it ever becomes popular.
I never dared to actually launch/promote this framework anywhere because its UX was not quite there yet, but I would like to finally take it to the next level soon: I plan on renaming and rebranding the whole project, working on #10, and then making a decent launch.
All of it has been waiting since April 2020 on this one issue right here. Until #1 is fixed, I don't believe the UX this tool can provide will be acceptable to almost anyone.
So I really hope I can hear back from you on what we can do to move forward together. 🤘
I think the realistic thing would be to extract the faunadb server from their docker image, decompile it, and write a tool that replicates the functionality.
The main issue I see with this approach is that we have to manage this, instead of fauna.
I don’t think there will be too much support on this from fauna directly.
edit: After going through the decompiled source, it looks like it would be a huge undertaking to convert this to typescript.
Running faugra through the Fauna Dev docker image with the config set to bypass the cache (60 sec timeout), it works the same as in fauna, with proper validation.
As for fixing steps 2 and 3: the user should have docker installed and run 1-2 commands. Since the db config's default secret is `secret`, faugra could create a db with a unique hash, generate its secret, import the schema, download the expanded one, and delete that db.
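A sketch of that flow, assuming the Fauna Dev image's documented defaults (root secret `secret`, core API on port 8443, GraphQL API on port 8084 — double-check your setup) and Node 18+ for the global fetch:

```js
// Create a throwaway db, import the schema so fauna expands it, pull the
// expanded schema back via introspection, then delete the throwaway db.
const { Client, query: q } = require('faunadb')
const { getIntrospectionQuery, buildClientSchema, printSchema } = require('graphql')
const { randomBytes } = require('crypto')

const root = new Client({ secret: 'secret', domain: 'localhost', port: 8443, scheme: 'http' })

async function expandSchema(schema) {
  // 1. throwaway db with a unique name, plus an admin key scoped to it
  const name = `faugra-tmp-${randomBytes(6).toString('hex')}`
  await root.query(q.CreateDatabase({ name }))
  const { secret } = await root.query(q.CreateKey({ database: q.Database(name), role: 'admin' }))
  const headers = { Authorization: `Bearer ${secret}` }

  try {
    // 2. import the schema so fauna expands it
    await fetch('http://localhost:8084/import', { method: 'POST', headers, body: schema })

    // 3. pull the expanded schema back via a standard introspection query
    const res = await fetch('http://localhost:8084/graphql', {
      method: 'POST',
      headers: { ...headers, 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: getIntrospectionQuery() }),
    })
    const { data } = await res.json()
    return printSchema(buildClientSchema(data))
  } finally {
    // 4. always clean up the throwaway db
    await root.query(q.Delete(q.Database(name)))
  }
}
```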
Thank you for the deep investigation and for sharing your thoughts with us, Daniel!
> I don’t think there will be too much support on this from fauna directly.
Speaking from experience, we can't expect any level of support/collaboration from them in any direction we go. We are on our own, and waiting for things to improve on their side has proved to be a mistake on numerous occasions.
> it looks like it would be a huge undertaking to convert this to typescript.
Likely, yes. Plus, we would need to keep track of changes in a complex system written in another language (Java?) and then port them to our system whenever it changes. It sounds very manual and error-prone, unfortunately 🥹
> Running faugra through the Fauna Dev docker image with the config set to bypass the cache (60 sec timeout), it works the same as in fauna, with proper validation. The user should have docker installed and run 1-2 commands.
It's a great solution when docker is available. The problem is that our ultimate goal is to run `npx faugra build` anywhere in the build chain, and requiring docker and image pulls would be a deal-breaker for most people.
> Faugra could create a db with a unique hash, generate its secret, import the schema, download the expanded one, and delete that db
Here is something promising. I would be happy to build that, but as a public service instead of through docker: an endpoint (e.g. faugra.workers.dev) that holds a key/secret we own.

I have just implemented in v0.0.52 the solution I proposed in my previous comment. `faugra build` now sends the schema to a remote endpoint that then regurgitates it back with the expanded types.
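For the curious, the shape of that service could look like the sketch below, written as a cloudflare worker (the hosting choice implied by the workers.dev domain). `expandSchema` stands for the same create-db / import / introspect / delete-db roundtrip sketched earlier, pointed at fauna's cloud with a key we own:

```js
// Sketch of the remote expansion service as a cloudflare worker.
// env.FAUNA_SECRET would be bound as a worker secret; expandSchema is the
// hypothetical helper performing the roundtrip shown in the docker sketch.
export default {
  async fetch(request, env) {
    if (request.method !== 'POST') {
      return new Response('POST a GraphQL schema to expand it', { status: 405 })
    }
    const schema = await request.text()
    const expanded = await expandSchema(schema, env.FAUNA_SECRET)
    return new Response(expanded, { headers: { 'Content-Type': 'text/plain' } })
  },
}
```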
How we will finance and keep this service up I honestly have no idea. 🤷♀️ For now, that's the best we have.
TLDR for the @fauna team: Compiled suggestions on how to improve Fauna and fix this issue can be found here.
Currently, in order to generate TS types (`faugra generate-types` and `faugra build-sdk`), faugra makes some compromises that won't be acceptable to the general audience.

The problem

Given a schema file, faugra uploads its content to fauna in order to:

1. have any missing "base" schema appended to it (it adds e.g. `scalar Date` and `directive @resolve`) --> primitive values are being hardcoded into base.gql instead.
2. have the generated CRUD operations included (e.g. `findAll<Type>ByID`)

Without uploading the schema to the cloud, the TS types would be incomplete, lacking the content that fauna adds to it.
Current solution
Putting it all together, in order to generate the TS types, faugra needs to:

1. merge the local schema files into one;
2. import the merged schema into fauna in override mode (resetting the previous state);
3. pull the expanded schema back from fauna;
4. generate the TS types from the expanded schema.
As modularisation is a core principle of faugra, we need to repeat this process for each file individually (--> we had to give up on that because of this issue). But if we do not reset the database before pushing the new schema in, fauna will merge the content of the files: the last schema uploaded will, in practice, extend the content of all schema files pushed before. Therefore, importing the schema in override mode is a must.
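For reference, step 2 boils down to a call like the one below (`mode=override` is a real mode of fauna's /import endpoint, and it is precisely the destructive part; the secret env var name is illustrative, and the global fetch assumes Node 18+):

```js
// Step 2 of the current flow: import in override mode, wiping the db's
// previous GraphQL metadata before the new schema takes effect.
async function importOverride(mergedSchema) {
  return fetch('https://graphql.fauna.com/import?mode=override', {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.FAUNA_SECRET}` },
    body: mergedSchema, // the concatenation of all local schema files (step 1)
  })
}
```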
Considerations

Considering the performance and side effects of steps 2 and 3, I believe I can't have the TS types generated "on save", as I initially planned. And, all things considered, I wonder if anyone would actually bother going through the hassle of setting up a tool that requires credentials and messes with their data.
So, we need to find a way to kill steps 2 and 3: we need to programmatically add the missing content to the basic schema instead of publishing it to the cloud. Where do we start? :upside_down_face: