ubiquibot / permit-generation

A standalone module to generate permits.

feat: encode decode permits #10

Closed gentlementlegen closed 2 months ago

gentlementlegen commented 3 months ago

Resolves #8

github-actions[bot] commented 3 months ago
Coverage: NaN%

| Lines | Statements | Branches | Functions |
| --- | --- | --- | --- |
| Unknown% (0/0) | Unknown% (0/0) | Unknown% (0/0) | |

JUnit

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 0 | 0 :zzz: | 0 :x: | 0 :fire: | 6.666s :stopwatch: |
gentlementlegen commented 3 months ago

To be resolved here: https://github.com/ubiquibot/permit-generation/pull/7#discussion_r1555958708

0x4007 commented 2 months ago

I guess we will migrate to Cloudflare KV in another future task?

gentlementlegen commented 2 months ago

What's going on in this PR:

It introduces helper functions to serialize / deserialize permits and transform them into a base64-encoded string. This is useful when we want to pass permit data around, for example to pay.ubq.fi within the claim URL.

Having it inside this package gives us a single source of truth that can easily be used in any of our projects if needed.
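
Such helpers could look roughly like the following sketch. This is an illustrative assumption, not the package's actual API: the names `encodePermits` / `decodePermits` and the `Permit` shape are made up for the example.

```typescript
// Hypothetical Permit shape -- the real package's type likely differs.
interface Permit {
  tokenAddress: string;
  amount: string;
  beneficiary: string;
  nonce: string;
}

// Serialize an array of permits into a URL-safe base64 string,
// suitable for embedding in a claim URL query parameter.
function encodePermits(permits: Permit[]): string {
  return Buffer.from(JSON.stringify(permits)).toString("base64url");
}

// Reverse the encoding back into permit objects.
function decodePermits(encoded: string): Permit[] {
  return JSON.parse(Buffer.from(encoded, "base64url").toString("utf8"));
}

const permits: Permit[] = [
  { tokenAddress: "0x0", amount: "100", beneficiary: "0x1", nonce: "1" },
];
const encoded = encodePermits(permits);
const roundTrip = decodePermits(encoded);
```

The `base64url` variant avoids `+`, `/` and `=` characters, so the string can be dropped into a URL (e.g. a claim parameter on pay.ubq.fi) without extra escaping.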

Another problem was that the generated Supabase classes were very old and used the old DB schema. They have been updated, and the code and tests have been fixed accordingly.

gentlementlegen commented 2 months ago

> I guess we will migrate to Cloudflare KV in another future task?

@0x4007 what part of Cloudflare KV are you referring to in this repo?

0x4007 commented 2 months ago

> I guess we will migrate to Cloudflare KV in another future task?

> @0x4007 what part of Cloudflare KV are you referring to in this repo?

I want plugins "agents" to default to using KV instead of Supabase.

This is because I want to make the developer experience as simple as possible and I believe that setting up a KV store is less involved than a Supabase database.

We should consider full database storage the exception rather than the norm.

I also think it's worth implementing the changes in this repository (in another pull) because it will be used as an example for future developers.

gentlementlegen commented 2 months ago

> I guess we will migrate to Cloudflare KV in another future task?

> @0x4007 what part of Cloudflare KV are you referring to in this repo?

> I want plugins "agents" to default to using KV instead of Supabase.
>
> This is because I want to make the developer experience as simple as possible and I believe that setting up a KV store is less involved than a Supabase database.
>
> We should consider full database storage the exception rather than the norm.
>
> I also think it's worth implementing the changes in this repository (in another pull) because it will be used as an example for future developers.

KV cannot replace what Supabase does; they have completely different purposes. While Supabase can replace KV, the opposite is not true. KV's only purpose is to store key-value pairs, hence its name. While this is perfectly suitable for simple things like remembering a user's wallet, it is not suitable for updates, joins, or queries, since it only handles one-to-one relationships. And even in the case of wallets here, they contain multiple columns that cannot be represented through KV. To quote the docs:

> Workers KV provides low-latency, high-throughput global storage to your Cloudflare Workers applications. Workers KV is ideal for storing user configuration data, routing data, A/B testing configurations and authentication tokens, and is well suited for read-heavy workloads.

So in this scenario, it would be possible if we dropped the other wallet columns. But that also means we would start to scatter data all around.

whilefoo commented 2 months ago

We also can't run plugins in GitHub Actions if we use Cloudflare KV (technically you can with a Worker proxy, but then you need to implement auth...)

0x4007 commented 2 months ago

> I guess we will migrate to Cloudflare KV in another future task?

> @0x4007 what part of Cloudflare KV are you referring to in this repo?

> I want plugins "agents" to default to using KV instead of Supabase.
>
> This is because I want to make the developer experience as simple as possible and I believe that setting up a KV store is less involved than a Supabase database.
>
> We should consider full database storage the exception rather than the norm.
>
> I also think it's worth implementing the changes in this repository (in another pull) because it will be used as an example for future developers.

> KV cannot replace what Supabase does; they have completely different purposes. While Supabase can replace KV, the opposite is not true. KV's only purpose is to store key-value pairs, hence its name. While this is perfectly suitable for simple things like remembering a user's wallet, it is not suitable for updates, joins, or queries, since it only handles one-to-one relationships. And even in the case of wallets here, they contain multiple columns that cannot be represented through KV. To quote the docs:
>
> > Workers KV provides low-latency, high-throughput global storage to your Cloudflare Workers applications. Workers KV is ideal for storing user configuration data, routing data, A/B testing configurations and authentication tokens, and is well suited for read-heavy workloads.
>
> So in this scenario, it would be possible if we dropped the other wallet columns. But that also means we would start to scatter data all around.

My vision is to encode an array (or object) inside the value for situations like this @gentlementlegen
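
A minimal sketch of that idea, with an in-memory `Map` standing in for the KV namespace (the `WalletRecord` shape and the key naming are assumptions for illustration; a real Workers KV binding exposes the same string-in/string-out `put`/`get` shape):

```typescript
// A Map stands in for a Workers KV namespace in this sketch.
const kv = new Map<string, string>();

// Hypothetical multi-column wallet record collapsed into one value.
interface WalletRecord {
  address: string;
  ensName: string | null;
  updatedAt: string;
}

// Store the whole record as a JSON-encoded string under one key.
function putWallet(userId: string, record: WalletRecord): void {
  kv.set(`wallet:${userId}`, JSON.stringify(record));
}

// Read it back; like KV, return null for missing keys.
function getWallet(userId: string): WalletRecord | null {
  const raw = kv.get(`wallet:${userId}`);
  return raw === undefined ? null : JSON.parse(raw);
}

putWallet("alice", { address: "0xabc", ensName: null, updatedAt: "2024-05-01" });
const wallet = getWallet("alice");
```

This works for lookups by a single key, but it provides none of the joins, queries, or access rules a relational database does.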

0x4007 commented 2 months ago

> We also can't run plugins in GitHub Actions if we use Cloudflare KV (technically you can with a Worker proxy, but then you need to implement auth...)

Auth is not relevant to your point of choosing Supabase over KV.

Auth is required for Supabase as well.

@whilefoo

gentlementlegen commented 2 months ago

This is a very utopian scenario where we never have to change the shape of the content, and where the data is only needed in one single spot. What if tomorrow we have that encoded object and want to add / remove / change a type? KV doesn't have any migration system, so we would have to write scripts that update the data.

And if that data is accessed by any other project, it will subsequently break every one of them. KV also does not support RLS or rules for accessing specific data and columns, which we currently use for fine-grained control over what is accessible; this would lead us to write a lot of custom code to protect and control access to such data. With KV we cannot seed data either, which is very useful for local setup and testing.
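
To illustrate the migration point: without a migration system, a shape change means a hand-rolled script that walks every key and rewrites the value. A rough sketch, with a `Map` standing in for the KV namespace (a real Workers KV migration would also need paginated `list()` calls and rate-limit handling; keys and fields here are illustrative):

```typescript
// A Map stands in for the KV namespace in this sketch.
const store = new Map<string, string>();
store.set("wallet:alice", JSON.stringify({ address: "0xabc" }));
store.set("wallet:bob", JSON.stringify({ address: "0xdef" }));

// "Migration": rewrite every wallet value to a hypothetical v2 shape.
// Every project reading this data must be updated in lockstep.
for (const [key, raw] of store) {
  if (!key.startsWith("wallet:")) continue;
  const v1 = JSON.parse(raw);
  store.set(key, JSON.stringify({ ...v1, schemaVersion: 2, ensName: null }));
}

const migrated = JSON.parse(store.get("wallet:alice")!);
```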

Maybe this can be discussed somewhere else, but I am not sure I understand the need to move away from Supabase. It is extremely easy to set up, robust, free, and helps access the data in an ordered fashion. To me, KV seems helpful in scenarios where you have just a front-end and want to save some data specific to a user, with no relation to any other object, or settings for example. I feel like we are comparing a shovel to a rake, because these two technologies do not aim to achieve the same purpose and are not really interchangeable.

0x4007 commented 2 months ago

> This is a very utopian scenario where we never have to change the shape of the content, and where the data is only needed in one single spot. What if tomorrow we have that encoded object and want to add / remove / change a type? KV doesn't have any migration system, so we would have to write scripts that update the data.

Yes, this is a valid concern, but at the same time I really think we should keep the data per plugin as simple as possible.

It's hard to anticipate how large a data structure any future plugin developer might need; but in such an exceptional case, the plugin developer can swap the data storage solution for Supabase (or any other database).

However, for most cases I anticipate that a simple KV will suffice. Think of plugins like microservices: each one should only be concerned with storing simple data directly relevant to its capability.

Again, I am trying to make plugin development as convenient and as accessible as possible for new developers so that our ecosystem can grow faster. It is our duty to make plugin development as easy as possible to get started with.

> And if that data is accessed by any other project, it will subsequently break every one of them. KV also does not support RLS or rules for accessing specific data and columns, which we currently use for fine-grained control over what is accessible; this would lead us to write a lot of custom code to protect and control access to such data.

I anticipated that we needed to implement the custom code anyway at the plugin level. However, yes, this implies fewer safety measures without RLS etc. Again, I don't think it's a showstopper if the data is decentralized: if a single plugin's data is compromised, the entire network won't go down.

We can also consider making wrapper methods in our SDK that expose data useful to many plugins. For example, collecting all data related to the completion of a task (including the associated pull request).

And this includes every user, the list of their contributions etc.

The SDK could also include generic boilerplate security-related features. That would be a conversation for the future, but again the objective is to make plugin development as accessible as possible.

> With KV we cannot seed data either, which is very useful for local setup and testing.
>
> Maybe this can be discussed somewhere else, but I am not sure I understand the need to move away from Supabase. It is extremely easy to set up, robust, free, and helps access the data in an ordered fashion. To me, KV seems helpful in scenarios where you have just a front-end and want to save some data specific to a user, with no relation to any other object, or settings for example. I feel like we are comparing a shovel to a rake, because these two technologies do not aim to achieve the same purpose and are not really interchangeable.

I have intimate experience working with Supabase, so I am quite familiar with the product. But to be honest, I only understand KV conceptually; I haven't worked with it on a serious project.

There is a chance I could be wrong about this, but conceptually it seems faster and simpler to set up if no UI interaction is necessary (Supabase, in my experience, requires signing in to the UI, establishing the schema, etc.).

rndquu commented 2 months ago

Overall it seems that we need to:

  1. Use a shared Supabase instance for all of the "core" plugins (assistive pricing, conversation rewards, wallet registration, etc...). Most of the plugins share the same entities (i.e. DB tables), so keeping everything in a single DB eliminates the hassle of sharing data between plugins.
  2. Allow non-core plugin developers to use whatever they want (Supabase, Cloudflare KV, etc...).

Another point in favour of a single DB instance for "core" plugins is that many of our web services (pay.ubq.fi, onboard.ubq.fi, etc...) require data from the shared Supabase instance that we use right now. It is very easy to retrieve data from a Supabase instance using the public anon key. Moving to Cloudflare KV for each plugin would require implementing some backend service to collect data from the different Cloudflare KV namespaces so that frontend services could read that data whenever they need to.

0x4007 commented 2 months ago

> Overall it seems that we need to:
>
> 1. Use a shared Supabase instance for all of the "core" plugins (assistive pricing, conversation rewards, wallet registration, etc...). Most of the plugins share the same entities (i.e. DB tables), so keeping everything in a single DB eliminates the hassle of sharing data between plugins.
> 2. Allow non-core plugin developers to use whatever they want (Supabase, Cloudflare KV, etc...).
>
> Another point in favour of a single DB instance for "core" plugins is that many of our web services (pay.ubq.fi, onboard.ubq.fi, etc...) require data from the shared Supabase instance that we use right now. It is very easy to retrieve data from a Supabase instance using the public anon key. Moving to Cloudflare KV for each plugin would require implementing some backend service to collect data from the different Cloudflare KV namespaces so that frontend services could read that data whenever they need to.

I think this is a good proposal. So then let's focus on Supabase in the short term, as we are building the key/core/"official" plugins.

Then later we should make a boilerplate/example plugin for developers to fork which uses a simpler storage solution such as KV.

I am concerned that new developers will try to study our existing plugins to learn how to make their own. If we don't follow our own "best practices", it may lead to confusion. We will need to be very deliberate about the messaging (for example: use this example plugin project as a starting point instead of studying our existing complex ones) for the sake of growing our developer community faster.

gentlementlegen commented 2 months ago

@0x4007 I totally see your point and agree that isolating is good, but as @rndquu said, there is a lot of data shared between plugins, like wallets, users, GitHub info and so on. Microservices would generally use an orchestrator (Kubernetes for example) that helps pass data around, or some central point that facilitates communication between entities. It is a common scenario, just not with KV, because I really don't think KV is meant to be used for this. And actually, most microservices spin up their own DB even for simple data, and would use NoSQL to keep things light.

whilefoo commented 2 months ago

> Auth is not relevant to your point of choosing Supabase over KV.
>
> Auth is required for Supabase as well.
>
> @whilefoo

Yes, Supabase already has auth, but if you want to use Cloudflare KV with plugins running on GitHub Actions, you need a Cloudflare Worker to act as a proxy to access the KV, and because Cloudflare Workers don't have built-in auth, you have to implement your own. So we would have to create a proxy worker that every plugin developer has to deploy and use in their plugin, which defeats the point of using KV for simplicity.

0x4007 commented 2 months ago

> And micro-services would generally use an orchestrator (Kubernetes for example) that helps pass data around, or some central point that facilitates communication between entities

I don't have experience with designing microservices. We should follow best practices, and try our best to make it easy for developers to get started with building plugins.