cozemble / monorepo

A data and process canvas
https://cozemble.com
Apache License 2.0

persistence for data entry #4

Closed mike-hogan closed 1 year ago

mike-hogan commented 1 year ago

The paginated data editor is currently a UI-only component; it does not persist changes to the database.

The current best idea is to approach this like the model editor: wrap a DataRecord in an EventSourcedDataRecord, event-source the changes in the UI, and apply them to the database on save.

Attempt to keep the saving method abstracted away, because while the core of cozemble saves to the associated postgres, we need to keep the door open to other saving strategies, like Airtable, Google Sheets, REST APIs and who knows what else.

It will possibly be tricky to map the mutation events neatly onto coherent GQL or SQL, so let's see.
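
A rough sketch of the shape this could take, using hypothetical type and event names (not the actual cozemble API):

```ts
// Illustrative sketch only; these are not the actual cozemble types.
interface DataRecord {
  id: string
  values: Record<string, unknown>
}

// Edits in the UI accumulate as events; nothing is persisted until save.
type DataRecordEvent =
  | { type: 'record.created'; record: DataRecord }
  | { type: 'value.changed'; recordId: string; path: string; newValue: unknown }

interface EventSourcedDataRecord {
  record: DataRecord
  events: DataRecordEvent[]
}

// The backend (postgres, Airtable, Google Sheets, a REST API, ...) sits
// behind one interface, so the editor never knows which one it is talking to.
interface DataRecordSaver {
  save(
    record: EventSourcedDataRecord,
  ): Promise<{ success: true } | { success: false; errors: unknown[] }>
}
```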

mike-hogan commented 1 year ago

The model editor can have its own persistence strategy. Let's say it's Hasura, or plain postgres fronted by Express - whatever, I can save models.

The policy around customer schemas and backend tech should be a user choice.

One user might elect to have Hasura at endpoint 1, another might elect to have Supabase at endpoint 2.

The model editor, when saving, will apply the user's chosen policy.

In the case of Hasura, that means it will be configured with the Hasura URL and creds, and the postgres URL and creds. It will migrate the database and track the changes in Hasura.
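
As a rough sketch, the Hasura flavour of that policy might carry configuration along these lines (the field names are assumptions for illustration, not the real cozemble config):

```ts
// Illustrative sketch of per-user backend policies; names are assumptions.
interface HasuraBackendPolicy {
  type: 'hasura'
  hasuraUrl: string         // e.g. https://hasura.example.com/v1/graphql
  hasuraAdminSecret: string
  postgresUrl: string       // used to run the schema migrations
}

interface SupabaseBackendPolicy {
  type: 'supabase'
  projectUrl: string
  serviceRoleKey: string
}

type BackendPolicy = HasuraBackendPolicy | SupabaseBackendPolicy
```

On save the model editor would then apply the chosen policy: persist the model edits, migrate the customer's database, and call Hasura to track the new or changed tables.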

I don't know enough about Supabase right now to venture a guess as to what might be required.

Back to Hasura. This has the downside of one transaction to save the model edits, another transaction to migrate the database, and then some HTTP calls to Hasura to track the tables. A lot of scope for things to go wrong.

mike-hogan commented 1 year ago

I trialled a GQL client using graphql-request. Binning it though, because if the GQL server returns an error response, graphql-request converts that into a thrown error, and I lose the context in the JSON object that was returned from the server.

Case in point: Hasura returned this:

{
  "errors": [
    {
      "extensions": {
        "code": "constraint-violation",
        "path": "$.selectionSet.insert_invoice.args.objects[0].customer.data"
      },
      "message": "Not-NULL violation. null value in column \"email\" violates not-null constraint"
    }
  ]
}

But graphql-request threw an error with this message:

Not-NULL violation. null value in column "email" violates not-null constraint: {"response":{"errors":[{"extensions":{"code":"constraint-violation","path":"$.selectionSet.insert_invoice.args.objects[0].customer.data"},"message":"Not-NULL violation. null value in column \"email\" violates not-null constraint"}],"status":200,"headers":{"map":{"content-type":"application/json; charset=utf-8"}}},"request":{"query":"mutation MyMutation {\n  insert_invoice(\n    objects: {invoice_id: \"1\", customer: {data: {first_name: \"1\", address: {data: {line_1: \"1\", post_code: \"1\"}}}}}\n  ) {\n    returning {\n      invoice_id\n      customer {\n        first_name\n        last_name\n        phone\n        email\n        address {\n          line_1\n          line_2\n          post_code\n        }\n      }\n    }\n  }\n}","variables":{}}}

This makes it much harder to get back the precise context of the error JSON object.

So I'm going to try an axios-based GQL client implementation.
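
A minimal sketch of the axios-based approach, assuming a Hasura endpoint and admin secret (the function and parameter names here are illustrative): post the query and hand back the response body untouched, so the caller keeps the full errors array.

```ts
import axios from 'axios'

// Illustrative sketch; endpoint and secret handling are assumptions.
export interface GqlResult {
  data?: any
  errors?: Array<{ message: string; extensions?: Record<string, any> }>
}

export async function executeGql(
  endpoint: string,
  adminSecret: string,
  query: string,
  variables: Record<string, any> = {},
): Promise<GqlResult> {
  const response = await axios.post(
    endpoint,
    { query, variables },
    { headers: { 'x-hasura-admin-secret': adminSecret } },
  )
  // Hasura returns HTTP 200 with an "errors" array on constraint violations,
  // so return the body as-is rather than converting it to a thrown error.
  return response.data
}
```

The caller can then inspect something like result.errors[0].extensions.code ("constraint-violation" in the example above) instead of parsing a stringified message.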

mike-hogan commented 1 year ago

Done sufficiently well to run insert_ and update_ mutations against a local Hasura instance.