simonw / datasette

An open source multi-tool for exploring and publishing data
https://datasette.io
Apache License 2.0

De-tangling Metadata before Datasette 1.0 #2143

Open asg017 opened 1 year ago

asg017 commented 1 year ago

Metadata in Datasette is a really powerful feature, but is a bit difficult to work with. It was initially a way to add "metadata" about your "data" in Datasette instances, like descriptions for databases/tables/columns, titles, source URLs, licenses, etc. But it later became the go-to spot for other Datasette features that have nothing to do with metadata, like permissions/plugins/canned queries.

Specifically, I've found the following problems when working with Datasette metadata:

  1. Metadata cannot be updated without re-starting the entire Datasette instance.
  2. The metadata.json/metadata.yaml has become a kitchen sink of unrelated (imo) features like plugin config, authentication config, canned queries
  3. The Python APIs for defining extra metadata are a bit awkward (the datasette.metadata() method, get_metadata() hook, etc.)

Possible solutions

Here are a few ideas for Datasette core changes we could make to address these problems.

Re-vamp the Datasette Python metadata APIs

The Datasette object has a single datasette.metadata() method that's a bit difficult to work with. There's also no Python API for inserting new metadata, so plugins have to rely on the get_metadata() hook.

The get_metadata() hook can also be improved - it doesn't work with async functions yet, so you're quite limited in what you can do.

(I'm a bit fuzzy on what to actually do here, but I imagine it'll be very small breaking changes to a few Python methods)

Add an optional datasette_metadata table

Datasette should detect and use metadata stored in a new special table called datasette_metadata. This would be a regular table that a user can edit on their own, and would serve as a "live updating" source of metadata that can be changed while the Datasette instance is running.

Not too sure what the schema would look like, but I'd imagine:

CREATE TABLE datasette_metadata(
  level text,
  target any,
  key text,
  value any,
  -- key is part of the primary key so a single target can hold
  -- multiple entries (a title and a description, say)
  primary key (level, target, key)
)

Every row in this table would map to a single metadata "entry".

A quick sample:

level     | target                                | key          | value
----------|---------------------------------------|--------------|-----------------------
datasette | NULL                                  | title        | my datasette title...
db        | fixtures                              | source       |
table     | ["fixtures", "students"]              | label_column | student_name
column    | ["fixtures", "students", "birthdate"] | description  |

This datasette_metadata table would be populated by other tools, and hopefully not edited manually by end users. Datasette Core could also offer a UI for editing entries in datasette_metadata, to update descriptions/columns on the fly.
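
As a sketch of how that might feel in practice - assuming the hypothetical schema above, with its (level, target, key) primary key - a tool could upsert entries with plain sqlite3:

import sqlite3

def set_metadata(db_path, level, target, key, value):
    # Upsert one metadata entry; relies on the (level, target, key)
    # primary key from the hypothetical schema above
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            """
            insert into datasette_metadata (level, target, key, value)
            values (?, ?, ?, ?)
            on conflict (level, target, key) do update set value = excluded.value
            """,
            (level, target, key, value),
        )
    conn.close()

# e.g. fix a column description while Datasette is still running:
set_metadata(
    "fixtures.db", "column",
    '["fixtures", "students", "birthdate"]', "description", "Date of birth",
)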

Re-vamp metadata.json and move non-metadata config to another place

The motivation behind this is that it's awkward that metadata.json contains config about things that are not strictly metadata, including plugin configuration, authentication/permission rules and canned queries.

I think we should move these outside of metadata.json and into a different file. The datasette.json idea in #2093 may be a good solution here: plugin/permissions/canned queries can be defined in datasette.json, while metadata.json/datasette_metadata will strictly be about documenting databases/tables/columns.

simonw commented 1 year ago

I completely agree: metadata is a mess, and it deserves our attention.

  1. Metadata cannot be updated without re-starting the entire Datasette instance.

That's not completely true - there are hacks around that. I have a plugin that applies one set of gnarly hacks for that here: https://github.com/simonw/datasette-remote-metadata - it's pretty grim though!

  2. The metadata.json/metadata.yaml has become a kitchen sink of unrelated (imo) features like plugin config, authentication config, canned queries

100% this: it's a complete mess.

Datasette used to have a datasette --config foo:bar mechanism, which I deprecated in favour of datasette --setting foo bar partly because I wanted to free up --config for pointing at a real config file, so we could stop dropping everything in --metadata metadata.yml.

  3. The Python APIs for defining extra metadata are a bit awkward (the datasette.metadata() method, get_metadata() hook, etc.)

Yes, they're not pretty at all.

simonw commented 1 year ago

The single biggest design challenge I've had with metadata relates to how it should or should not be inherited.

If you apply a license to a Datasette instance, it feels like that should flow down to cover all of the databases and all of the tables within those databases.

If the license is at the database level, it should cover all tables.

But... should source do the same thing? I made it behave the same way as license, but it's presumably common for one database to have a single license but multiple different sources of data.

Then there's title - should that inherit? It feels like title should apply to only one level - you may want a title that applies to the instance, then a different custom title for databases and tables.
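
To make the question concrete, here's a rough sketch of what a cascading lookup could look like - the INHERITED_KEYS set is purely illustrative, since which keys inherit is exactly what's undecided:

# Hypothetical cascade: table -> database -> instance. Which keys
# inherit downwards is exactly the open question in this thread.
INHERITED_KEYS = {"license", "license_url", "source", "source_url"}  # illustrative

def resolve(metadata, key, database=None, table=None):
    # Build the lookup chain from most specific scope to least specific
    db = metadata.get("databases", {}).get(database, {}) if database else {}
    tbl = db.get("tables", {}).get(table, {}) if table else {}
    chain = ([tbl] if table else []) + ([db] if database else []) + [metadata]
    if key not in INHERITED_KEYS:
        chain = chain[:1]  # e.g. title applies to its own level only
    for scope in chain:
        if key in scope:
            return scope[key]
    return None

# resolve(metadata, "license", database="fixtures", table="students")
# walks up to the instance; resolve(metadata, "title", ...) does not.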

Here's the current state of play for metadata: https://docs.datasette.io/en/1.0a3/metadata.html

So there's title and description - and I'll be honest, I'm not 100% sure even I understand how those should be inherited down by tables/etc.

There's description_html which overrides the description if it is set. It's a useful customization hack, but a bit surprising.

Then there are these six: source, source_url, license, license_url, about and about_url.

I added about later than the others, because I realized that plenty of my own projects needed a link to an article explaining them somewhere - e.g. https://scotrail.datasette.io/

Tables can also have column descriptions - just a string for each column. There's a demo of those here: https://latest.datasette.io/fixtures/roadside_attractions

And then there's all of the other stuff, most of which feels much more like "settings" than "metadata" - things like hidden tables, sortable columns and facet configuration.

And the authentication stuff! allow and allow_sql blocks: https://docs.datasette.io/en/1.0a3/authentication.html#defining-permissions-with-allow-blocks

And the new permissions key in the 1.0 alphas: https://docs.datasette.io/en/1.0a3/authentication.html#other-permissions-in-metadata

I think that might be everything (excluding the plugins settings stuff, which is also a bad fit for metadata.)

And to make things even more confusing... I believe you can add arbitrary key/value pairs to your metadata and then use them in your templates! I think I've heard from at least one person who uses that ability.

simonw commented 1 year ago

My ideal situation, then, would be a clean separation between those concerns.

Currently we have three types of things: metadata, settings and configuration.

Should settings and configuration be separate? I'm not 100% sure that they should - maybe those two concepts should be combined somehow.

Configuration directory mode needs to be considered too: https://docs.datasette.io/en/stable/settings.html#configuration-directory-mode - interestingly it already has a thing where it can pick up settings from a settings.json file - where settings are things like datasette --setting sql_time_limit_ms 4000.
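
A simplified sketch of that settings.json pickup (not the actual implementation):

import json
from pathlib import Path

def load_config_dir_settings(directory):
    # Configuration directory mode: a settings.json file in the directory
    # behaves like a batch of --setting flags
    settings_file = Path(directory) / "settings.json"
    if settings_file.exists():
        return json.loads(settings_file.read_text())
    return {}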

simonw commented 1 year ago

A related point that I've been considering a lot recently: it turns out that sometimes I really want to define settings on the CLI instead of in a file, purely for convenience.

It's pretty annoying when I want to try out a new plugin but I have to create a dedicated metadata.yml file for it just to set up a single option - I'd love to be able to run this instead:

datasette data.db --plugin-setting datasette-upload-csvs default-database data

So maybe there's a world in which all of the settings can be applied in a datasette.yml file OR with command-line options.

That gets trickier when you need to pass a nested structure or similar, but we could always support those as JSON:

datasette data.db --plugin-setting datasette-emoji-reactions emoji '["😼", "🐺"]'

Note that we kind of have precedent for this in datasette publish: https://docs.datasette.io/en/stable/publish.html#custom-metadata-and-plugins

datasette publish heroku my_database.db \
    --name my-heroku-app-demo \
    --install=datasette-auth-github \
    --plugin-secret datasette-auth-github client_id your_client_id \
    --plugin-secret datasette-auth-github client_secret your_client_secret

simonw commented 1 year ago

Hah, that --plugin-secret thing was a messy solution I came up with to the problem that all metadata is visible at /-/metadata - so if you need to stash a secret you need a way to keep it not-visible in there!

Hence the whole $env mess: https://docs.datasette.io/en/stable/plugins.html#secret-configuration-values

{
    "plugins": {
        "datasette-auth-github": {
            "client_secret": {
                "$env": "GITHUB_CLIENT_SECRET"
            }
        }
    }
}
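
For reference, resolving those $env markers is just a small recursive substitution - roughly like this sketch, not the actual Datasette code:

import os

def resolve_env(config):
    # Replace {"$env": "NAME"} leaves with the value of that env variable
    if isinstance(config, dict):
        if set(config) == {"$env"}:
            return os.environ.get(config["$env"])
        return {key: resolve_env(value) for key, value in config.items()}
    if isinstance(config, list):
        return [resolve_env(value) for value in config]
    return config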

If configuration and metadata were separate we could ditch that whole messy situation - configuration can stay hidden, metadata can stay public.

Though I have been thinking that Datasette might benefit from a "secrets" mechanism that's separate from configuration and metadata... kind of like what LLM has: https://llm.datasette.io/en/stable/help.html#llm-keys-help

asg017 commented 1 year ago

I agree with all your points!

I think the best solution would be having a datasette.json config file, where you "configure" your datasette instances, with settings, permissions/auth, plugin configuration, and table settings (sortable columns, label columns, etc.) - which is what #2093 would do.

Then optionally, you have a metadata.json, or use datasette_metadata, or some other plugin to define metadata (e.g. the future sqlite-docs plugin).

Everything in datasette.json could also be overwritten by CLI flags, like --setting key value, --plugin xxxx key value.

We could even completely remove settings.json in favor of just datasette.json. Mostly because I think the fewer files the better, especially if they have generic names like settings.json or config.json.

asg017 commented 1 year ago

Another option would be, instead of flat datasette.json/datasette.yaml files, we could instead use a Python file, like datasette_config.py. That way one could dynamically generate config (e.g. dev vs prod, auto-discover credentials, etc.). Kinda like Django settings.

Though I imagine Python imports might make this complex to do, and json/yaml is already supported and pretty easy to write.

simonw commented 1 year ago

Yeah, I'm convinced by that. There's no point in having both settings.json and datasette.json.

I like datasette.json ( / datasette.yml) as a name. That can be the file that lives in your config directory too, so if you run datasette . in a folder containing datasette.yml all of those settings get picked up.

Here's a thought for how it could look - I'll go with the YAML format because I expect that to be the default most people use, just because it supports multi-line strings better.

I based this on the big example at https://docs.datasette.io/en/1.0a3/metadata.html#using-yaml-for-metadata - and combined some bits from https://docs.datasette.io/en/1.0a3/authentication.html as well.

title: Demonstrating Metadata from YAML
description_html: |-
  <p>This description includes a long HTML string</p>
  <ul>
    <li>YAML is better for embedding HTML strings than JSON!</li>
  </ul>

settings:
  default_page_size: 10
  max_returned_rows: 3000
  sql_time_limit_ms": 8000

databases:
  docs:
    permissions:
      create-table:
        id: editor
  fixtures:
    tables:
      no_primary_key:
        hidden: true
    queries:
      neighborhood_search:
        sql: |-
          select neighborhood, facet_cities.name, state
          from facetable join facet_cities on facetable.city_id = facet_cities.id
          where neighborhood like '%' || :text || '%' order by neighborhood;
        title: Search neighborhoods
        description_html: |-
          <p>This demonstrates <em>basic</em> LIKE search

permissions:
  debug-menu:
    id: '*'

plugins:
  datasette-ripgrep:
    path: /usr/local/lib/python3.11/site-packages

I'm inclined to say we try to be a super-set of the existing metadata.yml format, at least where it makes sense to do so. That way the upgrade path is smooth for people. Also, I don't think the format itself is terrible - it's the name that's the big problem.

In this example I've mixed in one extra concept: that settings: block with a bunch of settings in it.

There are some things in there that look a little bit like metadata - the title and description_html fields.

But are they metadata? The title and description of the overall instance feels like it could be described as general configuration. The stuff for the query should live where the query itself is defined.

Note that queries can be defined by a plugin hook too: https://docs.datasette.io/en/1.0a3/plugin_hooks.html#canned-queries-datasette-database-actor

What do you think? Is this the right direction, or are you thinking there's a more radical redesign that would make sense here?

simonw commented 1 year ago

Actually there is one thing that I'm not comfortable about with respect to the existing design: the way the database / tables stuff is nested.

They assume that the user will attach the database to Datasette using a fixed name - docs.db or whatever.

But what if we want to support users downloading databases from each other and attaching them to Datasette where those DBs might carry some of their own configuration?

Moving metadata into the databases makes sense there, but what about database-specific settings like the default sort order for a table, or configured canned queries?

Having those tied to the filename of the database itself feels unpleasant to me. But how else could we handle this?

simonw commented 1 year ago

Another option would be, instead of flat datasette.json/datasette.yaml files, we could instead use a Python file, like datasette_config.py. That way one could dynamically generate config (e.g. dev vs prod, auto-discover credentials, etc.). Kinda like Django settings.

I'm not a fan of that. I feel like software history is full of examples of projects that implemented configuration-as-code and then later regretted it - the most recent example is setup.py in Python turning into pyproject.toml, but I feel like I've seen that pattern play out elsewhere too.

I don't think having people dynamically generate JSON/YAML for their configuration is a big burden. I'd have to see some very compelling use-cases to convince me otherwise.

That said, I do really like a bias towards settings that can be changed at runtime. Datasette has suffered a bit from some settings that can't be easily changed at runtime already - hence my gnarly https://github.com/simonw/datasette-remote-metadata plugin.

For things like Datasette Cloud, for example, the more people can configure without rebooting their container the better!

I don't think live reconfiguration at runtime is incompatible with JSON/YAML configuration though. Caddy is one of my favourite examples of software that can be entirely re-configured at runtime by POSTing a big blob of JSON to it: https://caddyserver.com/docs/quick-starts/api

asg017 commented 1 year ago

That said, I do really like a bias towards settings that can be changed at runtime

Does this include things like --settings values or plugin config? I can totally see being able to update metadata without restarting, but not sure if that would work well with --setting, plugin config, or auth/permissions stuff.

Well, it could work with --setting and auth/permissions, with a lot of core changes. But changing plugin config on the fly could be challenging for plugin authors.

dvizard commented 1 year ago

To chime in from a poweruser perspective: I'm worried that this is an overengineering trap. Yes, the current solution is somewhat messy. But there are datasette-wide settings, there are database-scope settings, there are table-scope settings etc, but then there are database-scope metadata and table-scope metadata. Trying to cleanly separate "settings" from "configuration" is, I believe, an uphill fight. Even separating db/table-scope settings from pure descriptive metadata is not always easy. Like, do canned queries belong to database metadata or to settings? Do I need two separate files for this?

One pragmatic solution I used in a project is stacking yaml configuration files. Basically, have an arbitrary number of yaml or json settings files that you load in a specified order. Each file adds to the corresponding settings from the earlier-loaded files (if they already existed). I implemented this myself but found later that there is an existing Python "cascading dict" type of thing - I forget what it's called. There is a bit of a challenge in deciding whether there is "replacement" or "addition" (I think I pragmatically ran update on the second level of the dict, but better solutions are certainly possible).

This way, one allows separation of settings into different blocks, while not imposing a specific idea of what belongs where that might not apply equally to all cases.
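
A minimal sketch of that stacking idea, assuming PyYAML - the replacement-vs-addition question shows up in how the merge treats non-dict values:

import yaml  # PyYAML

def deep_merge(base: dict, override: dict) -> dict:
    # Later files win: nested dicts merge recursively,
    # everything else is replaced outright
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

def load_configs(paths):
    config = {}
    for path in paths:  # load order defines precedence
        with open(path) as f:
            deep_merge(config, yaml.safe_load(f) or {})
    return config

# config = load_configs(
#     ["settings.yaml", "secrets.yaml", "db-docs.yaml", "db-fixtures.yaml"])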

dvizard commented 1 year ago

https://docs.python.org/3/library/collections.html#collections.ChainMap

dvizard commented 1 year ago

https://pypi.org/project/deep-chainmap/

dvizard commented 1 year ago

This also makes it simple to separate out secrets.

datasette --config settings.yaml --config secrets.yaml --config db-docs.yaml --config db-fixtures.yaml

settings.yaml

settings:
  default_page_size: 10
  max_returned_rows: 3000
  sql_time_limit_ms": 8000
plugins:
  datasette-ripgrep:
    path: /usr/local/lib/python3.11/site-packages

secrets.yaml

plugins:
  datasette-auth-github:
    client_secret: SUCH_SECRET

db-docs.yaml

databases:
  docs:
    permissions:
      create-table:
        id: editor

db-fixtures.yaml

databases:
  fixtures:
    tables:
      no_primary_key:
        hidden: true
    queries:
      neighborhood_search:
        sql: |-
          select neighborhood, facet_cities.name, state
          from facetable join facet_cities on facetable.city_id = facet_cities.id
          where neighborhood like '%' || :text || '%' order by neighborhood;
        title: Search neighborhoods
        description_html: |-
          <p>This demonstrates <em>basic</em> LIKE search

simonw commented 1 year ago

This also makes it simple to separate out secrets.

datasette --config settings.yaml --config secrets.yaml --config db-docs.yaml --config db-fixtures.yaml

Having multiple configs that combine in that way is a really interesting direction.

To chime in from a poweruser perspective: I'm worried that this is an overengineering trap. Yes, the current solution is somewhat messy. But there are datasette-wide settings, there are database-scope settings, there are table-scope settings etc, but then there are database-scope metadata and table-scope metadata. Trying to cleanly separate "settings" from "configuration" is, I believe, an uphill fight.

I'm very keen on separating out the "metadata" - where metadata is the slimmest possible set of things, effectively the data license and the source and the column and table descriptions - from everything else, mainly because I want metadata to be able to travel with the data.

One idea that's been discussed before is having an optional mechanism for storing metadata in the SQLite database file itself - potentially in a _datasette_metadata table. That way you could distribute a DB file and anyone who opened it in Datasette would also see the correct metadata about it.
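
Detecting embedded metadata on attach could then be as simple as this sketch (assuming the _datasette_metadata name):

import sqlite3

def load_embedded_metadata(path):
    # If a distributed database file ships its own _datasette_metadata
    # table, read it when the database is attached
    conn = sqlite3.connect(path)
    exists = conn.execute(
        "select 1 from sqlite_master where type = 'table'"
        " and name = '_datasette_metadata'"
    ).fetchone()
    rows = conn.execute("select * from _datasette_metadata").fetchall() if exists else []
    conn.close()
    return rows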

That's why I'm so keen on splitting out metadata from all of the other stuff - settings and plugin configuration and authentication rules.

So really it becomes "true metadata" vs. "all of the other junk that's accumulated in metadata and settings.json".

simonw commented 1 year ago

I've been thinking about what it might look like to allow command-line arguments to be used to define any of the configuration options in datasette.yml, as an alternative, more convenient syntax.

Here's what I've come up with:

datasette \
  -s settings.sql_time_limit_ms 1000 \
  -s plugins.datasette-auth-tokens.manage_tokens true \
  -s plugins.datasette-auth-tokens.manage_tokens_database tokens \
  -s plugins.datasette-ripgrep.path "/home/simon/code-to-search" \
  -s databases.mydatabase.tables.example_table.sort created \
  mydatabase.db tokens.db

Which would be equivalent to datasette.yml containing this:

plugins:
  datasette-auth-tokens:
    manage_tokens: true
    manage_tokens_database: tokens
  datasette-ripgrep:
    path: /home/simon/code-to-search
databases:
  mydatabase:
    tables:
      example_table:
        sort: created
settings:
  sql_time_limit_ms: 1000

Here's a prototype implementation of this:

import json
from typing import Any, List, Tuple

def _handle_pair(key: str, value: str) -> dict:
    """
    Turn a key-value pair into a nested dictionary.
    foo, bar => {'foo': 'bar'}
    foo.bar, baz => {'foo': {'bar': 'baz'}}
    foo.bar, [1, 2, 3] => {'foo': {'bar': [1, 2, 3]}}
    foo.bar, "baz" => {'foo': {'bar': 'baz'}}
    foo.bar, '{"baz": "qux"}' => {'foo': {'bar': "{'baz': 'qux'}"}}
    """
    try:
        value = json.loads(value)
    except json.JSONDecodeError:
        # If it doesn't parse as JSON, treat it as a string
        pass

    keys = key.split('.')
    result = current_dict = {}

    for k in keys[:-1]:
        current_dict[k] = {}
        current_dict = current_dict[k]

    current_dict[keys[-1]] = value
    return result

def _combine(base: dict, update: dict) -> dict:
    """
    Recursively merge two dictionaries.
    """
    for key, value in update.items():
        if isinstance(value, dict) and key in base and isinstance(base[key], dict):
            base[key] = _combine(base[key], value)
        else:
            base[key] = value
    return base

def handle_pairs(pairs: List[Tuple[str, Any]]) -> dict:
    """
    Parse a list of key-value pairs into a nested dictionary.
    """
    result = {}
    for key, value in pairs:
        parsed_pair = _handle_pair(key, value)
        result = _combine(result, parsed_pair)
    return result

Exercised like this:

print(json.dumps(handle_pairs([
    ("settings.sql_time_limit_ms", "1000"),
    ("plugins.datasette-auth-tokens.manage_tokens", "true"),
    ("plugins.datasette-auth-tokens.manage_tokens_database", "tokens"),
    ("plugins.datasette-ripgrep.path", "/home/simon/code-to-search"),
    ("databases.mydatabase.tables.example_table.sort", "created"),
]), indent=4))

Output:

{
    "settings": {
        "sql_time_limit_ms": 1000
    },
    "plugins": {
        "datasette-auth-tokens": {
            "manage_tokens": true,
            "manage_tokens_database": "tokens"
        },
        "datasette-ripgrep": {
            "path": "/home/simon/code-to-search"
        }
    },
    "databases": {
        "mydatabase": {
            "tables": {
                "example_table": {
                    "sort": "created"
                }
            }
        }
    }
}

Note that -s isn't currently an option for datasette serve.

--setting key value IS an existing option, but it isn't completely compatible with this because it maps directly just to settings.

Although... we could keep compatibility by saying that if you call --setting known_setting value and that known_setting is in this list then we treat it as if you said -s settings.known_setting value instead:

https://github.com/simonw/datasette/blob/bdf59eb7db42559e538a637bacfe86d39e5d17ca/datasette/app.py#L114-L204
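
That compatibility shim could be tiny - a sketch, with KNOWN_SETTINGS standing in for the list linked above:

# KNOWN_SETTINGS stands in for the SETTINGS list in datasette/app.py
KNOWN_SETTINGS = {"sql_time_limit_ms", "default_page_size", "max_returned_rows"}

def normalize_setting(key: str, value: str):
    # Treat legacy --setting known_setting value as -s settings.known_setting
    if "." not in key and key in KNOWN_SETTINGS:
        return "settings." + key, value
    return key, value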

pkulchenko commented 1 year ago

@simonw, FWIW, I do exactly the same thing for one of my projects (both to allow multiple configuration files to be passed on the command line and setting individual values) and it works quite well for me and my users. I even use the same parameter name for both (https://studio.zerobrane.com/doc-configuration#configuration-via-command-line), but I understand why you may want to use different ones for files and individual values. One small difference is that I accept code snippets, but I don't think it matters much in this case.

simonw commented 1 year ago

Something notable about this design is that, because the values in the key-value pairs are treated as JSON first and then strings only if they don't parse cleanly as JSON, it's possible to represent any structure (including nested structures) using this syntax. You can do things like this if you need to (settings for an imaginary plugin):

datasette data.db \
  -s plugins.datasette-complex-plugin.configs '{"foo": [1,2,3], "bar": "baz"}'

Which would be equivalent to:

plugins:
  datasette-complex-plugin:
    configs:
      foo:
        - 1
        - 2
        - 3
      bar: baz

This is a bit different from a previous attempt I made at the same problem: https://github.com/simonw/json-flatten - that used syntax like foo.bar.[0]$int = 1 to specify an integer as the first item of an array, which is much more complex.

That previous design was meant to support round-trips, so you could take any nested JSON object and turn it into an HTML form or query string where every value can have its own form field, then turn the result back again.

For the datasette -s key value feature we don't need round-tripping with individual values each editable on their own, so we can go with something much simpler.

simonw commented 1 year ago

@simonw, FWIW, I do exactly the same thing for one of my projects (both to allow multiple configuration files to be passed on the command line and setting individual values) and it works quite well for me and my users. I even use the same parameter name for both (https://studio.zerobrane.com/doc-configuration#configuration-via-command-line), but I understand why you may want to use different ones for files and individual values. One small difference is that I accept code snippets, but I don't think it matters much in this case.

That's a neat example thanks!

rclement commented 1 year ago

If I may, the "path-like" configuration is great, but one thing would make it even greater: allowing the same configuration to be provided using environment variables.

For instance:

datasette -s plugins.datasette-complex-plugin.configs '{"foo": [1,2,3], "bar": "baz"}'

could also be provided using:

export DS_PLUGINS_DATASETTE-COMPLEX-PLUGIN_CONFIGS='{"foo": [1,2,3], "bar": "baz"}'
datasette

(I do not like mixing - and _ in env vars, but I do not have a better, easily reversible example at the moment)

FYI, you could take some inspiration from another great open source data project, Metabase: https://www.metabase.com/docs/latest/configuring-metabase/config-file and https://www.metabase.com/docs/latest/configuring-metabase/environment-variables

simonw commented 1 year ago

That's a really good call, thanks @rclement - environment variable configuration totally makes sense here.

Need to figure out the right syntax for that. Something like this perhaps:

DATASETTE_CONFIG_PLUGINS='{"datasette-ripgrep": ...}'

Hard to know how to make this nestable though. I considered this:

DATASETTE_CONFIG_PLUGINS_DATASETTE_RIPGREP_PATH='/path/to/code/'

But that doesn't work, because how does the processing code know that it should split on _ for most of the tokens but NOT split DATASETTE_RIPGREP, instead treating that as datasette-ripgrep?

I checked and - is not a valid character in an environment variable, at least in zsh on macOS:

% export FOO_BAR-BAZ=1
export: not valid in this context: FOO_BAR-BAZ
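
One convention other tools use is a double underscore as the nesting separator, which at least makes the token boundaries unambiguous - though mapping _ back to - in plugin names would still need its own rule. A sketch, not a decided design:

import os

def env_to_pairs(environ=None, prefix="DATASETTE_CONFIG__"):
    # DATASETTE_CONFIG__PLUGINS__DATASETTE_RIPGREP__PATH=/path/to/code/
    # splits cleanly on "__" - but datasette_ripgrep -> datasette-ripgrep
    # would still need a separate mapping rule
    environ = environ if environ is not None else os.environ
    pairs = []
    for name, value in environ.items():
        if name.startswith(prefix):
            key = ".".join(part.lower() for part in name[len(prefix):].split("__"))
            pairs.append((key, value))
    return pairs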

simonw commented 1 year ago

The other thing that could work is something like this:

export AUTH_TOKENS_DB="tokens"
datasette \
  -s settings.sql_time_limit_ms 1000 \
  -s plugins.datasette-auth-tokens.manage_tokens true \
  -e plugins.datasette-auth-tokens.manage_tokens_database AUTH_TOKENS_DB

So -e is an alternative version of -s which reads from the named environment variable instead of having the value provided directly as the second value in the pair.
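
In terms of the earlier prototype, -e would just resolve the named environment variable and then feed the result through the same pair-handling code - a sketch:

import os

def env_pair(key: str, env_var: str):
    # -e key ENV_VAR: read the value from the environment at startup,
    # then treat it exactly like a -s key value pair
    if env_var not in os.environ:
        raise ValueError(f"environment variable {env_var} is not set")
    return key, os.environ[env_var]

# env_pair("plugins.datasette-auth-tokens.manage_tokens_database",
#          "AUTH_TOKENS_DB")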

I quite like this, because it could replace the really ugly $env pattern we have in plugin configuration at the moment: https://docs.datasette.io/en/1.0a4/plugins.html#secret-configuration-values

plugins:
  datasette-auth-github:
    client_secret:
      $env: GITHUB_CLIENT_SECRET

simonw commented 1 year ago

Just spotted this: https://github.com/simonw/datasette/blob/17ec309e14f9c2e90035ba33f2f38ecc5afba2fa/datasette/app.py#L328-L332

https://github.com/simonw/datasette/blob/17ec309e14f9c2e90035ba33f2f38ecc5afba2fa/datasette/app.py#L359-L360

Looks to me like that second bit of code doesn't yet handle datasette.yml

This code does though:

https://github.com/simonw/datasette/blob/17ec309e14f9c2e90035ba33f2f38ecc5afba2fa/datasette/app.py#L333-L335

parse_metadata() is clearly a bad name for this function:

https://github.com/simonw/datasette/blob/d97e82df3c8a3f2e97038d7080167be9bb74a68d/datasette/utils/__init__.py#L980-L990

That @documented decorator indicates that it's part of the documented API used by plugin authors: https://docs.datasette.io/en/1.0a4/internals.html#parse-metadata-content

So we should rename it to something better like parse_json_or_yaml(), but keep parse_metadata as an undocumented alias for it to avoid unnecessarily breaking plugins.
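
The rename-with-alias could take this shape (a sketch, not the actual patch):

import json
import yaml  # PyYAML

def parse_json_or_yaml(content: str) -> dict:
    # Same behaviour as today's parse_metadata(): try JSON first,
    # fall back to YAML
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return yaml.safe_load(content)

# Undocumented alias so existing plugin code keeps working
parse_metadata = parse_json_or_yaml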