josephschorr opened this issue 1 year ago
One thing that would be really useful for our use case is the ability to merge definition
blocks across schema files.
Our schema has a relatively small number (~6) of definitions but a relatively large number (~60) of permissions. We're using the Cloud IAM model from the blog post as a template, so we have a bunch of permissions and relations that are independently defined and independently computed.
It'd be nice to be able to separate our schema by feature, which would see the same 6 definition
blocks in each of the schema fragments but different relations, permissions, and assertions in each.
I understand that this doesn't really follow normal import semantics and could end up being a relatively large complexity jump in a filetype where simplicity and duplication might be strongly preferable to flexibility and indirection, though.
@reify-tanner-stirrat Would you prefer a mixin style approach, ala:
mixin addsfoo {
  relation foo: bar
}

definition someresource {
  with addsfoo
}
or would you just prefer definitions in different files to "squash" together?
I don't know how different they are, but I'd prefer something like javascript object merge semantics to a mixin:
definition sub_def_1 {
  relation: sub_1
  permission permission_1 = sub_1
}

definition sub_def_2 {
  relation: sub_2
  permission permission_2 = sub_2
}

definition full_definition {
  ...sub_def_1
  ...sub_def_2
}
Duplicates could be overwritten according to the order of declaration to stick with the semantic, or else error out as a collision.
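For concreteness, under those spread semantics (and assuming no colliding names), the composed block above would presumably flatten to a single definition containing the union of the sub-definitions' members, keeping the original example's names:

```
definition full_definition {
  relation: sub_1
  permission permission_1 = sub_1
  relation: sub_2
  permission permission_2 = sub_2
}
```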
The problem with the above is that now sub_def_1 is also a valid type; that doesn't seem intentional?
Yeah, I was just thinking about that...
It isn't, and I wouldn't want that.
To be honest, most of our pain points are around 1. needing to cat our schema into a temporary file to validate it, because the validate command wants YAML that looks like a playground file, and 2. wanting to split tests across multiple files. I'm okay with the size and shape of our schema file as it is.
Well, that's why I proposed having a mixin keyword; we could still have them compose using ...whatever like you have in your example.
I think a mixin approach would allow us to decompose the schema but not necessarily the tests, because I'd need a subschema composed entirely of mixins to be able to test a subsection of the schema, so it doesn't feel to me like it's solving the right problem (for us).

> because I'd need a subschema composed entirely of mixins to be able to test a subsection of the schema
Not sure I follow... mixins won't impact testing. Tests will always run on the full schema.
Ah, so the test blocks can be imported/collated independently of the mixin/definition blocks under this model?
The test blocks are pulled in and run against the entire merged schema
Would that mean we can define test files outside of a single schema file and then just import them? I think that would serve us well.
You could have a .zed file that just imported a bunch of schema files and had tests, yes (or import a file with just tests).
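As a rough sketch, a tests-only file under this idea might look something like the following; the import statement and test-block syntax here are hypothetical, since neither had been finalized at this point in the discussion:

```
// tests.zed (hypothetical syntax)
import ./schema1.zed
import ./schema2.zed

// test blocks are collated and run against the entire merged schema
test admin_can_admin {
  relationships {
    organization:acme#admin@user:anne
  }
  assertions {
    assertTrue organization:acme#can_admin@user:anne
  }
}
```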
I'd be curious about import mechanics as well. What would we import exactly? Mixins or merging as @reify-tanner-stirrat mentions will be important. Maybe you can borrow from something like graphql-modules:
type Query {
  user(id: ID!): User
}

type User {
  id: ID!
  email: String!
}

extend type Query {
  me: User
}

type Mutation {
  login(username: String!, password: String!): User
  signup(username: String!, password: String!): User
}

extend type User {
  username: String!
}
Using the example of a JIRA-like product.
Questions that come up:
I think in the example below, we just need to specify file imports (no field selection) to construct our dependency graph.
zed build project.zed issue.zed

// project.zed
import ./issue.zed

definition project {
  relation member: user
  relation manager: user
  permission manage = manager
}

// I guess we could have a user.zed as well, but you get the idea
definition user { }

// issue.zed
extend definition project {
  relation all_issues_manager: user // Can edit all issues
}

definition issue {
  relation project: project
  relation assigned: user
  permission read = project->member
  permission edit = assigned + project->all_issues_manager
}
+1 to @reify-tanner-stirrat's object merge semantics.
We're working on a model inspired by the Google IAM blog post where each service provider has a subschema with a domain-specific hierarchy, plus a number of definitions that aggregate relations/permissions from all service providers (roles, rolebindings, projects, etc.). It would be super useful if each service provider's needs could be described in a single file, including the relations/permissions that need to be added to shared definitions.
I agree with @edorivai; this proposal could really take inspiration from GraphQL Federation. It is a nice way to have separation of concerns for different microservices.
The import is really just saying, "I need definitions from ..." I think the intent here is perfect but the arrow created by the import is backward.
Something like this would work:
// project.zed
definition project {
  relation member: user
  relation manager: user
  permission manage = manager
}

definition user { }
And then to reference and extend the above:
// issue.zed
import ./project.zed

extend definition project {
  relation all_issues_manager: user // Can edit all issues
}

definition issue {
  relation project: project
  relation assigned: user
  permission read = project->member
  permission edit = assigned + project->all_issues_manager
}
The aggregation or compile process would accept all of the files at once and should be able to check the validity of all of the references before combining them, as well as globally after they are stitched together. Redefinitions should not be allowed to avoid circumventing upstream definitions. Any changes should be done by the 'owning' team.
Essentially, this makes the extend keyword very simple to think about as it is just saying, "you've seen this before... please add this to it."
A quick pass to prevent circular definitions between imports and this is finished.
I am not a fan of having a source file do the aggregation, i.e. a top-level file containing all of the imports to cause them to be compiled together. Systems where files are more completely independent scale better, and cross-group coordination is not as critical.
> The import is really just saying, "I need definitions from ..." I think the intent here is perfect but the arrow created by the import is backward.
I like this proposal for extending definitions and adding new definitions. Could you give an example of how it would look for test relationships and assertions?
I think what is lacking is that, when using a type from another file, you should re-declare the relations that you are just "importing to be used" but not adding yourself (in GraphQL Federation this is done by marking them as @external). This way, you can test each schema file in isolation, as it is self-contained.
For all intents and purposes, an individual file should be "usable" without any import.
I really encourage you to look at https://www.apollographql.com/docs/federation/ for inspiration.
@enriquedacostacambio Unless I'm misunderstanding, that won't work. Consider:
File 1:

definition user {}

definition organization {
  relation admin: user
  permission can_admin = admin
}
File 2:

from .someorgfile import organization

definition resource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
If file 2 was declared without organization actually being present, tests cannot run, because we have no idea how can_admin is defined.
@josephschorr I was actually proposing the opposite: that you duplicate everything you want to use from the other file:
File 1:

definition user {}

definition organization {
  relation admin: user
  permission can_admin = admin
  // ... a bunch of other things ...
}
File 2:

extends definition organization {
  external relation admin: user
  external permission can_admin = admin
}

definition resource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
For imported relations this works fine; for imported permissions it could pull in a lot of the schema from the other file. If you want to avoid that, then the assertions need a way to specify something like "assuming can_admin is true", and then you can declare the permission names without specifying the rules behind them.
@enriquedacostacambio The downside of the above approach is that it breaks the ownership: the team supervising organization should be in charge of how can_admin is defined; that's why using an import feels more natural to me: it allows for different owners while also allowing for shared reuse.
@josephschorr correct, that's the drawback I was referring to. If you want to avoid it, this is the solution I was suggesting:
File 1:

definition user {}

definition organization {
  relation admin: user
  permission can_admin = admin
  // ... a bunch of other things ...
}
File 2:

extends definition organization {
  external permission can_admin: user
}

definition resource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
and then for testing purposes, you would treat the external permission as a relation:
organization:TEST_ORG#can_admin@user:TEST_ADMIN_USER
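Put together, a standalone validation file for File 2 might then look roughly like the following, using the playground-style YAML layout; the specific object and subject names are illustrative:

```yaml
# Hypothetical standalone test data for File 2. The external permission
# can_admin is seeded directly with a relationship, as if it were a relation.
relationships: |
  organization:TEST_ORG#can_admin@user:TEST_ADMIN_USER
  resource:some_resource#org@organization:TEST_ORG
assertions:
  assertTrue:
    - "resource:some_resource#view@user:TEST_ADMIN_USER"
```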
> for all intents and purposes an individual file should be "usable" without any import.
I may not be thinking this all the way through, but on its face, it appears this concept conflicts with distributed definitions in composed schemas. If the individual file is fully usable without any import, then it is fully self contained, and doesn't have any dependency on any other schema. But for a distributed definition, I want any "extends" file to be dependent on the imports that build the original definition.
Since we are dealing with YAML files, actually writing a file wouldn't care about the dependencies, it is just text. When working in the playground or testing the schema, any updates to the example resource definition below require the organization.zed file to be imported for the schema to have its dependencies.
File: Organization.zed

definition user {}

definition organization {
  relation admin: user
  permission can_admin = admin
}
File: Resource.zed

imports ./organization.zed

definition resource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
@amadard I'm sorry, I'm probably doing a poor job of explaining. In the approach I'm suggesting, you don't even need the organization.zed file in order to load and test the resource.zed schema; you can use it in standalone mode as long as the interpreter drops the extends and external keywords and interprets external permission as a relation. With this, microservice teams could independently work on and test their own subset of the graph. When SpiceDB wants to load the composed schema it would not do this, but instead just check that the overlapping definitions match.
These are not my original ideas; I'm adapting Apollo Federation ideas, and their explanations are far better than mine, so you might want to check it out.
@enriquedacostacambio I did some reading, and I think I'm understanding the concept. I'm comfortable with having the composed schema files dependent on each other, since at the end of the day, they are all going to be stitched together when getting imported into SpiceDB. It requires the writer of the composed schema to have visibility into what definitions/relations/permissions exist in the imported file, but it simplifies how the composed files are written and keeps it closer to our current syntax.
@enriquedacostacambio What is your concern around imports? As @amadard mentioned, the schema will have to be combined before being written, so having explicit dependencies should reflect the actual result.
@josephschorr No concern; I just think the composition approach is better in terms of ownership. For example, the sub-schemas could live in their respective microservice repos without even needing references to other services' sub-schemas, and they would be testable in isolation by that repo's tests. Imports force a monorepo layout, which might or might not be the desired architecture for SpiceDB users.
@enriquedacostacambio That's actually why the proposal supports git-based imports:
from github.com/mycomp/myauthzteam@v2 import user
That way, you can define your import based on another repository entirely (and properly versioned)
@josephschorr I'm not understanding one part of the zed build process. If I have multiple "sub-schemas" that import a single primary file, but don't reference each other, how does the zed build know all of the files that need to be combined to get the final output?
E.g.
File: Organization.zed

definition user {}

definition organization {
  relation admin: user
  permission can_admin = admin
}
File: Resource.zed

imports ./organization.zed

definition resource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
File: UnrelatedResource.zed

imports ./organization.zed

definition unrelatedresource {
  relation viewer: user
  relation org: organization
  permission view = viewer + org->can_admin
}
The final build needs Organization.zed, Resource.zed AND UnrelatedResource.zed. If I were to build with just Resource.zed, it knows to import Organization.zed, but has no knowledge of UnrelatedResource.zed. Does the build need the full list of zed files to import to create the final schema?
You'd define an overall "permissionsystem.zed" and import those files or give them all to the build command
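Such a top-level aggregator could be as simple as a file whose only job is to pull in every sub-schema; the file name and import syntax here are illustrative:

```
// permissionsystem.zed (illustrative)
import ./organization.zed
import ./resource.zed
import ./unrelatedresource.zed
```

An equivalent alternative, per the comment above, is to pass all three files to the build command directly.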
> Does the build need the full list of zed files to import to create the final schema?
Yes, same as any other compilation process
@enriquedacostacambio Thoughts on the import semantics vs the externs?
@josephschorr I still think that extending leads to a cleaner architecture. For example, if I have C imports B imports A, where most likely A, B and C each map to an individual microservice, the extensions approach would let me test C independently, without having to test the whole chain (something I would defer to an end-to-end test). My tests can focus on narrower concerns, making the whole permissions system more testable. Plus the tests that my microservices run during build won't even need to have the files for A and B available (or know their appropriate versions), simplifying my setup. I think we can come up with more benefits that stem from the sub-schema files being usable independently, deferring the need for all files to composition-time, when you actually need to run SpiceDB with the resulting combined schema.
@enriquedacostacambio if C is included in B, then it would be testable independently, since the dependency is in the other direction?
Can you come up with an example where an import of a child-type is not sufficient?
Oops, I reversed the direction of the imports. I'll fix the comment, but you get the idea.
@enriquedacostacambio Gotcha, just making sure :)
So in the case where microservice C depends on B and A, why is the import of the explicitly versioned dependency "heavier" than externs? IMO the safety afforded outweighs the extra work the compiler has to do to check out the versioned bundle.
@josephschorr It is heavier because, when using extensions, I can test C in isolation by giving it a provider state (in the sense used by Pact in contract-driven testing) that defines the values for the relationships/permissions that are external to C, which act as the "contract" between C and B. Using imports, tests in the C domain are forced to know and set up the provider state for all relationships in the graph: A & B. This doesn't preclude the need for e2e tests on the composed schema, but it lets you have more modular development & (unit?) tests.
A question regarding aliasing or packages: E.g., suppose two different teams define a resource called user:
// Import statement allows for importing of object type definitions and caveats from other .zed files, locally or via git
from "./service1" import user
from "./service2" import user

// Schema can reference an imported type
definition resource {
  relation viewer: user // Ambiguous which user we are talking about here
}
It would be great if we could either:
a) Alias (from "./service1" import user as service1_user)
b) Qualify via namespaces (leave the import as is), but with usage looking like relation viewer: service1.user

Any thoughts/suggestions? An alternative would be to enforce uniqueness of names on the client side, but qualified/aliased imports would be really nice.
> It would be great if we could either: a) Alias (from "./service1" import user as service1_user) b) Qualify via namespaces (leave import as is) but usage looks like relation viewer: service1.user

Supporting as service1_user seems pretty straightforward and powerful.
@tin-canva one thing that came up in discussion is that definition names aren't an internal implementation detail, but rather a part of the external API of your schema, because the name of the definition is the name that you're using when you write a relationship. This means that aliasing can't really be used other than as a local convenience without potentially breaking an assumption on the part of the author of the imported schema fragment, and that definition names still need to be globally unique within the system.
Background

Schema in SpiceDB is written as a single unit via the WriteSchema call. This is an important requirement for type checking and validation, but comes with the cost of making it harder for different teams using the same unified permission system to manage changes to their own portions of a schema.

Furthermore, the current file format passed to tooling such as the Playground, zed validate and the https://github.com/authzed/action-spicedb-validate GitHub Action only supports a single set of test relationships and associated assertions/expected relationships, making the testing of different scenarios much harder.

Proposed Solution

Define a new modified file format for .zed files that allows for the definition of both schema data (caveat and definition), as well as a set of validation tests, each with their own relationships, assertions and expected relationships. This new file format will also support import-ing of other .zed files via a new import keyword.

To create the "final" schema for WriteSchema, a new zed build command will be supplied that performs import resolution, type checks the combined schema, runs the validation tests (if any) and, finally, returns or saves the combined schema to be sent to SpiceDB.

Proposed File Format Example
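A minimal sketch of such a file, consistent with the description above; the test-block keywords and layout are an assumption, not a finalized format:

```
// myfile.zed (illustrative)
from .orgschema import organization, user

definition document {
  relation org: organization
  relation viewer: user
  permission view = viewer + org->can_admin
}

// each test carries its own relationships, assertions and
// expected relationships, and runs against the merged schema
test viewer_can_view {
  relationships {
    document:readme#viewer@user:anne
  }
  assertions {
    assertTrue document:readme#view@user:anne
  }
}
```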
Proposed new and updated commands for zed

zed build

zed build myfile.zed will run a full import, type check, validation and build process, producing a combined schema to be applied via WriteSchema, with all imports resolved and removed, and all test data removed.

zed validate

zed validate myfile.zed - runs all validation tests found, recursively, through the schema file and any imported dependencies
zed validate myfile.zed -run SomeSortOfThing - runs only those validation tests whose name matches SomeSortOfThing
zed validate myfile.zed -run SomeSortOfThing/assertions/document:somedoc - runs only the single assertion

zed docgen

zed docgen would produce Markdown or another form of documentation from the doc comments found in the schema (see: https://github.com/authzed/spicedb/issues/735)

Imports

Imports will support two modes:
from .somelocalfile import ..., which is unversioned
from somegithost.com/some/repo@tagname import ..., which will always use a specific tag to pull in that schema