ziglang / zig

General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
https://ziglang.org
MIT License

Version selection system for the official package manager #8284

Open kristoff-it opened 3 years ago

kristoff-it commented 3 years ago

Accepted Proposal

Premise

Go's minimal version selection

Some reading about Minimal Version selection (including some criticism of it):

https://research.swtch.com/vgo (contains multiple articles, all relevant)
https://peter.bourgon.org/blog/2020/09/14/siv-is-unsound.html

My takeaway: it's weird and clunky, but like many things in Go, while you might disagree with the solution, it points at some very real truths about dependency management.
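For readers who don't want to wade through all of vgo: MVS resolves each dependency to the maximum of the *minimum* versions declared anywhere in the dependency graph, never silently to the newest release. A minimal sketch of the idea (the module names and requirement table are hypothetical, and this is nothing like Go's actual implementation):

```python
# Minimal Version Selection (MVS), roughly as described in research.swtch.com/vgo:
# walk the dependency graph and, for each module, keep the maximum of the
# declared minimums -- never upgrade beyond what some module actually asked for.

def mvs(root, requirements):
    """requirements maps (module, version) -> list of (module, min_version)."""
    selected = {}  # module -> chosen version (as a tuple)
    stack = [root]
    while stack:
        mod, ver = stack.pop()
        if selected.get(mod, (0,)) >= ver:
            continue  # we already satisfy an equal-or-newer minimum
        selected[mod] = ver
        stack.extend(requirements.get((mod, ver), []))
    return selected

# Hypothetical dependency graph: two modules need different minimums of "c".
reqs = {
    ("app", (1,)): [("a", (1, 1)), ("b", (1, 0))],
    ("a", (1, 1)): [("c", (1, 2))],
    ("b", (1, 0)): [("c", (1, 3))],  # b asks for a newer c than a does
    ("c", (1, 2)): [],
    ("c", (1, 3)): [],
}
result = mvs(("app", (1,)), reqs)
print(result["c"])  # (1, 3): the highest requested minimum, not the newest release
```

The key property is reproducibility: the selection depends only on the declared requirements, so a new upstream release cannot change what an existing build resolves to.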

Distro maintainers

Recent discussions both in favor and against what distro maintainers do:

https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packagers-security-nightmare/
https://utcc.utoronto.ca/~cks/space/blog/tech/BundlingHelpsSoftwareAuthors

My takeaway: distro maintainers want to manage applications in a way that sometimes makes developers uncomfortable, and there is tension between the system package manager and the package manager that the language provides (npm, cargo, ...). Distro maintainers expect software to be high quality and stable enough to be malleable to their operations, but software isn't always, and when things break everyone gets angry. Users want their systems to be both stable and up to date when it comes to security. Distro maintainers want to fulfill that wish, and they also wish it took little effort to upgrade a dependency for all packages that depend on it.

Zig's goals relevant to package management

My takeaway:

Obviously there is tension between all these points. It's hard for things to be robust if you have a ton of dependencies, but anything non-trivial requires a stupid amount of duplicated effort if you decide to avoid depending on other people's code. MVS, or even better/worse, vendoring everything, helps with stability, but then the entire development model goes against what distro maintainers expect to be able to do: upgrade your deps when needed (for security reasons or whatever).

On top of that, there is a fact to consider that is somewhat unique to Zig: there is a very high bar to clear for a Zig library to be considered high quality. It needs to work on a vast array of platforms, be compatible with various memory allocation schemes, and, if it deals with I/O, it also needs to account somehow for blocking and non-blocking mode (the same is true for anything that touches on concurrency, not just I/O). That's a pretty big design space to cover, and definitely not something that your average lib on GitHub usually has to deal with.

The proposal

  1. For packages < 1.0, the package manager applies MVS, prioritizing stability and reproducibility of builds over everything else.
  2. Packages >= 1.0 can only depend on other >= 1.0 packages.
  3. For packages >= 1.0, the package manager selects the highest-available compatible version.

In all these cases care should be taken to find a balance in terms of what minor/patch version restrictions a package can declare.

This is an incomplete idea that should be fleshed out by looking at the small details, but, as a starting point, it aims to model the duality of software development: software is unstable, and really offering a 1.0 experience in terms of stability and backward compatibility requires a drastic shift of mindset.
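The two regimes in the proposal could be sketched roughly as follows (a hypothetical helper, not a design for the actual package manager; versions are (major, minor, patch) tuples):

```python
# Sketch of the proposal's two-regime version selection: pre-1.0 dependencies
# resolve pessimistically (MVS-style, to the declared minimum), while 1.0+
# dependencies resolve optimistically to the newest release sharing the
# required major version. Names and data below are invented for illustration.

def select(required, available):
    """required: the (major, minor, patch) a package declared.
    available: list of published (major, minor, patch) versions."""
    major = required[0]
    if major == 0:
        # v0 regime: stability first, keep exactly the declared minimum
        return required
    # v1+ regime: highest available version within the same major
    compatible = [v for v in available if v[0] == major and v >= required]
    return max(compatible)

releases = [(0, 3, 0), (0, 4, 1), (1, 0, 0), (1, 2, 5), (1, 3, 0), (2, 0, 0)]
print(select((0, 3, 0), releases))  # (0, 3, 0): pre-1.0 stays pinned
print(select((1, 0, 0), releases))  # (1, 3, 0): v1+ floats up within major 1
```

Rule 2 (v1+ only depends on v1+) is what makes the optimistic branch safe to apply transitively: every package reachable from a v1+ root is promising semver compatibility.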

"But won't this mean that basically nobody is going to bump up major version to 1 ever?"

Maybe, but if nobody wants to put in that kind of effort then wouldn't that be a lie anyway?

"What is the point of forcing v1+ packages to only depend on other v1+ packages?"

That's the most important part of the design. In a general sense, how can something be considered stable if it depends on unstable code? But more specifically, it's what makes it possible to have "version fluidity" among your dependencies. If we accept that v1+ software should be amenable to what distro maintainers do, then this property must be preserved across the entire dependency chain. The nice thing is that it also makes sense in terms of other Zig goals, namely robustness and maintainability.

In other words, stability is a transitive property in my opinion.

"What if somebody erroneously introduces incompatibilities in a v1.x release?"

Obviously, something will eventually go wrong, and we should have some kind of escape hatch. This is why there should be a way of specifying "upper" bounds on minor/patch versions, but it should not be something that one routinely does. Go, for example, only allows specifying restrictions at the topmost level of the dependency chain (i.e., when you are building the final application); maybe that's something we should take inspiration from for our v1+ model, even though we don't use MVS there. I do believe that allowing all kinds of weird restrictions everywhere is bad in practice and fundamentally at odds with this model.

"That's a ridicolously high bar to clear, how am I even supposed to make changes to my v1+ package without risking destabilizing everyone that depends on my package?"

  1. Keep the scope and API surface of your package small.
  2. Find out who your most important dependents are and either set up a CI that runs their tests against your changes or coordinate with them so that they do it. You need a big, serious testing routine anyway to test different platforms, etc.
  3. Get to a point where you can consider your package "done" and only support bug fixes at that point.

Alternatively:

  1. Stay at v0.

mattnite commented 3 years ago

I will note here that Zig's native package system is extremely robust and well thought out; if we need to have v1.0.0, v1.0.1, and v1.0.2 in our dependency tree, it's technically trivial and something we don't have to worry about (some explanation here).

  1. Packages >= 1.0 can only depend on other >= 1.0 packages.

To me this is the most interesting part of the proposal: it adds more inferred meaning to a library, because a library or project at >= 1.0 is not only supposed to be high caliber, but also to depend on high caliber software. I'm down.

For packages < 1.0, the package manager applies MVS, prioritizing stability and reproducibility of builds over everything else. For packages >= 1.0, the package manager selects the highest-available compatible version.

I would assume that you'd want the latest and greatest software pre-1.0; things might break here and there, but in general that's expected of <1.0 software. At the very least, users are going to want the highest compatible version some/most of the time, I think. For post-1.0 software my first instinct was that MVS works well there, slower and more reliable, but I think you're correct: if a library already has a disciplined team putting in good effort (as proposal point 2 is trying to ensure for >=1.0), then you should be able to trust that upgrades will just work.

I really like what you're putting down. If anything needs to change in this proposal, I think it comes down to the inclusion of MVS, and that decision comes down to the experience developers are going to want in the <1.0 world: do they want things to reliably build with older versions, or do they want the latest of a lib and the ability to easily downgrade when a build fails?

ktravis commented 3 years ago

stability is a transitive property in my opinion

Love this!

I would assume that you'd want the latest and greatest software pre 1.0, things might break here and there, but in general that's expected of <1.0 software. At the very least users are going to want highest compatible version some/most of the time I think.

Agreed, I think the priority pre-1.0 sounds backwards - I'd expect less stability/reproducibility (this would be an incentive to promote to stable 1.0+).

For packages >= 1.0, package manager selects the highest-available compatible version

To clarify, is the compatibility check here based upon conforming to the standard of semantic versioning (the way Go works), where anything later with the same major version is presumed to be compatible, or is there another method in mind here?

Go for example only allows specifying restrictions at the topmost level of the dependency chain

Just anecdotally: in most of the big Go projects I've worked with, this has meant that consumers of a module with such restrictions just have to discover them (usually by trying to add the module as a dependency, seeing that there is a, usually transitive, dependency that is incompatible, and then remembering to check the go.mod of the module being directly referenced) and copy them over into their own go.mod, now with even less context about why and when they could be removed. In the best-case scenario, the dependency author will include a comment with a link to an issue that describes why the restriction was necessary, which the user can start to track. If this module is used by another, the chain continues further downstream. In some sense I think this is good because it discourages the use of such restrictions, but it seems like the least of the disincentive falls on the original module author who introduces the restriction and propagates it, when really they have the most responsibility to avoid that situation.

matu3ba commented 3 years ago

Packages >= 1.0 can only depend on other >= 1.0 packages.

I think this kind of guarantee is very useful, as it is a long-term commitment to trusting the package content. However, this is a very hard split between packages, which can be bad for proper testing by package users.

Say for example a package has fundamental flaws and there is a non-stable replacement. How should a package maintainer be flexible enough to switch to the unstable one? Abolishing the code base with a hard fork and new name? Having a separate version number? Waiting years until the package becomes stable?

Take another example: the widely used nix crate. They don't want to commit to a stable interface, because that would not allow them to change things. However, maintainers (usually) only use a very small amount of the API, and fixing the API is very easy. So is such a package really so unstable as to be unusable for any >= 1.0 package? It takes years to stabilize such APIs properly.

I think the intention is good, but this should be a separate, required piece of package information, because the version number is not enough to estimate the guarantees that package maintainers are giving. Maybe asking the community would give you a more complete picture: what are your criteria for choosing and changing package dependencies? How should these be communicated in a clear, concise form? How should deprecated, broken (what is broken?), or unmaintained packages be communicated?

I don't think a version number is enough. Rust therefore has a separate auditing system and surfaces information on crates.io. It would make a lot of sense to describe all edge cases and how to deal with them before committing to a non-changeable versioning scheme. (Or do you want to make this changeable for iteration? That could mean a lot of churn.)

And finally, to keep in mind: how do you want to deal with other-language interop dependencies (mostly C)? Ignore them? Can they be considered "stable"?

jayschwa commented 3 years ago

how can something be considered stable if it depends on unstable code? [...] In other words, stability is a transitive property in my opinion.

I disagree. It's entirely possible and reasonable for a v1 package to preserve its own API stability but depend on a v0 package for some internal implementation detail. The main downside of a v0 transitive dependency is that the package manager will have no latitude to automatically select a different-but-compatible version, because that cannot be assumed in v0.

It seems like this proposal may be conflating "API stability" and "robustness", which are independent attributes. It's possible for a v0 package to be rock solid and for a v1 package to be filled with bugs and landmines. I would agree that robustness (not API stability) is transitive, but that is not something that can be deduced from a semantic version. You need to count GitHub issues and FIXME comments for that. 😜

https://peter.bourgon.org/blog/2020/09/14/siv-is-unsound.html

I hadn't seen this before, but I was nodding while reading it. I generally like Go's package management, but semantic import versioning drives me nuts. I don't understand how Go can claim to support semver when a new major version effectively means renaming the package. I understand the motivation behind it, but I wish they had made it so that import paths only need to include the major version if it would otherwise be ambiguous (e.g. a package imports both v1 and v2 of a dependency).

kristoff-it commented 3 years ago

From @mattnite:

I would assume that you'd want the latest and greatest software pre 1.0, things might break here and there, but in general that's expected of <1.0 software. At the very least users are going to want highest compatible version some/most of the time I think.

From @ktravis:

Agreed, I think the priority pre-1.0 sounds backwards - I'd expect less stability/reproducibility (this would be an incentive to promote to stable 1.0+).

I currently maintain one library (okredis) and one application (bork). Both can be considered unstable by most definitions, and bork itself depends on other unstable software (okredis has no deps for now). In the case of bork, having the package manager be optimistic and select a version of a dependency higher than what I've selected has an overwhelming chance of breaking the build, and I don't want to deal with bug reports caused by that. This is why the package manager should prefer stability over everything else. That said, by my own admission, bork is not currently being maintained as "high quality" software, and that's why I aggressively vendor my deps: I just want bork to build successfully without having to deal with bug reports caused by a package manager.

If I think about the same thing from the perspective of okredis, the same kinda applies. I have a test suite, but it's not extensive enough to cover the full API (increasing the chance of introducing API breakages even when meaning to make only backward-compatible changes). The day I add a dependency on, say, iguanaTLS, if Alex doesn't want to offer "high quality" support for it, distro maintainers should not expect to be able to mess with my deps and not encounter dragons.

When software is inherently unstable, IMHO you don't want the pkg manager to concoct mystery mixes for no reason. This model is not ideal, of course, but it tries to make the most of the situation and properly communicates that this class of software is imperfect, that people who want dependable software should avoid depending on it, and that if a distro decides to package it, they are also buying into the extra effort required to make it work with all its imperfections.

From @ktravis:

To clarify, is the compatibility check here based upon conforming to the standard of semantic versioning (the way Go works), where anything later with the same major version is presumed to be compatible, or is there another method in mind here?

I was thinking of vanilla semver, yes (for v1+, to be clear). One thing to keep in mind, though, is that in this model, for v1 packages, adding upper-limit restrictions on minor/patch versions of a package goes against the model, while in other ecosystems like npm or cargo, extensive use of such constraints is not considered inherently wrong.
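For illustration, the vanilla-semver compatibility check discussed here can be sketched as below (a simplified parser; real semver also defines build metadata and multi-part pre-release precedence, which this ignores):

```python
# Compatibility under vanilla semver, as used for v1+ selection in the proposal:
# two versions are presumed interchangeable only when they share a major
# component >= 1 and neither is a pre-release (per the semver spec, v0 and
# pre-releases promise no API stability).

def parse(version):
    """Parse 'X.Y.Z' or 'X.Y.Z-prerelease' into ((X, Y, Z), prerelease)."""
    core, _, prerelease = version.partition("-")
    major, minor, patch = (int(p) for p in core.split("."))
    return (major, minor, patch), prerelease

def compatible(installed, candidate):
    (imaj, *_), ipre = parse(installed)
    (cmaj, *_), cpre = parse(candidate)
    if ipre or cpre:
        return False   # pre-releases are never assumed compatible
    if imaj == 0 or cmaj == 0:
        return False   # v0 makes no stability promise
    return imaj == cmaj  # same major >= 1: presumed compatible

print(compatible("1.2.0", "1.9.3"))        # True: same major, both stable
print(compatible("1.2.0", "2.0.0"))        # False: major bump is breaking
print(compatible("0.4.0", "0.4.1"))        # False: v0 regime
print(compatible("1.2.0", "1.3.0-alpha"))  # False: pre-release
```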

From @matu3ba:

Say for example a package has fundamental flaws and there is a non-stable replacement. How should a package maintainer be flexible enough to switch to the unstable one? Abolishing the code base with a hard fork and new name? Having a separate version number? Waiting years until the package becomes stable?

If you have no other option than depending on a new unstable package, you have to deprecate your current package (assuming the fundamental flaw you depend on is truly fundamental), hard fork, and stay at v0 until your new dependency goes v1.0. I get that this seems extreme (it is), but at the same time you presented an extreme situation. You should probably take care to avoid depending on a "fundamentally flawed" package and deciding to go v1 while depending on it. We're talking about a pretty abstract situation, so it's hard for me to understand the limits of what's possible in this context, but there might be other solutions, like keeping a soft fork of the unstable lib, bumping it to v1, and offering high quality maintainership of its limited set of features while you wait for the main project to reach maturity and eventually switch back to it.

From @matu3ba:

Take another example: the widely used nix crate. They dont want to commit to a stable interface, because this does not allow them to change things. However, maintainers (usually) only use a very small amount of the API and fixing the API is very easy. So is such a package really that unstable to be unusable for any >= 1.0 package? It takes years to stabilize such APIs in a proper way.

If a distro maintainer can't bump the version of that crate without breaking all packages that depend on it, then according to this model it should stay at v0, and consequently all packages that depend on it should stay at v0. If people wanted to enable non-breaking upgrades, they could have a deprecation scheme for APIs they want to get rid of and bump the major version every 2-3 releases. That said, it's perfectly fine to not be willing to go the full length; effort is not infinite. You just stick to v0 in that case.

From @matu3ba:

I think the intention is good, but this option should be a necessary separate package information, because the version number is not enough to estimate the guarantees that package maintainers are giving. Maybe asking the community would give you a more complete picture: What are your criteria for choosing+changing package dependencies? How should these be communicated in a clear, concise form? How should deprecated, broken (what is broken?) or unmaintained packages be communicated? I dont think a version number is enough. Rust has therefore another auditing system and gives information on crates.io. It would make alot sense to describe all edge cases and how to deal with them, before committing to a non-changable versioning schema. (Or do you want to make this changable for iteration? This could mean alot of churn.)

That's a good point. Maybe semver alone is not enough, or maybe it is, but the way the general programmer population thinks about it is too different compared to what I'm trying to model in this proposal. I am completely open to communicating intent through other means, and we should definitely spend some time thinking about these details. In general, keep in mind that I am only making a proposal about version selection and not the entire package manager system. For now my main goal is to get feedback on the non-standard ideas behind this proposal.

From @jayschwa:

I disagree. It's entirely possible and reasonable for a v1 package to preserve its own API stability but depend on a v0 package for some internal implementation detail.

I understand this is the case in "normal" package ecosystems. In this proposal I want to discuss a different model that assumes that a stable package has to be amenable to distro maintainer operations. The idea is that the end user ecosystem (the distro) wants to be able, just to make an example, to take your software and update some dependencies for security reasons. If one of those dependencies happens to be the v0 one, then pain happens. Distro maintainers make a mess of your application, end users experience crashes and whatnot, and you start receiving bug reports that you can't reproduce easily.

From @jayschwa:

The main downside of a v0 transitive dependency is that the package manager will have no latitude to automatically select a different-but-compatible version, because that cannot be assumed in v0.

That's a reasonable point and it's entirely my fault for not having clarified it adequately in my initial post. As I mentioned early in this post, I'm presenting a model and maybe it doesn't map well to semver.

From @jayschwa:

It seems like this proposal may be conflating "API stability" and "robustness", which are independent attributes. It's possible for a v0 package to be rock solid and for a v1 package to be filled with bugs and landmines. I would agree that robustness (not API stability) is transitive, but that is not something that can be deduced from a semantic version.

Under this model the two properties are pretty much intertwined based on the types of operation that we expect v1/stable/high-quality packages to be amenable to. Sure, the version system cannot ensure high quality software, but it can clarify expectations. If you think we should use a different system to communicate this, I'm open to ideas.

matu3ba commented 3 years ago

@kristoff-it

After thinking about this a bit longer, I think this works for any smaller software project in Zig, as it is very feasible to support all platforms sufficiently well.

Maybe I am just overly concerned about this, but I have an example as potential consequence:

Example: developer D has a shiny software tool T in Zig that relies on platform-specific abstractions to have sufficient performance. D committed to stable versioning. Now user U wants the tool on their platform and creates pull requests against an unstable abstraction for U's platform.

Developer D can now

  1. ask U to change U's platform abstraction to stable versioning (giving false impressions).
  2. deny, and U forks the project to have the tool working on their platform.
  3. fork the project and link to the new one to begin again with an unstable versioning system. D must deal with all the fallout of unhappy users that rely on D's shiny tool T.
  4. create a wrapper or patching system around their own project, with hacks to get around the version system, to support user U and other unstable users. D must maintain 2 CIs and a lot of things will get duplicated.

Outcome of 1: ecosystem quality degrades, with packages making false claims.
Outcome of 2: U must maintain a fork of the project.
Outcome of 3: packages stay at unstable versioning.
Outcome of 4: D has much more work.

From my point of view, the absolute stability guarantee might not give sufficient flexibility to support unconventional platforms that do not have sufficient developers. It might corner Zig somewhere it does not want to be: "it needs to work on a vast array of platforms, be compatible with various memory allocation schemes, and if it deals with I/O it also needs to account somehow for blocking and non blocking mode (and actually the same is true for anything that touches upon concurrency, non just I/O). That's a pretty big design space to cover and definitely not something that your average lib on GH usually has to deal with."

Maybe we should let developers be explicit about their stability guarantees for different platforms, because platforms might have different ecosystem coverage/support? We already have the compiler's target triples, so we could utilize those.

I will be playing with target triples to find a useful way to combine them. We probably don't want to present a giant table of symbols to the user/developer.

komuw commented 3 years ago

https://jayconrod.com/posts/118/life-of-a-go-module (Author works on Go/modules at google)

matu3ba commented 3 years ago

@kristoff-it Instead of making a fixed rule requiring >=1.0 packages to depend only on >=1.0 packages, I would prefer explicit metrics that guide developers. The idea is that audits (I called them "security" for now) also check the validity of a project's self-description of the basic information, to remove clutter, e.g. for people maintaining indexes. Something like a list of packages with outdated or false information might work there, and after a while that information gets pinned to the "index of trustable packages" or a negative list.

What is wrong with semver?
-> does not reflect different use cases properly
-> support changes over time, and platforms will get deprecated
-> only works for properly working OS abstractions (mainstream platforms)
-> how do we want to make non-mainstream platforms discoverable?

CHECKABLE METRICS DEPENDENCY CONTROL

  1. no external dependencies
  2. no moving dependencies
  3. moving dependencies

SECURITY

  1. project history+usage,
  2. trustworthiness of users,
  3. self-information of projects are factual

description:

  1. more than 5 audits from trustworthy users, used by at least 1 nonforked project with open source codebase > 10k LOC. (What is realistic to allow security reviews?)
  2. user with activity every 2 weeks on a project and commit + issue history of > 6 month on a project
  3. information must be updated in the next release

SELF-ESTIMATION on each release (may be documented or not)

TESTS

  1. core dev team tested for at least 1 month (uncheckable)
  2. community tested for at least 1 month (uncheckable)
  3. CI/externally tested => for at least 1 month
  4. should compile on the platform

MAINTENANCE

  1. implementation formally verified (on platform) commit hash + verification code (with hash)
    • free accessible article + journal is linked
  2. each component is tested for edge cases
  3. program is fuzzed
  4. problem domain is complex

rtfeldman commented 3 years ago

I'm intrigued by the "Packages >= 1.0 can only depend on other >= 1.0 packages" idea, although I wanted to raise a consideration: this rule would in some cases create pressure for people to go 1.0 prematurely.

Say a popular library is at 0.x and they depend on my 0.x package. They are all ready to go 1.0...except for that pesky 0.x dependency on my package. I'm now holding up the whole show! Lots of people like that popular package and are anxious to see it go 1.0 so they can use it in their 1.0+ projects, and here I am preventing that.

That's a lot of pressure for me to go 1.0 even if I'm not ready! (Maybe there's even talk of publishing a competing fork of my package which is the same except listed as 1.0 so the other package can use it at 1.0; that would make me sad after all the work I put into this package.) Supposing I give in to the pressure - now I've published a 1.0 package I felt was not yet high quality and stable enough to merit that version number. I only did it because of pressure created by the "1.0+ can only depend on 1.0+" rule, which means in this case, that rule actually weakened the assumption of "1.0 means high quality and stable."

Not saying this is the end of the world or anything, but I do think it's worth considering!

kyle-github commented 3 years ago

As others have noted there are two things here: 1) API stability, 2) internal code stability/lack of bugs.

These are orthogonal. That said, I think what @kristoff-it is proposing here is perfectly in line with Zig's opinionated nature on other fronts (such as formatting) and solves one really key issue: how do distro/platform maintainers know when they can upgrade packages safely.

For safe upgrading, all you need is the API guarantee. There will always be bugs. If you narrow down this proposal to just saying:

This gives distro/platform maintainers the freedom to upgrade x.y to x.z without (as much) worry. That is huge.

I think all the notes about "but it is possible that a package is stable but is version 0.x; it is just that the API might change!" are missing the point. This is not for the package maintainer's ease of use; it is for package aggregators and package users. And I would argue that all packages have bugs; some are worse than others. But if I have a guarantee about the package API, that package can be updated along the lines of vanilla SemVer, as @kristoff-it shows, without breaking my code. Thus you can dig your way out of bugs by upgrading without breaking every dependent package.

If my package depends on a 0.x package then mine is not stable. Period. Because the author of that dependency has decided that it is not stable. It does not matter if it is perfect code. It is not stable because the API is not stable.

If you build a package that depends on a package in, for instance, C that is a 0.x version, that is outside the Zig packager manager's domain. And you have the choice of making your Zig wrapper 0.x as well or guaranteeing an API to your users and taking the hit if the underlying package changes it by doing the API translation yourself to keep the breakage to a minimum. Or by updating your package to another major version.

The one change I would make to @kristoff-it's well thought out proposal is for 0.x packages: I would say that they only use the exact version they were built with for any 0.x dependency. Not lower. Not higher. You cannot trust any 0.x package not to break in either direction.

Code quality is orthogonal to the API presented. They have nothing to do with each other. This proposal puts SemVer firmly in charge of API guarantees which provides distro and platform maintainers with powerful guarantees about what is safe to update and what is not. If someone has a 0.x API then they are saying to the world, "do not depend on my code!" very loudly.

This is a new ecosystem. It can be opinionated. Just like with formatting, if you decide not to follow the tooling, you are on your own.

Just my $0.02. I have dealt with this for industrial systems, and 0.x packages were the bane of my existence. My own open source package has had one breaking API change in nine years because it is in a field where software is rarely upgraded. And I got lots of flack when I changed the API. Sometimes years later.

mattnite commented 3 years ago

The vibe I'm getting from this "v1 depending on v1" rule is either full agreement or cautious optimism which is great. As a way to gather some real data I've added the policy to astrolabe.pm.

jayschwa commented 3 years ago

Packages >= 1.0 can only depend on other >= 1.0 packages.

With no central package publisher, how will this be enforced in practice? Will the toolchain refuse to fetch dependencies and compile a project if the rule is violated?

Should pre-releases (e.g. vX.Y.Z-alpha) be treated similarly to v0 versions? According to the spec, they should not be considered API stable either.

For packages >= 1.0, the package manager selects the highest-available compatible version.

How will reproducible builds be handled? A "version lock" file similar to Node's npm or Rust's cargo?

kristoff-it commented 3 years ago

@rtfeldman thank you for your comment. It's a big question and I don't know the answer. I think you're hinting at something that can only be discovered once people use the system, I fear. The best I can say is that this seems to me something that can be balanced with community culture. That said, there are probably many more problems like this that would be nice to be able to observe before committing to a design.

@kyle-github thank you, I think you explained the idea better than I did. As for the version selection pre 1.0, one important point is what to do when the same package is used by two or more other packages in the chain of dependencies. Not doing any resolution would bloat the final executable or force a lot of manual work, which might even be a necessity (without any automation) when packages end up exposing types that get passed around. It's a part of the design where I don't feel like I've done all my due diligence, but in general we should be aware that, given how I presented the idea in the first post, we are squeezing in v0 both packages that are just getting started, and packages that are much more complex but that have no intention of offering the type of guarantees that distro maintainers need. This second type of package IMO is the closest to your average package on npm/pip/..., which has potentially enough dependencies to make full manual version selection cause too much friction. Maybe this could be another argument that hints at the fact semver may not be the best way of representing this model. I don't have a strong opinion either way for now.

@jayschwa

With no central package publisher, how will this be enforced in practice? Will the toolchain refuse to fetch dependencies and compile a project if the rule is violated? Should pre-releases (e.g. vX.Y.Z-alpha) be treated similarly to v0 versions? According to the spec, they should not be considered API stable either.

Uh, that's a good question. On one hand we want to put a hard barrier that really enforces the v1 rule, but on the other hand one needs at least to be able to build a project using a prerelease in order to ensure that an upcoming release doesn't break any test. I haven't thought through how that would be precisely implemented but, as a first approximation, maybe the "zig make-release" command should error out if the v1 rule is broken, and package indexes too should refuse to accept a malformed package. Finally, the user should have some way of overriding deps while building, so that they can "break" the rule only for the purpose of building the project. Go, for example, has a section of the go.mod file that only applies to the topmost package (i.e. the project being built).

I don't have a final design about these details but it seems to me there are no dangerous gotchas, it's just a matter of thinking everything through.
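The check described above can be sketched in a few lines. This is not an existing `zig make-release` behavior, just an illustration of the rule a release command or package index could enforce (names and versions are invented):

```python
# Hypothetical sketch of the v1-rule check: a package released at
# >= 1.0.0 must not depend on any package still at 0.x.

def major(version: str) -> int:
    return int(version.split(".")[0])

def check_v1_rule(pkg_version: str, deps: dict) -> list:
    """Return the names of dependencies that violate the rule."""
    if major(pkg_version) < 1:
        return []  # v0 packages may depend on anything
    return [name for name, ver in deps.items() if major(ver) < 1]

violations = check_v1_rule("1.2.0", {"stable-lib": "2.0.1", "young-lib": "0.9.5"})
assert violations == ["young-lib"]  # release should be refused
```

A top-level override mechanism would then simply skip this check for dependencies the user has explicitly replaced, without changing what the index accepts.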

How will reproducible builds be handled? A "version lock" file similar to Node's npm or Rust's cargo?

If you don't do any version resolution or if you use MVS, you can have reproducible builds without any lock file. If you "bump up" to the highest compatible version available, you need to have a lock file to have reproducible builds, there is no alternative afaik.

This means that we too would have lock files like npm and cargo.
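The reproducibility difference can be made concrete. Under MVS, the selected version of each package is the maximum of the minimum versions requested across the build graph, which is a pure function of the manifests, so no lock file is needed; "highest compatible" instead depends on what the registry contained at install time. A minimal sketch (package names invented, versions simplified to tuples):

```python
# Sketch of why MVS needs no lock file: selection is the
# maximum-of-minimums over the dependency graph, so rebuilding
# the same sources always picks the same versions.

def mvs_select(requirements):
    """requirements: iterable of (package, minimum_version) pairs."""
    chosen = {}
    for pkg, ver in requirements:
        if pkg not in chosen or ver > chosen[pkg]:
            chosen[pkg] = ver
    return chosen

reqs = [
    ("tls",  (1, 2, 0)),  # minimum required by dependency A
    ("tls",  (1, 4, 1)),  # minimum required by dependency B
    ("json", (2, 0, 0)),
]
assert mvs_select(reqs) == {"tls": (1, 4, 1), "json": (2, 0, 0)}
# A newer tls 1.5.0 on the registry is ignored until some manifest
# explicitly asks for it -- unlike "highest compatible", which must
# record what it saw at install time in a lock file.
```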

andrewrk commented 2 years ago

Here's what's accepted. It's slightly different than what is in the original proposal above:

nektro commented 2 years ago

what about the use case of not using semver?

kristoff-it commented 2 years ago

what about the use case of not using semver?

That would make sense for applications, I think, in which case you can just bump the major version for every release. If you have a Zig library and you want to take part in the package management process, you need to communicate your intent in a compatible way. Or were you referring to something else?

nektro commented 2 years ago

being able to not use versions for dependencies at all and always track master, like Zigmod does. I suppose I can always increment the 0.x minor version, but that can become rather a hassle

nektro commented 2 years ago

Go gets around this by allowing versions like v0.0.0-5e0467b6c7ce / v0.0.0-20220112180741-5e0467b6c7ce
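Go's pseudo-versions work because the commit timestamp is embedded as a semver pre-release identifier: `v0.0.0-<UTC timestamp>-<commit hash>` sorts below any real tag, and the fixed-width timestamp orders untagged commits chronologically. A small sketch of the construction (the hash and dates below are illustrative):

```python
# Sketch of Go-style pseudo-versions: the timestamp pre-release
# identifier makes plain lexicographic comparison of two
# pseudo-versions match chronological order.

def pseudo_version(timestamp: str, short_hash: str) -> str:
    # timestamp: UTC commit time as YYYYMMDDHHMMSS
    return f"v0.0.0-{timestamp}-{short_hash}"

older = pseudo_version("20210101000000", "aaaaaaaaaaaa")
newer = pseudo_version("20220112180741", "5e0467b6c7ce")
assert newer == "v0.0.0-20220112180741-5e0467b6c7ce"
assert older < newer  # fixed-width timestamps compare correctly
```

Something similar could let a Zig package "track master" while still producing versions the resolver can order.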

omega-tree commented 2 years ago

I agree with almost everything proposed here, but it seems to me that we are trying to cram the identification of stability into semver. While I agree with the idea of versions less than 1 being unstable, the semver proposal leaves no room for smooth major-version iterations. It does not account for the inevitable creation of a v2, etc., with new features that are unstable and need public exposure and usage to become stable.

MichaelByrneAU commented 1 year ago

There’s already been lots of high quality discussion around this both on here and in other places and I think the ideas are intriguing. However, I would like to build upon what some other people have already mentioned which is that by introducing the hard barrier at 1.0, this is going to create what I feel might be unintended incentives surrounding versioning that might otherwise not exist. I must emphasise, this has little to do with the actual merit of the 1.0 policy - just the fact that it creates this hard boundary.

As an example, the Rust ecosystem sometimes finds itself in a position where people are extremely reluctant to create a 1.0 release just because of the cultural pressure surrounding stability. There isn't anything material that accompanies such a release, just community expectations. Despite this, it still clearly makes a mark on people's decision making. The Zig proposal is even more extreme than this, adding concrete implementation consequences to a 1.0 release. Whilst I cannot look into the future, I feel that this boundary will create a similar situation (maybe this isn't a problem) or the opposite problem (where people blow past 1.0 quickly to avoid the dependency limitations). You can't control what people end up doing - you can only guide them through incentives.

Nirlah commented 8 months ago

The stability enforcement proposal has its merits. Alongside the obvious benefits, I’d like to join the concerned voices and discuss some of its shortcomings.

In order to achieve the greater goal of quality software, Zig is strict to the point where 'it's the Zig way or the highway'. As far as I can tell, so far it has gone rather well… But when it comes to enforcing a strict dependency policy, this may turn out to sabotage the Package Manager's mission.

An approach of “stay at v0” may be a tad short-sighted, as it weighs heavily on the micro-level of a single package rather than on the relationships between packages and the greater community who develop and evolve them. It's not unlikely that this decision will have negative consequences beyond the package manager. The following are a few possible impacts…

Fragmentation

I fear this policy can single-handedly promote increased fragmentation in two main areas:

  1. Source decoupling – the policy divides the ecosystem into two factions: the Vee Zeros and the One Plusers. The strict policy forces One Plusers to fork or embed hard copies of Vee Zeros into their own codebase. This directly goes against the Package Manager's mission. Forks will split the effort of evolving and stabilizing packages, and embedded duplication will probably rot and miss out on bug fixes and improvements from upstream.
  2. When Zig introduced the Official Package Manager it consolidated the efforts of various PM solutions created by the community. By introducing this policy we're risking a rebirth of another wave of alternative PMs, which will likely offer a sub-par, hacky integration with Zig's build system. The status quo in this area is crucial for a thriving ecosystem!

Fragmentation may reduce the gathering of enthusiasts around potentially impactful packages, which in turn will increase maintenance churn on individuals or smaller teams; this can lead to less stable packages and even to excellent packages being abandoned. Let's be mindful of the community's (and profession's) natural evolution without introducing foreign motives, incentives, and stress that will sabotage Zig's adoption and growth.

Skewed Perception

False positives and false negatives are potential pitfalls. Some One Plusers may “stay at v0” while some Vee Zeros may disguise themselves as v1.0+ if asked to by their dependents. An unwanted scenario is that the critical mass of the ecosystem either remains 'stuck' in the Vee Zeros realm, or rushes into the One Plusers kingdom to be where the rest of the pack is… By embracing SemVer, Zig should strongly discourage misrepresentation, as the whole point is to have meaningful semantics in the versioning. The skewed perception created by a mass of unstable packages in disguise may reflect negatively on how the community is perceived – which can deter great contributors from hopping onto the Zig wagon.

Beginner Friction

Imagine a developer who feels they have started to get the hang of it. Suddenly… they stumble on the inability to use a package they need to progress their project. They research and find out it's all because of a (to them) rather insignificant version number. We can argue that the package shouldn't have been marketed as v1.0 in the first place (and I agree). Yet this leaves them in a frustrating, high-friction scenario; it likely arrives early in their learning path and goes against the expectations they carry over from other ecosystems. It may seem absurd to the more experienced, but for a newbie this can shatter confidence and demotivate them to the point of abandoning their Zig journey – missing out on all the goodness it has to offer. Zig's simplicity, accessibility, and intuitiveness are some of its greatest features; as we welcome new learners, minimizing early hurdles prevents the falloff of potentially great engineers joining the Zig ecosystem.

Proposed Solution

To maintain the positive merits of the strict dependency-stability policy, I suggest introducing a stability-level option, either in build.zig or on the CLI via --stable [strict|warn|off].

Stability levels

For v0.x packages, strict should behave like warn.
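To make the proposed levels concrete, here is a small sketch of the semantics being suggested. This is not an existing zig flag or API, just an illustration of the three behaviors (names are invented):

```python
# Hypothetical sketch of the proposed --stable levels:
#   "strict" -> fail the build on an unstable dependency,
#   "warn"   -> report it but continue,
#   "off"    -> ignore the rule entirely.
# For a v0.x root package, strict degrades to warn, per the proposal.

def check_stability(level, root_version, deps):
    if level == "off":
        return []
    unstable = [name for name, ver in deps.items() if ver.startswith("0.")]
    if root_version.startswith("0."):
        level = "warn"  # v0.x roots are never hard-blocked
    if unstable and level == "strict":
        raise RuntimeError(f"stable package depends on unstable: {unstable}")
    return [f"warning: dependency {n} is pre-1.0" for n in unstable]

# A v0.x root only gets warnings, even under strict:
assert check_stability("strict", "0.5.0", {"a": "0.2.0"}) == \
    ["warning: dependency a is pre-1.0"]
```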

Global opt-in

I believe making this opt-in is crucial to mitigate the concerns discussed above. The beauty of Zig is its simplicity, while the advanced bits are opt-in complexity (e.g. comptime). By opting in, developers and organizations can assess and adapt their risk tolerance while benefiting from the ecosystem and reciprocally contributing to it.

* When the --stable CLI option is passed without an explicit value, I suggest defaulting to strict.

Per-dependency opt-out

If the policy is accepted strictly, or if the options approach is adopted, I suggest allowing an opt-out from strict dependency stability: an option to ignore stability enforcement on a per-dependency basis, in build.zig.zon or in build.zig.

perillo commented 7 months ago

About

"But won't this mean that basically nobody is going to bump up major version to 1 ever?"

Maybe, but if nobody wants to put in that kind of effort then wouldn't that be a lie anyway?

See how users may abuse this: https://package.elm-lang.org/packages/lue-bird/elm-bounded-nat/latest/ (the latest package version is 35.0.0).

kj4tmp commented 1 month ago
  • Packages with versions >= 1.0.0 may not depend on packages < 1.0.0.

Does this mean that no package may mark v1 until Zig itself reaches 1.0?