Closed andrewrk closed 1 year ago
Here's some thoughts from someone who's only dabbled in Zig but is very intrigued by it. This is what I'd be looking for in Zig's package manager, as I keep Zig in mind for future projects. I really do hope for Zig's success (and am a GitHub sponsor, in case that gives me any extra weight... ;-)).
For context about my background, at work, I lead a team responsible for about 10 applications, written in Elixir (which uses the mix tool), TypeScript/JS (uses npm), and Python (we switched from pipenv to poetry). Our apps are higher level and range from web apps to applications to manage transit data. One of my responsibilities is to ensure we keep our various apps' dependencies up to date and secure, so I have some opinions here! At my last job, we used Ruby with bundler.
I'd like to put in a vote for not spending any of your "strangeness budget" on the package manager and going with a ruby Bundler-lineage approach. Zig is already quite innovative with its colorblind async, explicit allocator conventions, comptime, and easy cross compilation. I think it would be prudent to just go with the flow here, conventional in my corner of software development, and implement something along the lines described by Russ Cox in this comment about his package manager investigation:
tagged semantic versions, a hand-edited dependency constraint file known as a manifest, a separate machine-generated transitive dependency description known as a lock file, a version solver to compute a lock file satisfying the manifest, and repositories as the unit of versioning
I will say immediately that I acknowledge that `npm` is horrible and `cargo` has issues, but that's not inherent to this kind of package manager; rather those are issues unique to those ecosystems, for reasons I will try to explain later. I'd like to highlight Elixir's mix tool and hex registry as the best of this bunch. At work, both the npm and python ecosystems are absolute disasters, leading to us spending a vastly disproportionate amount of our time keeping dependencies up to date, while the Elixir one is quick and easy and gets out of our way. As an example, here's one web app we maintain. We list about 25 direct Elixir dependencies, a handful earmarked for "dev" and "test" only. This brings a total of 48 dependencies into our project, counting transitive ones. By contrast, the same project's frontend (which is not any more complicated, IMO, than the backend) lists 17 direct JS dependencies, which results in this behemoth, 25k-line lock file and 867 (!!!) transitive dependencies. A fresh install of the dependencies takes up 15MB for the Elixir side and 212MB for the JS side.
Upgrading Elixir dependencies is quick and easy. The lock file is digestible and human reviewable. Merge conflicts are easily resolved. And the culture of the Elixir ecosystem is that there are rarely breaking changes. You can work offline no problem (other than downloading new dependencies the first time, of course)! In short, it's possible for the npm style of managing dependencies to not be the dumpster fire that is the npm ecosystem.
A few things about the Elixir approach I'd like to highlight are:
- `mix deps.get`, the command that downloads your dependencies per the lock file, downloads to a `deps` folder in your project directory. This makes it very easy to either vendor them if you're into that, or `.gitignore` the directory if not. I love that the folder is in the project and not elsewhere in your file system, since it's very easy in your editor to peek at your dependencies' code. The build process does not use code from anywhere else (other than the standard library).
- `mix` won't change the lock file if the versions in it are compatible with the manifest. This gives me confidence that just checking the manifest and lock file into version control will result in the correct code being used by everyone and CI. `mix deps.get` will download the dependencies without changing the lock file, and will know if something has been tampered with.
- `mix compile` doesn't need the network. As long as you've previously done a `mix deps.get`, you can work on your project all you want without internet. It will quickly verify that the lock file agrees with the `deps` folder. If they're in sync, it will compile; if not, it will abort and tell you to fetch your dependencies. (In other words, `mix compile` always works offline, while `mix deps.get` expectedly requires internet. But if you don't touch your manifest, `mix compile` will continue to work.)

Incidentally, while I quoted Russ Cox, Go ended up not going with that approach, replacing the constraint solver with a simple "Minimal Version Selection" algorithm which eliminates the need for a lock file. I don't have any experience with Go, and asked about it here, and it seems some people like it, so this might be a decent variant on the general package manager approach I'm advocating. They also allow multiple major versions of a dependency at a time, which is an interesting allowance.
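The Minimal Version Selection idea mentioned above is simple enough to sketch. This is a hypothetical toy model, not Go's actual implementation; the requirement table and version tuples are made up for illustration:

```python
# Toy sketch of Go-style Minimal Version Selection (MVS).
# The requirement table and version tuples are hypothetical illustration,
# not Go's real data model.

def mvs(root, requirements):
    """Pick, for each module, the highest of the *minimum* versions that any
    participant in the build requires. Deterministic, so no lock file is
    needed: the same requirement lists always yield the same answer."""
    chosen = {}  # module name -> highest minimum version seen so far
    work = list(requirements.get(root, []))
    while work:
        module, version = work.pop()
        if module in chosen and chosen[module] >= version:
            continue  # an equal or newer minimum is already selected
        chosen[module] = version
        # the newly selected version contributes its own requirements
        work.extend(requirements.get((module, version), []))
    return chosen

# Hypothetical build: A needs B>=1.2 and C>=1.0; B 1.2 itself needs C>=1.1.
reqs = {
    "A": [("B", (1, 2)), ("C", (1, 0))],
    ("B", (1, 2)): [("C", (1, 1))],
    ("C", (1, 0)): [],
    ("C", (1, 1)): [],
}
assert mvs("A", reqs) == {"B": (1, 2), "C": (1, 1)}
```

Note the contrast with a full constraint solver: MVS never selects anything newer than what some manifest explicitly asked for, which is why Go could drop the lock file entirely.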
I'm afraid that `npm` has burned all the good will of people here for this approach, but I'd like to point out a few reasons that I think contribute to the state of its ecosystem, which are not inherent to the approach:
I see the main point of a package manager as being able to manage dependencies and their relationships as easily as possible. In particular, I think it's absolutely crucial for libraries, since they don't know what context they'll be used in, to specify that they depend on other libraries without committing to a particular version of it, which then implies the package manager has to solve this web of requirements. In addition, ideally it would be a tool to easily add a new dependency (and its dependencies) into your app in a cohesive way, and to upgrade or downgrade existing libraries.
Decentralization makes all this difficult, or at least hurts performance. If I use A and B, both of which use C, the package manager has to decide what version of C to use. So the package manager needs to know 1) that the C referred to by A and the C referred to by B are the same, and 2) not just where C is, but where a listing of all versions of C is, and then the different transitive dependencies required by C for each of those versions. With a centralized package repository, you can fetch an index quickly. With decentralization, you'll be making tons of network requests to discover more packages to make more network requests, and so on.
As I mentioned, Elixir's mix tool allows you to specify arbitrary git endpoints for dependencies, in addition to those listed in the centralized repo, and we use those to some extent at work. Checking for updates there is the slowest part of the whole process. Or consider this HN comment about the majority of time in Go module loading is network requests and figuring out dependencies.
In addition, centralized package repos, which allow authenticated users to update their libraries with new versions behind a TLS URL basically get you the "I trust so and so" benefit without needing to worry about web-of-trust and passing around public keys. Indeed, from experience there are a handful of Elixir developers who I trust and will always prefer to use their version of a package over someone else. For instance, at work, someone wanted to use hammox, a nicer version of mox. But since I don't know the authors of hammox and the author of mox is Jose Valim, the creator of Elixir, I didn't feel the potential small benefit of the "nicer" library was worth it and vetoed that.
But maybe federation can get us some of the way to where you're envisioning? As I mentioned before, Elixir lets you self-host your package repository. However, as this wasn't released until well after hex.pm was established, it's basically just used by companies doing it internally. Elixir also doesn't work well with mixing self-hosted and other repos.
But what if we got the ecosystem started right away with self-hosted package repositories? And then you would add the decentralized repositories to your project manifest. So maybe you'd add the official `ziglang` repository as well as `amazon`, and your company's own private `myco`. And then when specifying your packages it would be like `ziglang/parser`, `amazon/ec2`, `amazon/rds`, `myco/proprietary_tech`, or something like that? That would keep you from being beholden to any one centralized source, but solve some of the trust issues and get some of the performance gains if the cardinality of the repos is high enough.
Probably you would want those libraries to be able to specify dependencies in other repos, and so maybe that would be part of the review when adding a new library: it would show you the new repos it wants to add. I imagine there would be some natural consolidation, so normally adding a new library wouldn't pull in a new repo, but you never know. But you could also whitelist the allowed repos (e.g. just `myco` and `trustedrepo`) and so freely add those dependencies, and it will fail if any of those try to add code from somewhere else.
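The whitelist idea above can be sketched in a few lines. This is a hypothetical toy check, and the repo names (`ziglang`, `myco`, `trustedrepo`) are just the examples from this discussion, not real registries:

```python
# Toy sketch of the repo-whitelist idea: dependencies are written as
# "repo/package", and resolution fails fast when a (possibly transitive)
# dep points at a repo that hasn't been allowed. Repo names are the
# hypothetical ones from this discussion.

ALLOWED_REPOS = {"ziglang", "myco", "trustedrepo"}

def check_dep(qualified_name):
    repo, _, package = qualified_name.partition("/")
    if not package:
        raise ValueError(f"{qualified_name!r} is not of the form repo/package")
    if repo not in ALLOWED_REPOS:
        raise PermissionError(
            f"dependency {qualified_name!r} wants repo {repo!r}, "
            "which is not whitelisted")
    return repo, package

assert check_dep("myco/proprietary_tech") == ("myco", "proprietary_tech")
try:
    check_dep("somewhere-else/evil")
except PermissionError:
    pass  # exactly what we want for an untrusted repo
```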
Just briefly to touch on a couple topics related to SemVer that have been brought up.
One, a small thing: contra some of the pseudocode given, SemVer permits non-integer version components. In particular, things like `1.0.0-rc.1` are pretty common and useful to support. Just FYI.
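For reference, the pre-release precedence rules from the SemVer spec can be captured in a few lines. This toy parser (the function names and key encoding are my own, not from any proposal here) sorts a pre-release before its corresponding release and compares numeric identifiers numerically:

```python
# Toy SemVer ordering sketch: handles "1.0.0-rc.1" style pre-releases per
# the SemVer precedence rules. Not a full spec-compliant parser (no build
# metadata, no validation) -- just enough to show the ordering.

def parse(v):
    core, _, pre = v.partition("-")
    nums = tuple(int(x) for x in core.split("."))
    if not pre:
        # (1,) tag sorts a plain release *after* any of its pre-releases
        return nums, ((1,),)
    # numeric identifiers get a (0, n) key (compare numerically, and lower
    # precedence than alphanumeric); alphanumeric ones get (1, s)
    ids = tuple((0, int(p)) if p.isdigit() else (1, p) for p in pre.split("."))
    return nums, ((0,), ids)

versions = ["1.0.0", "1.0.0-rc.1", "1.0.0-alpha", "1.0.0-rc.2"]
assert sorted(versions, key=parse) == [
    "1.0.0-alpha", "1.0.0-rc.1", "1.0.0-rc.2", "1.0.0",
]
```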
The other, is the idea that somehow the compiler or package manager can check that upgrades are safe, if they keep to the same API or something. I really disagree with this. Even if the types check out, that's still no guarantee that semantically there's no breaking change in there. A developer must review all the changes, always. To my mind, the package manager should be a useful tool here, allowing easy speculative upgrades of this dependency and that one, so you can run your test suite, deploy to a staging server for visual inspections, etc, but that's about it. For instance here is a repo at work where dependabot opens PRs with speculative upgrades of our dependencies. It even runs our test suite against them, which is stronger than static analysis alone. But I still like to review the changelog, peek at the commits, etc, before approving and merging.
Elixir as the gold standard of this approach
I would add that another item that makes the Elixir package system spectacular is the built in documentation generator, standard, and automatic hosting. hex.pm for your packages and hexdocs.pm for docs for every package in Elixir. IMHO, that documentation also has a much easier to read format and tends to be higher quality than in similar tools (e.g. Rust).
Don't forget about supply chain attacks :)
Maybe a hybrid approach would be nice. I don't like the idea of it being centralized; maybe it could use Git?
zig install https://github.com/xyz/xyz
Then I could make a `lib` folder and handle linking and all of that complex stuff
Note: I don't know much about Zig yet and was just wondering if it had a package manager
Btw, nix-flake will use https instead of git when using the `github:username/repo` URI scheme, which will download and extract the tarball, since git cloning (even a shallow clone) is slower than a direct download. However, I'm not sure if it also handles git submodules 🤔 (especially when it's recursive)
Quoted from the docs
These are downloaded as tarball archives, rather than through Git. This is often much faster and uses less disk space since it doesn't require fetching the entire history of the repository. On the other hand, it doesn't allow incremental fetching (but full downloads are often faster than incremental fetches!).
Btw, nix-flake will use https instead of git when using github:username/repo URI schema
I could be wrong but I remember reading recently that accessing github repos via raw https links is deprecated and will be removed in a few years. I know they already removed the ability to log in when cloning over https. They want you to do everything through git://
I could be wrong but I remember reading recently that accessing github repos via raw https links is deprecated and will be removed in a few years.
I think what you mean is `https://raw.githubusercontent/`, not the archive. If they block the archive URL (e.g. `https://github.com/ziglang/zig/archive/master.tar.gz`) then it will surely break a lot of source-based package managers/registries like AUR, Gentoo/Portage, opam, and others.
I am no expert in this, but since Zig is open in nature, centralization might actually be beneficial for a package repository, it could keep some sort of graph database for dependencies, and clients might search for packages either by fully-qualified package name + semver, or by query with fields like repository, vendor, package name, semantic version, checksum, etc.
The resources themselves might be in whatever repository the package metadata URL points to.
Just an idea.
Given there are a few prototypical package managers for Zig currently, are any of these looking like they might be chosen, or is the floor still open for new kinds of implementations? I'm looking to try and contribute a package manager solution.
Just my two cents -- zig should definitely have a "default" package manager to avoid a situation like the python ecosystem.
Just my two cents -- zig should definitely have a "default" package manager to avoid a situation like the python ecosystem.
Seconded. It could work similar to how Emacs treats its package management, where one is the default, but users are free to install other repositories as they see fit.
Having a default is also a great help for newcomers as they don't necessarily need to worry about finding libraries like they would for C++, for example.
@sam0x17 Could you elaborate on what that situation is, and how a "default" package manager would solve that?
Be careful with proliferation of lockfiles. Javascript and Python have some serious issues with this. For example in Nodejs ecosystem, they currently have 3 different lockfiles:
However, only package-lock.json can be converted into yarn.lock or pnpm-lock.yaml; none of them can be converted in the other direction. The annoying part is that both yarn and pnpm only produce their own lock format, never a package-lock.json. This gives me a headache when integrating it with other build systems: https://github.com/NixOS/nixpkgs/blob/master/doc/languages-frameworks/javascript.section.md
I like the idea of source-based lockfiles (e.g. `dependencies.zig`) which serve as a common abstraction for other package managers. Meaning a package manager must produce a `<lockfile>.zig` so that it can be consumed by build.zig. But I think common fetch functions (e.g. `fetchTarball`, `fetchGit`) should be implemented first, and build.zig should be able to generate a JSON representation of the build pipeline/steps along with the dependency graph, so that it's easier to integrate with other build systems like Nix, Bazel/Please, and xmake.
@sam0x17 Could you elaborate on what that situation is, and how a "default" package manager would solve that?
pip is just really terrible in a lot of ways, and python often ships without it by default (or with a super out-of-date version that is incompatible with stable python), leaving users high and dry.
Additionally, updating dependencies is an extremely painful and manual process. As a user of globally installed packages, there is simply no way to update all pip packages on your system without doing some bash scripting. Globally installed packages are plagued by `sudo` issues, where the default installation instructions / prompts often result in broken permissions and require wiping a large number of directories. There is also no good high-level way to tell if a package will work on X platform, because that was left up to the packages to figure out themselves. Even worse, a lot of linux package managers depend on python to work, and I've seen situations where upgrading packages to the point where pip is working properly will actually break your system package manager.
P.S. the lockfile hell in the JS ecosystem presently is mostly a function of the sheer number of people using JS. There are just many competing package managers in that ecosystem. JS would be in much better shape if it had a much smaller community (or a community with much fewer junior devs).
IMHO cargo gets pretty much everything right and should be emulated to a certain extent here. It's like the good parts of bundle (ruby) and yarn (js) combined into one, and in a way that is extremely sensitive to the needs of targeting many different CPU targets. I come from the Ruby ecosystem originally, so my expectation is that I can cd into a zig/rust/crystal/go/whatever project and type a single command like `cargo build` or `bundle install` to get it working.
Having a default is also a great help for newcomers as they don't necessarily need to worry about finding libraries like they would for C++, for example.
Having a default also makes it easier to develop an ecosystem (i.e. standard tooling, web frameworks, etc) in zig, which right now is tricky just from the package management standpoint.
pip is just really terrible in a lot of ways, and python often ships without it by default (or with a super out-of-date version that is incompatible with stable python), leaving users high and dry
Originally pip was designed to be a system package manager. So it's not meant to be used in a project, userspace, or dev environment.
P.S. the lockfile hell in the JS ecosystem presently is mostly a function of the sheer number of people using JS. There are just many competing package managers in that ecosystem. JS would be in much better shape if it had a much smaller community (or a community with much fewer junior devs).
The main culprit is that Nodejs neither aligns with nor is willing to deprecate things that conflict with ECMAScript standards (e.g. import/export modules, .importmap, ...), which I'd guess promotes proliferation. Also, in the early days npm didn't have a caching system, which is what made yarn emerge. People are willing to replace their current package manager if another one can handle caching better.
IMHO cargo gets pretty much everything right and should be emulated to a certain extent here. It's like the good parts of bundle (ruby) and yarn (js) combined into one and in a way that is extremely sensitive to the needs of targeting many different CPU targets. I come from the Ruby ecosystem originally so my expectation is that I can cd into a zig/rust/crystal/go/whatever project and type a single command like cargo build or bundle install to get it working.
I want to shout out that the best thing about Cargo is you can define both features and patch per dependencies. I suggest zig adopt this while using Nim semantics for the conditional compilation.
I want to shout out that the best thing about Cargo is you can define both features and patch per dependencies. I suggest zig adopt this while using Nim semantics for the conditional compilation.
I would second that those semantics look really useful
Originally pip is designed to be a system package manager. So it's not mean to be used in a project, userspace, or dev environment.
If that was pip's origin, it certainly isn't used that way anymore. The only sane way to use pip at the system level is to use `--prefix=/opt/stow/pip` and `stow --dir=/opt/stow --target=/usr/local pip`; that way you can just nuke it and re-install if something goes wrong. Pip should only be used in confined environments like virtualenv or docker builds.
There is also no good high-level way to tell if a package will work on X platform because that was left up to the packages to figure out themselves.
Python has defined platform compatibility tags. And, it is always up to the packages themselves to figure out what platforms they support, except for languages on hosted runtimes (e.g. Java, .NET). Zig packages will have to define this themselves too.
As a user of globally installed packages ...
That's probably why you hate Pip. Python is my main language, and I've never had Pip give me a headache, but I also don't do that. In any case, any points about Pip's shortcomings for system package installation are moot, because this issue is about a programming language package manager, not a system package manager. Language package managers should never aim to perform system functions.
There are a lot of things Pip could do better, that's for sure. For one, the need to build packages in order to determine the dependency tree is a big sore spot (but that's really setuptools fault). But ultimately, for python development it works well enough; that's why it is still the de-facto standard, even though there are alternatives. IMO Pip's best feature is that it is replaceable. If you don't like it, you can use Conda, Nix, make your own fancy package manager, or just raw setuptools (which has its own problems). Of all the shortcomings with Python packaging, I don't think any of them stem from the fact that there is no "default" package manager.
IMHO cargo gets pretty much everything right ...
Cargo does a lot of things right. But two things it does not get right are offline builds (it has been a little while since I've checked, so maybe it got better), and the fact that you are out of luck if Cargo doesn't work for you, because there are no alternative options (not even "no package manager").
Some of Rust without Cargo in the wild:
Quoted from "Integrating Rust into AOSP"
Cargo was not designed for integration into existing build systems and does not expose its compilation units. Each Cargo invocation builds the entire crate dependency graph for a given Cargo.toml, rebuilding crates multiple times across projects.
Btw, Cargo now has an `--offline` mode.
How about something like this for dependency tracking? (it's just a raw idea)
pub const Package = struct {
    id: []const u8 = "00000000-0000-0000-0000-000000000000",
    provider: Provider,
    name: []const u8,
    description: []const u8,
    version: SemVer,
    build: u32,
    unstable: bool = false,
    targets: []const Target,
    depends: []const Package,
    compiler: std.builtin.Version.Range,
    repositories: []const Repository,
    licenses: []const License,
    checksum: []const u8,

    pub const Provider = struct {
        name: []const u8,
        contacts: []const Contact,

        pub const Contact = struct {
            pub const Kind = enum {
                website,
                email,
                chat,
                phone,
                other,
            };

            kind: Kind,
            address: []const u8,
        };
    };

    pub const SemVer = struct {
        major: u32,
        minor: u32,
        patch: u32 = 0,
    };

    pub const Repository = struct {
        name: []const u8,
        address: []const u8,
        method: Method,

        pub const Method = enum {
            http,
            rsync,
            ftp,
            torrent,
            other,
        };
    };

    pub const License = struct {
        spdx: []const u8,
        name: []const u8,
        revision: []const u8,
    };
};
I think enforcing that a package version follow semver is a bad idea. Some companies might have their own versioning conventions. Maybe the version type should be either a SemVer or a string.
@HugoFlorentino Btw for `SemVer`, you forgot `<pre-release>` and `<build>` (according to the `<valid semver>` grammar)
Actually, I included the build field, but as a direct field outside of the semver struct, because some devs prefer keeping a continuous incremental build number regardless of semver. I also included an unstable boolean field as a better uniform way to convey that a package is not yet ready for production.
In any case it was merely a rough idea, if Semver is not used, something else must be used to define if a version bump is a breaking change, or a backwards-compatible change with new functionality, or an optimization or bug fix of strictly existing functionality, etc.
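Whatever versioning scheme replaces SemVer still has to make that breaking/feature/fix distinction. With SemVer triples, the classification itself is mechanical (whether the bump is semantically honest is the human part); a toy sketch:

```python
# Toy sketch: classifying a version bump from SemVer (major, minor, patch)
# triples. The version tuples are hypothetical examples.

def classify_bump(old, new):
    if new[0] != old[0]:
        return "breaking"  # major changed
    if new[1] != old[1]:
        return "feature"   # minor changed: backwards-compatible additions
    if new[2] != old[2]:
        return "fix"       # patch changed: bug fixes / optimizations only
    return "none"

assert classify_bump((1, 4, 2), (2, 0, 0)) == "breaking"
assert classify_bump((1, 4, 2), (1, 5, 0)) == "feature"
assert classify_bump((1, 4, 2), (1, 4, 3)) == "fix"
```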
Additionally, it might be useful to have zig use a separate directory for packages whose development is external to the control of the package provider, like golang's vendor directory. Then if one links to a dependency on github and its author introduces breaking changes in a minor release, at least one's package will contain everything needed for others to compile. This does increase storage use, but hopefully as developers get used to the package management, they will use semantic versioning properly.
Then if one links to a dependency in github and its author introduces breaking changes in a minor release, at least one's package will contain everything needed for others to compile
That said, one of the biggest mistakes NPM made was not adding `node_modules` to `.gitignore` by default when creating a new package / project / whatever, so we want to avoid repeating that fiasco.
Andrew has weighed in on things like generating a git repo or .gitignore previously https://github.com/ziglang/zig/issues/6912#issuecomment-720150156
Hello, people!
I have not read the entire thread yet, but I think I should ask anyway:
Can the package manager be made, ehr, NixOS-friendly?
I say this in the following sense: Nix package manager strongly enforces reproducibility, and for this to happen, it sandboxes the builds and blocks any external access (including internet). Routinely we employ custom patches in many projects in order to ensure this.
Sometimes it is easy: just download the dependencies and copy them to the corresponding places (Meson makes it very easy indeed). Sometimes it is hard: rewrite cmake rules, reimplement shell logic, etc.
Yes, I know it can be a bit entitled, but at least an "offline mode with an alternative source pool" could be implemented. I think it isn't so hard after all.
@AndersonTorres based on the discussion so far it looks like whatever zig does it will probably be the most nix-friendly thing humanly possible ;)
Consider updating the Style Guide to have guidance about package naming as well.
On github I've been seeing various formats for repository names, like `zig-foo`, `foo-zig`, `foo.zig`, etc.
Something as simple as package naming can really affect the developer experience.
Some references:
Consider updating the Style Guide to have guidance about package naming as well. On github I've been seeing various formats for repository names, like `zig-foo`, `foo-zig`, `foo.zig`, etc. Something as simple as package naming can really affect the developer experience. Some references:
I am also very interested in this topic.
today the zig package was approved on conda-forge: https://github.com/conda-forge/zig-feedstock/
so, since today we can have zig libraries on conda-forge and it would be important to know the standard for the package name ..
for example, for R packages on conda-forge, the convention is to use `r-` + package name; maybe for zig, we could use the standard `zig` + package name (at least for conda packages).
On github I've been seeing various formats for repository names, like `zig-foo`, `foo-zig`, `foo.zig`, etc. Something as simple as package naming can really affect the developer experience.
You mean the https://github.com/<repository name>?
`zig-` is the prefix I use for all my packages https://github.com/search?q=user%3Anektro+language%3AZig
Can the package manager be made, ehr, NixOS-friendly?
I say this in the following sense: Nix package manager strongly enforces reproducibility, and for this to happen, it sandboxes the builds and blocks any external access (including internet). Routinely we employ custom patches in many projects in order to ensure this.
This is pretty standard for traditional Linux distributions as well, I know Fedora does this.
Recently I've been thinking about how the package manager should choose dependency revisions and I've come up with what I think is the single-most important criteria:
"The package manager should only resolve dependencies to a revision that has passed the tests on the current platform."
I argue that a package manager fulfilling this criterion will be one that optimally selects packages that are going to work. Versioning schemes like SemVer at best serve as "hints" about what "might work". Experienced developers who've learned about leaky interfaces/Hyrum's Law (https://www.hyrumslaw.com/) will attest to that. I don't think there is a perfect solution to this problem. I believe the criterion I've given above is "optimal", meaning it's the best we could ever do, and the degree to which it works will depend on how well the package manager identifies the "platform" on which the packages have been tested. This is further shown by considering the extreme case of the criterion, where "the current platform" literally refers to the machine in question, which by definition will pass the tests. The practical solution will be somewhere in the middle, such as saying that two machines with the same OS/CPU-ARCH are considered "the same".
If we consider a package manager design that fulfills the criteria I've given above, other design decisions naturally resolve themselves. For example we no longer need to consider a scheme that tries to assign meaningful version numbers to each revision and algorithms to select those versions. The algorithm becomes very simple, "select the latest revision that has passed the tests". When a project is changed, that new version does not propagate to any other projects until it passes those projects' tests. This means that if a project makes a breaking change, all its dependents will remain on the current version until either it or the dependency is changed again and the tests are fixed. This also solves the problem where multiple dependencies depend on the same package indirectly. Say we have the following 4 packages "Browser" which depends on "Graphics" and "Sound", which in turn depend on the "Log" package.
           |--> Graphics -->|
Browser -->|                |--> Log
           |--> Sound ----->|
Remember our package manager guarantees that it only resolves dependencies to versions that work (at least by default). Given this, we can assume that starting out, our "Browser" project will "pass the tests". Now assume that the "Log" package makes a change. The other projects will remain on the same version until the new version is tested. If the new revision passes all tests with all 3 projects then we're done, but what if it passes some but fails others? Let's say the new "Log" package works with the "Graphics" package, but not the "Sound" package. In this case, the "Graphics" package will move to the new version when it's built by itself, but when it's built as a dependency of the "Browser" project, it will continue to use the current version that also works with "Sound". It is only once the Browser project passes the tests that it will receive the new version of "Log". If the Graphics package then makes a change of its own, the Browser project will only take that new change if it passes the test, otherwise it will remain on the current version. Then let's say the "Sound" project changes and now works with the new version of "Log". This would trigger a new test run for the "Sound" project, which would then receive the Log update, then this would in turn trigger a new run for the "Browser" project which would test the latest version of "Log", but if it fails it continues to remain on the current version.
Also keep in mind that with this scheme we could consider "zig" itself to be a package. The package manager would know which version of Zig the project works on and would only update that version once it's been tested. This in turn solves the issues of knowing what version of Zig you'll need to build any project.
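The selection rule described above can be sketched as follows. The test-result table is hypothetical; a real implementation would presumably read CI records keyed by (dependent, dependency, revision, platform):

```python
# Toy sketch of "select the latest revision that has passed the tests on
# the current platform". Each dependent stays on the newest revision of a
# dependency that passed *that dependent's* own test suite. The results
# table is hypothetical.

test_results = {
    ("Graphics", "Log", 1, "x86_64-linux"): True,
    ("Graphics", "Log", 2, "x86_64-linux"): True,
    ("Sound", "Log", 1, "x86_64-linux"): True,
    ("Sound", "Log", 2, "x86_64-linux"): False,  # Log revision 2 broke Sound
}

def pick_for(dependent, dep, revisions, platform):
    """Newest `dep` revision with a green run of `dependent`'s tests on
    this platform, or None if none ever passed."""
    for rev in sorted(revisions, reverse=True):
        if test_results.get((dependent, dep, rev, platform)):
            return rev
    return None

assert pick_for("Graphics", "Log", [1, 2], "x86_64-linux") == 2
assert pick_for("Sound", "Log", [1, 2], "x86_64-linux") == 1
# Built together (as in the Browser example), they would share the newest
# revision that passes for *both* dependents: revision 1.
```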
For example we no longer need to consider a scheme that tries to assign meaningful version numbers to each revision and algorithms to select those versions.
Well, version solving is NP-complete afaicr. Usually it boils down to the ostrich approach, "ignore it" (by not auto-updating)...
"The package manager should only resolve dependencies to a revision that has passed the tests on the current platform." @marler8997
I have a problem with that specific proposal, because
Given that, I still find the idea interesting, but more as an optional addon/plugin or such, one that helps with upgrading dependencies by running the test suites and checking them for compatibility. This would require lock files, which imo should be supported even if just optional (otherwise defaulting to the minimum allowed version, to avoid breaking the build), and which would allow this use case. The lock file would override the selected versions. I think the package manager should print a warning if the lockfile downgrades a dependency below the minimum allowed version that would otherwise be selected, in order to give downstream maintainers / distro packagers the ability to forcefully invoke a version of dependencies that is known to work.
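That override-plus-warning behavior can be sketched like this (the dep names, version tuples, and `resolve` helper are all hypothetical illustration, not a real tool's API):

```python
# Toy sketch of a lock-file override: the pinned version always wins, but
# the tool warns when the pin is below the manifest's minimum. Names and
# version tuples are hypothetical.

import warnings

def resolve(manifest_min, lockfile):
    """manifest_min: dep -> minimum allowed version.
    lockfile: dep -> pinned version (entry may be absent)."""
    resolved = {}
    for dep, minimum in manifest_min.items():
        pinned = lockfile.get(dep, minimum)  # default to the minimum, not latest
        if pinned < minimum:
            warnings.warn(
                f"{dep}: lock file pins {pinned}, below manifest minimum {minimum}")
        resolved[dep] = pinned  # the lock file still wins, per the suggestion
    return resolved

# Manifest wants parser >= 1.2, but the lock file pins 1.1: warn, keep 1.1.
assert resolve({"parser": (1, 2)}, {"parser": (1, 1)}) == {"parser": (1, 1)}
```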
Small detail here 😊, but what about changing the wording to `release.version.patch` instead of `major.minor.patch`? I think it would be easy to grasp for beginners, and I kinda like it better.
> Small detail here 😊, but what about changing the wording to `release.version.patch` instead of `major.minor.patch`? I think it would be easy to grasp for beginners, and I kinda like it better.
That makes it inconsistent with the terminology on https://semver.org/
I listened to the recent conference in Milan and wanted to share some thoughts about the package manager milestone.
In short: what about nix flakes? In more details:
I would personally love to `nix develop` without any additional step required 😏

@uael although integration with nix might be a good idea (I use NixOS as my primary linux distro, preferred over Gentoo, Debian and Fedora), keep in mind that afaik the primary reasons for more / language-specific package managers are:
(1) gets solved by nix, ok. (2) doesn't get solved by nix, because nix has no Windows support, and the file system interface and container interface are different enough from linux/unix that it would be a pretty big amount of work (it was already tried, but afaik there was no obvious way forward). see also
For (2) it might be a good idea to consider integration with `vcpkg` and `winget`, but as I usually don't develop on windows, especially due to the aforementioned annoyances, I don't have a clear idea how that could work.
Adding nix as a dependency for zig would be bad for multiple reasons, in my opinion:
I think it would be a great idea to plan for nix integration or support with any new package manager. Unfortunately, as just a lowly user I don't think it would be prudent to use nix as the zig package manager.
> I think it would be a great idea to plan for nix integration or support with any new package manager. Unfortunately, as just a lowly user I don't think it would be prudent to use nix as the zig package manager.
That makes sense to me. Maybe the same would apply to conda. Currently we can use conda for packaging, as zig is already available on conda-forge; it could also be possible to have a specific conda channel just for zig.
But I guess that something that covers all of zig's needs would be the best approach, though maybe it would take more time to be ready for use.
Is there any action in progress about this? Deadlines? A plan? I would love to be involved in this in order to learn more about the topic and contribute.
> In short: what about nix flakes? In more details:
Oh no no no. Bad idea, trust me. (Disclaimer: yes I am a Nix user.)
The most obvious is sovereignty: Zig would become dependent on a third party to provide the service of package management. If Nix changes anything in its policy, from dropping a system approach to dropping a whole platform (e.g. Nix is not made to run on Windows), Zig would automatically suffer the same.
Certainly, it would be fun to employ the approaches and algorithms from Nix project in order to provide the same or a similar experience on Zig, however making Nix a part of Zig is not a good idea.
The second one is consistency: Zig is planning to release a (mostly) LLVM-free experience in the future. It would be insane to become dependent on another project to do packaging stuff.
The third is that Nix, in and by itself, proposes to solve a larger problem than those smaller language-restricted package managers.
using an in-house one sounds more like a trend.
Zig needs to be bootstrappable. Using an external package manager makes it harder.
zig is a modern language that fixes old systems languages; nix is a modern package manager that fixes packaging. Each of them does its job very well, so why reinvent the wheel?
Because sometimes we need better wheels made of different materials.
> I would personally love to nix develop without any additional step required

Nah. This is just a `flake.nix` distance away.
I would prefer to rewrite Nix in Zig instead of the reverse.
What if, besides doing the package-manager things, the Zig package manager could help us vendor our dependencies to protect ourselves from things disappearing off the internet?
I chatted with Andrew in Milan about this (to be clear, the idea-at-the-time was dismissed after about 5 seconds). I thought about it more and wrote down: https://jakstys.lt/2022/smart-bundling/
I don't think you want to tie a package manager to any specific version control software (downstream) - but, being able to have the package manager just work with you checking your dependencies into whatever vcs system you are using is really important (imo).
Having the confidence that you can grab a zipped-up copy of a clean working directory of your project, knowing it will build with the correct corresponding version of the zig compiler, is crucial.
It's how I currently work with zig libs that I use (copy & paste them into a libs folder in my repo); if a package manager is just a simple tool that makes it easier to control the upgrade process for that kind of thing, that would be the dream. Although I do understand there are use cases where you would not want to do that.
What if, besides doing the package-manager things, the Zig package manager could help us vendor our dependencies to protect ourselves from things disappearing off the internet?
Well, sometimes I pick some projects in the wild and put them on Museoa. However, it looks like a job for Software Heritage.
> It's how I currently work with zig libs that I use (copy & paste them into a libs folder in my repo); if a package manager is just a simple tool that makes it easier to control the upgrade process for that kind of thing, that would be the dream. Although I do understand there are use cases where you would not want to do that.
Yes, sane opinionated defaults with full ability to customize is the best philosophy for this stuff imo.
I'll add my thoughts on:

> Enforcing Semver is an interesting concept, and projects like Elm have done it.
I used Elm extensively and I think this is the most appreciable feature concerning maintainability. But for zig it is a bit harder to define what would be considered a stable API. For example, consider this:
Library code:
```zig
const RTCConfig = struct { wakeup_interval: u16 };
pub fn init_rtc(config: RTCConfig) void {
    _ = config;
}
```
User code:
```zig
init_rtc(.{ .wakeup_interval = 3 });
```
Now a field is added to `RTCConfig`, like so:

```zig
const RTCConfig = struct { wakeup_interval: u16, alarm_interval: u16 = 0 };
```
Current zig code would need no change to work with the new library, and the behavior would not change either, so for me this could be a nudge from `1.2.3` to `1.2.4`, or from `1.2.3` to `1.4.0` if the option does impact behavior in a significant way. But Elm's policy would force a bump from `1.2.3` to `2.0.0`, which I would find way too drastic in this case.
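The bump policy being argued for above can be summarized as a small decision function. This is a toy sketch with made-up change categories, not a proposal for how the real tool would detect them: an additive change with a default value is source-compatible in Zig and only warrants a minor bump, while Elm's stricter policy would treat any API-surface change as major.

```python
def bump(version, change):
    """Pick the next semver triple given a coarse change category."""
    major, minor, patch = version
    if change == "breaking":        # callers must change their code
        return (major + 1, 0, 0)
    if change == "additive":        # new field/option with a default value
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)  # bugfix, no API change at all

assert bump((1, 2, 3), "additive") == (1, 3, 0)   # the RTCConfig case
assert bump((1, 2, 3), "breaking") == (2, 0, 0)   # what Elm would force
assert bump((1, 2, 3), "bugfix") == (1, 2, 4)
```

The hard part, as the comment notes, is deciding which category a given change falls into for a source-first language like Zig.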
I don't have the solution (doc attribute to mark as API...), and I am not saying that zig should implement it. But I want to emphasize how well it works for Elm and how incredible it is to never have a build break because of an upgrade; it is the absolute opposite of the JS experience.
It could also be an external tool.
> But Elm policy would force a bump from 1.2.3 to 2.0.0. Which I would find way too drastic in this case.
Why is it too drastic? It's exactly the point of semver: you can't upgrade to the next version without making changes to your calling code.
@cweagans But the point being made is that no calling code would need to change; in the example given it's an API change that is 100% backwards compatible.
This is a consequence of zig being a source-first language: the old calling code will be recompiled with the new library, and since all previous usages of the API are still valid, nothing breaks.
> @cweagans But the point being made is that no calling code would need to change; in the example given it's an API change that is 100% backwards compatible.
Yes, and imagine the case where you fixed a critical security vulnerability in your lib and you clearly want a patch release, but because you added a padding field in a struct for the fix (or anything of the sort), the package manager forces a big version bump.
The difference with elm is that elm normally has no security implications because it runs in a sandbox; this makes the above scenario rare or even impossible (you'd have to fix the browsers).
Again, I don't know what zig should do, but I think, even with my comments on how different it is from elm, we should consider some sort of mechanism to check source compatibility.
The scenario that should be avoided is upgrading dependencies and having to rewrite large chunks of code and re-read documentation while doing so (hindering code re-use).
But, on the other hand, apps should be easily "source upgradable". For example, imagine a library does a major rewrite to move from xorg to wayland; that would clearly be a `x.y.z` to `x+1.0.0` bump, but, because abstraction was properly done, it is source compatible. The package manager should say "I kept XXX back, but it is source compatible and would still compile; are you happy to go to the new version? The changelog is: <prints CHANGELOG>".
Dependency tracking could use a range, too.
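The "kept back, but source compatible" interaction described above could be sketched roughly as follows. `compiles_with` and `changelog_for` are hypothetical hooks standing in for "try recompiling the calling code against the candidate version" and "fetch its changelog"; neither is a real package-manager API.

```python
def offer_upgrade(dep, current, candidate, compiles_with, changelog_for):
    """If a held-back major version still compiles against the current
    calling code, return a prompt offering the upgrade; otherwise None."""
    is_major_bump = candidate[0] > current[0]
    if is_major_bump and compiles_with(dep, candidate):
        return (f"I kept {dep} back at {current}, but {candidate} is "
                f"source compatible and would still compile. "
                f"Changelog:\n{changelog_for(dep, candidate)}")
    return None

# The xorg-to-wayland example: a major bump that happens to be
# source compatible, so the tool offers it instead of applying it.
msg = offer_upgrade(
    "graphics", (1, 4, 2), (2, 0, 0),
    compiles_with=lambda dep, ver: True,
    changelog_for=lambda dep, ver: "moved from xorg to wayland",
)
assert msg is not None
```

The design choice here is that a source-compatible major bump is surfaced as an interactive offer rather than taken silently, since semver still promises a behavioral break.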
Latest Proposal
Zig needs to make it so that people can effortlessly and confidently depend on each other's code.
~Depends on #89~