takluyver opened this issue 2 years ago
- But I happen to prefer doing editable installs with symlinks
I agree :-)
I like 2, in particular because using many tools may force you to write a wrapper shell script. It's easy to write a script that builds and publishes, but doesn't stop after the build step if something goes wrong.
I'm also OK with "infinite deprecation", where you can still do something, but it's not the recommended way.
Even if `build` and `twine` are "standard", they still need to be installed on users' machines, and especially when getting people started on Python packaging that can be a pain, or at least an extra complexity of having to remember multiple tools.
So all in all, at least for now I don't think removing or deprecating any of those commands is the right move.
I’m totally in love 💜 with the current commands, but…
I have to admit that having different behaviour between `flit build` and `python -m build` can be surprising. Distro packagers find the current situation very disappointing, and anything that helps them follow a standard path to generate and install packages is the way to go, in my opinion.
The situation is really complicated, because resolving it would require either (1) making `build_sdist` require git/mercurial, or (2) dropping the 'gold standard' feature. And we don't want that, do we?
So… I like 2, as long as the differences between `flit build` and `python -m build`, or `flit install` and `pip`, are well documented. We can assume that developers who use Flit know Flit; they can use Flit the way they want to generate the wheels and the source packages, and to upload them to PyPI.
So all in all, at least for now I don't think removing or deprecating any of those commands is the right move.
👍
On the other side, but that's another story, packagers should have a "standard" way to create packages for Linux distributions. As long as Python source packages generated by Flit work with "standard" tools, we can give Linux packagers a workflow that always works the same way (something like: download the source package from PyPI, build a wheel, install the wheel). They could thus use the same shell commands, and let those tools (namely `pip`) take care of the build/installation dependencies and details.
What about a cross between 1 & 2? Mention the "standard" way to do things (which is what I'd always use anyway - with `pipx run` it's really easy), and mention the "classic" flit command too. My ideal would be for packaging.python.org to go over the standard ways to do things with flit-core and other PyPA tools, and Flit would go over the flit way to do things, but https://github.com/pypa/packaging.python.org/pull/1031 didn't go anywhere.
I think the CLI was the main reason `poetry` became popular. You can create & publish a new Python package in just 3 commands:

```
poetry new my_project && cd my_project/
poetry build
poetry publish
```
I personally really like option 3: having a unified, generic CLI which would work with any PEP 517 backend (`flit`, `poetry`, `setuptools`) would be great and much less confusing than the current `build` + `twine` combo.
But I don't think this CLI should be called `flit` nor live in the `flit` repo (as it would be confusing for end users).
If it exists, this CLI should only be a very slim wrapper around `pip`/`build`/`twine`, without any hardcoded backend knowledge.
tl;dr: My very personal opinion is: deprecate the `flit` CLI and replace it with a new minimal generic CLI which works with any backend (in a new repo):

```
pyproject new my_project --backend=flit && cd my_project/
pyproject build
pyproject publish
```

The main issue would be initialising the `toml` file with `pyproject new` / `pyproject init`. But there could be some plugin system where backends can register a hook to set up the original `.toml` file.
Vs. today:

```
pipx run cookiecutter gh:scikit-hep/cookie && cd my_project
pipx run build
pipx run twine upload dist/*
```
I think if we embraced pipx a bit more, a lot of "everything must be merged into one tool" would just go away.
I think if we embraced pipx a bit more, a lot of "everything must be merged into one tool" would just go away.
We also cannot forget that in most project templates out there (cookiecutter, PyScaffold, etc.), most of these tasks can be performed with `tox -e build` or `nox -s publish`, etc.
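For instance, a minimal noxfile along those lines might look like this; the session names and the twine upload step are illustrative, not taken from any particular template:

```python
# noxfile.py - illustrative sketch only; session names are hypothetical.
import nox

@nox.session
def build(session):
    """Build the sdist and wheel with the standard tooling."""
    session.install("build")
    session.run("python", "-m", "build")

@nox.session
def publish(session):
    """Build, then upload everything in dist/ with twine."""
    session.install("build", "twine")
    session.run("python", "-m", "build")
    session.run("twine", "upload", "dist/*")
```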
I think if we embraced pipx a bit more, a lot of "everything must be merged into one tool" would just go away.
I personally think that `flit publish` / `poetry publish` is a better user experience than `pipx run twine upload dist/*`, so I would still want a clean, intuitive CLI.
I use and like the flit commands! I'm fine with editing the config file myself, and I really like having simple commands to make and upload artifacts, with docs in one place. The decoupling of build backends from installer frontends is a big achievement, with a lot of effort over the last 10 years, but when developing a project it's nice to have one packaging tool for packaging tasks, without requiring the pip machinery or `build` (the tool with the unfortunate name) to build the package. So definitely option 2 🙂
I’m quite happy using python to run code, pytest to execute tests, tox to run pytest with isolation, flit to package, pip to install dev tools. I do not really enjoy all-in-one jack-of-all-trades tools in other ecosystems (or python tools inspired by them).
Thanks everyone for the input. That looks to me like a rough consensus for option 2, where we keep the `flit` CLI without any drastic changes in its scope. I guess that's also the easiest option, and the one that people already interested in Flit and reading its issue tracker are most likely to favour, but it's still good to hear from a few different people. :slightly_smiling_face:
If people want to continue discussing a unified frontend tool aimed at package authors (option 3 but not called Flit), I think the packaging board on Discourse might be a good place for that. Or go directly to making it and getting people to use it!
So, that being the case, how do we clean up the discrepancies I mentioned in the original issue?
- Is it worth changing `flit install --symlink` to something like `flit symlink`, and pointing people to use pip for other kinds of installation? (IIRC, Flit actually had `flit symlink` way back, and I changed it to `flit install --symlink` at some point.) Or do we like `flit install` even if it's just the same as `pip install .`?
- If the choice of whether to use VCS information goes in the `pyproject.toml` file, that implies (I think) that `flit_core` will respect it as well. If it's a command line option, it's easy to forget when I `flit publish` and accidentally make a 'minimal' sdist when I wanted the 'gold standard' for release.
- Maybe if we pip installed the package, then replaced pieces with symlinks (& updated RECORD)... 🤔
I'm thinking about how we might add support for symlinks to wheels. It's an often-requested feature for other reasons as well as for symlink-based editables. I don't have the appetite to start a public discussion on it right now, but I do have a basic design that might work, and in general I expect symlink support to get added to wheels at some point. So unless there's a rush, doing the simplest thing to preserve the current behaviour might be sufficient for the short term.
One issue I just encountered - if you create a new project with `flit init`, it's not initially under VCS. At this point, `flit build` still works - presumably following the same rules as the `flit_core` backend. However, when you (maybe later) do a `git init`, the behaviour changes, initially refusing to build because untracked files exist.
IMO, if `flit build` will continue long-term to behave differently from `flit_core` (with `flit build` deciding what to include based on VCS, but `flit_core` not doing so) then `flit build` should fail when used in a directory that's not under VCS, to avoid surprises. (This may be more of an issue for me than most, because I typically start a project without VCS, and only set up VCS when everything's in a working state.)
Is it worth changing `flit install --symlink` to something like `flit symlink`, and pointing people to use pip for other kinds of installation […]. Or do we like `flit install` even if it's just the same as `pip install .`?
Are you saying: rename the editable install command to `flit symlink`, and remove `flit install` because pip exists?
That sounds good to me. It's nice to have a flit command for a development packaging task (`symlink`), even if `pip install -e .` would work too, and removing the ambiguous `flit install` - which isn't for development and does double duty with the regular install command `pip install path` (or another installer of choice) - does not seem bad.
On reflection, one thing that bothers me about the differing behaviours of `flit build` and `flit_core` is that you can build a sdist (using `flit build`) such that when you unpack that sdist and build a sdist from it, what you end up with could be missing a lot of the files from the original sdist.
In practical terms this is likely not that important (if you have a sdist, why build a sdist from it?) but it seems to me that it would be a reasonable assumption to think that building a sdist from a sdist gets back what you started with. So I could imagine tools working on that assumption, for whatever reason.
Maybe if we pip installed the package, then replaced pieces with symlinks (& updated RECORD)... 🤔
This is the approach that was taken by frontend-editables, for what it's worth. Not that there was much uptake, but I didn't experience any issues in my own use of the tool :)
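To make the quoted idea concrete, here is a toy sketch of "install normally, then swap in symlinks and update RECORD". All paths and the package name are hypothetical, and a real tool (such as frontend-editables) also has to handle hashes, sizes, and non-`.py` files:

```python
# Toy sketch of the idea quoted above: after a normal install, replace the
# installed .py files with symlinks back to the source tree, then rewrite
# RECORD. Paths and package name are hypothetical.
import csv
import pathlib

src_pkg = pathlib.Path("/home/me/project/mypkg")                      # source tree (hypothetical)
site_pkg = pathlib.Path("/venv/lib/python3.11/site-packages/mypkg")   # installed copy (hypothetical)
record = site_pkg.parent / "mypkg-1.0.dist-info" / "RECORD"

# Replace each installed module with a symlink to the matching source file.
for installed in site_pkg.rglob("*.py"):
    source = src_pkg / installed.relative_to(site_pkg)
    installed.unlink()
    installed.symlink_to(source)

# Rewrite RECORD: symlinked files can no longer carry a hash or size.
rows = list(csv.reader(record.open()))
with record.open("w", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        path = row[0]
        if path.startswith("mypkg/") and path.endswith(".py"):
            writer.writerow([path, "", ""])
        else:
            writer.writerow(row)
```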
One problem (sort of hidden in the discussion) is that `flit build` and `pipx run build` are creating very different SDists. I'd really love to see the standard tooling prioritized and treated as equal! I'd much rather teach users that it's okay to run `pipx run build` regardless of backend, and I'd like to have a single CI formula that builds SDists. I support 11 different backends in scikit-hep/cookie, and flit is literally the only one of the 11 which produces a "sub-par" SDist if you don't use the custom tooling.
and it was reasonable to assume you did that from a git checkout with the git command available.
So where is the source coming from for `pipx run build`, then? It's also a git checkout, because that's where the non-SDist source lives. I think the use-VCS/not-use-VCS decision should be based on whether the project is in VCS, not whether someone is using `flit build` or `pipx run build`. This creates a problem: if you use `flit build`, then I come along, check out your repository, see that it's using flit_core, then run `pipx run build`, I will get an SDist missing some files (including `LICENSE.md`), and I'll get a wheel also missing those files, since build creates the wheel from the SDist!
I think the use-VCS/not-use-VCS decision should be based on whether the project is in VCS, not whether someone is using flit build or pipx run build.
Sorry, but I'm not going to go for any solution where the backend can give you a different sdist depending on whether the build comes from a VCS repo or not, nor whether the relevant VCS commands are on PATH.
It makes sense to do that in the simple case of a developer directly running `python -m build` in place of `flit build`, but the backend could be invoked by any sort of build or install tool on any kind of source tree (e.g. unpacked from an sdist, or from a git archive) in any environment.
I'll get a wheel also missing those files, since build creates the wheel from the SDist!
The way the backend creates the sdist should include all files that will go into the wheel. Flit doesn't offer you any ways to create or rearrange files at build time, so we can tell which they are. If something doesn't go into the sdist which does go into the wheel, I'd say that's a bug we can fix.
"VCS" is a "dirty" checkout, which is why you ask VCS what files are not supposed to be there. Anything else is a "clean" checkout, and you can start with all files.
So the formula is:
```mermaid
graph TD
    A[Input files] --> B[includes]
    B --> C[excludes]
```
If you are not in VCS, "input files" is all files - everything in the extracted SDist / extracted archive, whatever. If you are in VCS, then you are probably a developer, so you should instead make "input files" only the files that are in VCS, assuming any extra ones are "dirty".
You do not get different files based on whether you are in a VCS or not! You only get different files if you are not in a VCS and there are extra files that shouldn't be there - but arbitrary other source trees and environments shouldn't be "developing".
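A minimal sketch of that ordering, assuming a hypothetical helper built on `git ls-files`; this only illustrates the selection logic described above, not Flit's actual implementation:

```python
# Hypothetical sketch of the "input files -> includes -> excludes" order
# described above; not Flit's real code. The helpers and patterns are made up.
import fnmatch
import pathlib
import subprocess

def tracked_files(root: pathlib.Path) -> list[str] | None:
    """Return git-tracked files, or None if this isn't a usable git checkout."""
    try:
        out = subprocess.run(["git", "ls-files"], cwd=root, check=True,
                             capture_output=True, text=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    return out.stdout.splitlines()

def select_files(root: pathlib.Path, includes: list[str], excludes: list[str]) -> list[str]:
    # 1. Input files: everything on disk, unless this is a VCS checkout,
    #    in which case only tracked files (untracked ones are "dirty").
    names = tracked_files(root)
    if names is None:
        names = [str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()]
    chosen = set(names)
    # 2. Add explicit includes...
    for pattern in includes:
        chosen.update(str(p.relative_to(root)) for p in root.glob(pattern) if p.is_file())
    # 3. ...then drop anything matching an exclude pattern.
    return sorted(n for n in chosen
                  if not any(fnmatch.fnmatch(n, pattern) for pattern in excludes))
```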
Sorry, I'm rejecting any solution where the backend has a VCS behaviour and a fallback behaviour. I can see how it 'should' make sense to distinguish things that way, but things are just too messy, and I don't believe they'll stick to your assumptions well enough not to cause problems. It's a requirement for me that the `flit_core` backend does not behave differently based on whether it can get VCS metadata.
Then I think `flit build` should also not care whether it can see VCS metadata, at least by default. As it is, users are creating flit packages that miss important parts (like LICENSE.md files!) but pretend to support PEP 517. I should be able to build an SDist of any package with `pipx run build`, and currently many Flit packages don't support that - and users don't know it, because they are using `flit build`, which does work.
An idea on a separate thread would be to add

```toml
[tool.flit]
vcs = true
```

And/or have `--no-vcs` and/or `--use-vcs` flags. Ideally, though, the "default" behavior of `flit build` needs to match `pipx run build`; otherwise packages will not be standards compliant.
I view requiring git to build an sdist the same as requiring a C compiler to build an extension module: it's an external tool requirement of the build process. So for me, requiring git to be there doesn't bother me.
I'm also with @henryiii about naively expecting `flit build` to effectively be an alias for `pipx run build`. I had assumed `flit build` was filling a gap in tooling that's now been filled.
Lastly, if the various build backends don't like the idea of relying on git being installed, then we should look to standardize on how to specify what goes into an sdist. Tools could obviously provide their own bonus features, but if we need a baseline then that may be the next thing to add to pyproject.toml.
Personally, I come to flit/poetry/setuptools every six months when I'm starting a new package and thinking "what's the best way to write my pyproject.toml to do PEP517/660/621...?" Accordingly, I see option 1 as the best Single Responsibility for flit.
I'm just a layman with this stuff, but in general I think @brettcannon is right that sdists might need some standardization, particularly with the ambiguity of static vs dynamic metadata. tl;dr: in some contexts dynamic means "at build time" and in others it means "after pyproject.toml is written". Getting the version from VCS for an sdist falls right between those points.
I am a simple Flit user, not an expert in packaging at all. I personally use Flit because of its `publish` command. When I have a library of the simplest possible kind (i.e. a bunch of `.py` files), having to learn `python -m build` and `twine upload dist/*` looks like a bug in the ecosystem (which Flit fixes). So I would use anything that gives me this frictionless publishing functionality. That said, if there are any aspects where Flit diverges from the modern standards, I would say that it is better, by default, to follow the standards (because people implicitly expect it), and allow customizing behaviour via additional CLI arguments if that is needed (e.g. `flit build --policy flit`).
Regarding the `flit build`/`flit_core` backend discrepancy: I can see various people in this thread either proposing or implying the following requirements for flit:

1. `flit build` and `flit_core.buildapi.build_sdist(…)` should produce the same sdist. (Personally, I think that it should only be possible to build one sdist for a project, so I agree with this.)
2. `flit_core.buildapi.build_sdist` should not build different sdists depending on whether it can locate VCS metadata (from https://github.com/pypa/flit/issues/522#issuecomment-1126065832).
3. `flit build` should be able to use VCS metadata to decide which files to include in the project (implied by the fact that it already does that, and nobody seems to want this functionality removed).
4. flit should be able to build a project from a git archive.

However, these are incompatible with each other. Specifically, (1) and (3) imply that `flit_core` should be able to use VCS metadata. Then, (2) implies that the decision on whether VCS metadata should be used must be based on the project configuration (like a setting in `pyproject.toml`). But then if you make a Git archive of a project that is configured to use VCS metadata, and try to build an sdist from that archive, flit will try to use VCS metadata, and since there isn't any, it will fail, which violates (4).
So at least one of these requirements has to be dropped, and IMO, it makes the most sense to drop (4). I think it's reasonable for a project to not support building itself from a raw Git archive, since the VCS metadata is as much a part of the project as the code. Without (4), the following solution is possible, which satisfies all other requirements:

- Whether to use VCS data becomes a `pyproject.toml` setting, for example, `tool.flit.sdist.files_from_vcs`.
- When it is off, everything works as `flit_core` behaves now.
- When it is on: if there is a `FLIT_SOURCES` file in the project root, the initial file list is taken from that file. Otherwise, it's taken from VCS metadata, and if there is none, the build fails.
- The file list is then adjusted by the `tool.flit.sdist.{include,exclude}` settings.
- The resulting sdist records the final file list in `FLIT_SOURCES`.

The purpose of the `FLIT_SOURCES` file is to make it possible to build an sdist from an sdist, and get the same sdist as a result.
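A rough sketch of the file-list resolution that proposal describes; `FLIT_SOURCES` and `files_from_vcs` are the proposal's hypothetical names, not existing Flit features:

```python
# Sketch of the proposed resolution order; FLIT_SOURCES and files_from_vcs
# are hypothetical names from the proposal above, not real Flit behaviour.
import pathlib
import subprocess

def initial_file_list(root: pathlib.Path, files_from_vcs: bool) -> list[str]:
    sources = root / "FLIT_SOURCES"
    if not files_from_vcs:
        # Setting off: behave as flit_core does today (include/exclude config only).
        raise NotImplementedError("fall back to current flit_core behaviour")
    if sources.is_file():
        # Building from an sdist: reuse the recorded file list so that
        # sdist -> sdist is reproducible.
        return sources.read_text().splitlines()
    try:
        out = subprocess.run(["git", "ls-files"], cwd=root, check=True,
                             capture_output=True, text=True)
    except (OSError, subprocess.CalledProcessError):
        raise RuntimeError("files_from_vcs is set but no VCS metadata was found")
    return out.stdout.splitlines()
```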
For what it's worth, as a user I personally would be fine dropping (3). I doubt that's going to happen, but I just wanted to be clear that "nobody seems to want this functionality removed" is not quite true. But I'm without a doubt an outlier here, so I'm happy if the consensus is to use VCS data (it's easy enough for me to switch to another backend).
Having said that, the `files_from_vcs = false` setting that you propose would suit me fine.
- flit should be able to build a project from a git archive
In the same spirit as https://github.com/pypa/flit/issues/522#issuecomment-1133748682, I think if you configure pyproject.toml to require VCS, it's fair to reject building from a git archive.
I'd prefer if there's a way to keep as much information as possible in `[project]` (i.e. `dynamic = ["version"]`) rather than `[tool.flit.sdist]`. It's nice to be able to use `flit` as a backend without having to learn any additional tool configuration beyond what's in the PEPs, which is an advantage over `poetry`. But I also recognize that the right decision for flit might depend upon a PEP standardizing sdists.
After a few more months' thought on this, and a recent prompt from @pfmoore to revisit the question, my current plan is that the get-files-from-VCS behaviour will become optional for `flit build` and `flit publish`. Initially it will still be on by default, and then later (maybe in 4.0) off by default. This will mean that it creates sdists the same way as `flit_core` used as a backend for other tools.
I'm thinking the switch will be command-line options, like `--use-vcs` and `--no-use-vcs`. There will be some overlap where it accepts the option that's the default anyway, like the `--no-setup-py` option is still accepted but does nothing.
Once the default changes, the `--use-vcs` option will probably also fail if it can't identify your VCS, whereas currently it falls back to the simpler mechanism of selecting files to include.
I think it might be better to move those bits of functionality into flit_core, and make them both opt-in in a 4.0 release.
I see the attraction of that, but I don't think the switch belongs in the `pyproject.toml` file, or anywhere inside the project - whether you want this depends on the context, not on the project you're building. Passing settings in through a generic frontend is clumsy (`--config-settings use_vcs=True` for pip?), and it's backend specific anyway - you can't just set this for arbitrary projects.
I'd also rather not have the added complexity and fragility of this feature in flit_core, which is used more often and in more contexts, and is focused on being simple and reliable. To my mind, the only time you really want this feature is when making a release, and then people that want it can use the `flit` CLI.
whether you want this depends on the context, not on the project you're building.
For me, deciding for your own project that a git checkout is needed to build an sdist is like requiring any other specific files to be available for that build. In this instance it just so happens you require `.git` for a build instead of some other data files.
I'd also rather not have the added complexity and fragility of this feature in flit_core, which is used more often and in more contexts, and is focused on being simple and reliable.
But isn't that the primary use case for sdists, which is what we are talking about here? I would assume most people are either building from an sdist to a wheel for bootstrapping reasons (in which case VCS isn't important), or they are building an sdist (in which case it's the people doing the release and VCS is important).
Requiring `.git` for a build isn't the point here. If you are building from an sdist, you won't have a `.git`. The question is whether behaviour should be affected if `.git` is present, with no other way to control this. IMO, this is obviously wrong. Putting a project under VCS isn't necessarily a signal that I want to control what goes into my sdist using VCS commands. And adding a file to VCS doesn't necessarily mean I want it in the sdist. It might, so having an option to use VCS is fine, but the choice should be separate from my choice to use VCS to version control my project.
The second, but IMO more important, question is whether `flit build` should give different results than calling the PEP 517 build hooks. I feel that is a significant footgun. Project maintainers could choose one tool to build, but someone checking out the project to do a build (for whatever reason - a redistributor, a contributor, or just out of interest) could do something else (maybe by mistake, maybe because their processes are different, …). The project maintainers themselves could prefer to use `flit build` for one part of their process, but `py -m build` somewhere else. If, as a result, they get different sdists, that feels like a problem to me.
So for me, a command line option to `flit build` feels like the right solution. It can fail if there is no VCS available, preventing silent differences without making a non-checkout unbuildable. Next best is a project option which `flit_core` also supports, as that at least ensures consistent behaviour.
Why does a command line option feel like the right solution to you? If the project expects its sdists to be built with the vcs flag, that means that you still can't use `py -m build` - you are still going to have to adjust your workflow. Therefore, to me, this seems like a choice you'd want to make at the project level, one that should go into your `pyproject.toml` and that `flit_core` should respect when building from source.
Basically I consider it wrong if calling the PEP 517 `build_sdist` hook on a sdist gives an error, rather than re-building the same sdist. Building a completely different sdist is clearly (IMO) a straight-up bug, but I don't believe that happens, so that's not relevant. And yet having `flit build` and `flit_core` behave differently in a checkout also seems wrong (and this latter case has resulted in actual problems for me when I've hit it) and is fundamentally the reason we're even having this discussion. So my view is that you either add VCS handling to `flit_core` (and make it a project-level option) or make the basic `flit build` command work like `flit_core` and put the VCS handling behind an explicit command line option to `flit build`. Of those two, I personally prefer the latter, because it allows for an error if you request a build that uses VCS data but that data isn't present - you can't do that with a project option, as a sdist has no VCS data, and I don't think it's OK to make it impossible to rebuild the sdist from an unpacked sdist[^1].
I have a personal dislike of determining what files go in the sdist via VCS by default, because it means that irrelevant content like `.gitignore` and `.github` ends up in the sdist. Yes, you can configure them to be omitted, but as I said, it's the default behaviour that I dislike. But that's my own personal preference, and I'm not arguing that what I like must be the default behaviour (which I thought was obvious, but I'll state it just to be explicit), just that the default should be consistent between `flit build` and `flit_core`.
Please remember, though, that this is only my view. If you don't find my arguments persuasive, that's OK. After all, if things don't go the way I'm suggesting, I'll be fine, I'll just use a different build backend and no-one suffers.
[^1]: I will say that I have very little experience with tools that get metadata from VCS, like `setuptools_scm` and flit's VCS behaviour, so maybe "you can't rebuild the sdist from itself" is a more common problem than I'm assuming. But that doesn't mean that I think it's OK 🙂
I broadly agree; I'll just say that a project-level option could be deactivated when not building from source or a VCS checkout. This could be predicated on e.g. the presence of `PKG-INFO`, or the presence of the `git` command. I would personally opt for the former, with the latter raising an error if it's not found when requesting a VCS build. (I appreciate that sdists are not particularly well defined, but I would consider `PKG-INFO` to be a very obvious marker of an sdist.) I feel that this is better than the alternative - silently generating a broken sdist because you expected a standard build to just work.
As a safeguard, and provided that `flit-core` won't be gaining support for VCS builds, the `flit-core` backend could raise an error if the project-level VCS flag is true, preventing accidental "slim" builds.
Basically I consider it wrong if calling the PEP 517 `build_sdist` hook on a sdist gives an error, rather than re-building the same sdist. Building a completely different sdist is clearly (IMO) a straight-up bug, but I don't believe that happens, so that's not relevant.
I agree that both of those are wrong.
So my view is that you either add VCS handling to `flit_core` (and make it a project-level option) or make the basic `flit build` command work like `flit_core` and put the VCS handling behind an explicit command line option to `flit build`.
But aren't both of these solutions wrong by the above criteria? In the former case, the `build_sdist` hook will not be able to build from an sdist (if this project-level option is set). In the latter case, building an sdist from an sdist will yield a different sdist (assuming that the first sdist was built with VCS enabled).
I've opened #625 as a concrete proposal to discuss (both adding the `--use-vcs` and `--no-use-vcs` flags, and improving the docs about what is included), but so long as I was clear above, it shouldn't bring any surprises.
It also occurs to me that we could make a tool which examines the committed files and the `.gitignore` (/`.hgignore`) and attempts to produce short include and exclude lists from this, which you could copy into `pyproject.toml`. E.g. if I have committed a bunch of files in `doc` but ignored the `_build` subfolder, a tool could guess that I want to include `doc/` and exclude `doc/_build/`.
This could be predicated on e.g. the presence of PKG-INFO, or the presence of the git command.
I dislike this whole class of solution, I'm afraid. In particular, when Flit is behaving as the backend, I want it to work the same whether I'm working from a git checkout (with or without `git` on PATH), an sdist, a `git archive` tarball, a plain tarball with no other metadata, or any other way we might come up with of sharing the necessary files. This is part of why I've never found a solution I like for the popular 'get version from git tag' mode.
(If I'd anticipated making sdists as a backend to other tools, I'd probably never have made Flit use the VCS at all, but I didn't anticipate that.)
attempts to produce short include and exclude lists from this which you could copy
I'd also love it if this proposed command had a success/fail return code, with success if there are no files missing from the flit-core-only SDist. I keep having to write a test that runs `flit build` and `python -m build` and compares the SDists; it would be great if I could just call a command or an API to do that! This would be something like check-manifest, the equivalent tool for setuptools.
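For what it's worth, the comparison itself is easy to sketch - assuming both sdists were already built into separate directories (the paths below are made up):

```python
# Illustrative comparison of two already-built sdists, e.g. one from
# `flit build` and one from `python -m build`. Paths are hypothetical.
import sys
import tarfile

def sdist_members(path: str) -> set[str]:
    with tarfile.open(path) as tf:
        # Drop the leading "pkg-1.2.3/" component so the two lists line up.
        return {name.split("/", 1)[1] for name in tf.getnames() if "/" in name}

flit_sdist = sdist_members("dist-flit/example-1.0.tar.gz")      # from flit build
pep517_sdist = sdist_members("dist-build/example-1.0.tar.gz")   # from python -m build

missing = flit_sdist - pep517_sdist
if missing:
    print("Files missing from the PEP 517 sdist:", *sorted(missing), sep="\n  ")
    sys.exit(1)
print("The standard-tool sdist contains everything flit build included.")
```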
Makes sense, though I'd probably do it only with an option like `--check`, so it doesn't 'fail' when used to create such a list. Or separate commands for making and checking the lists, presumably with shared code.
I might not have time to work on this tool for a while, if anyone's interested in making it. I'd be open to including it in flit, although I might also take a while to get round to reviewing a PR.
It also occurs to me that we could make a tool which examines the committed files and the .gitignore (/.hgignore) and attempts to produce short include and exclude lists from this which you could copy into `pyproject.toml`

This tool presumably doesn't even need to be part of flit. It could be a standalone tool with the ability to generate the include/exclude lists in the correct format for various backends.
This would sort of be like a tool for setuptools, check-manifest.
Actually, isn't this functionality something that could just be added to the existing check-manifest project (assuming they were willing, of course)? They already have the VCS scanning code from the look of it, so why reinvent that wheel?
FYI, I made a little tool to do this, https://github.com/henryiii/check-sdist - it compares the SDist with Git, regardless of backend. AFAICT from trying it a few places, it seems to be helpful.
Hi! Is Flit heading towards becoming a simple tool that follows all the modern standards by default? Throughout the discussion, I noticed that e.g. `flit build` diverges from the standards - will this change in the future?
Flit came about in a very different world, before various standards defined how packaging tools should work together (in particular, PEPs 517, 621 and 660). A lot of design decisions I took back then don't really fit with the world we have now, and I don't have a strong sense of how they should be reconciled. So, let's have a discussion. :slightly_smiling_face:
First, a couple of specific examples of where Flit doesn't fit in:
- Editable installs: the standardised mechanism (PEP 660) uses `.pth` files to add entries to `sys.path`. But I happen to prefer doing editable installs with symlinks, which was implemented in Flit long before there was a standardised API. So if you use the Flit command, this is still possible as `flit install --symlink`. A fair bit of code which was more general now basically remains just for this feature. See also #512.
- `flit build` creates an sdist using information from git (or mercurial) to decide which files to include. I envisaged that you would only build sdists to upload to PyPI, and it was reasonable to assume you did that from a git checkout with the `git` command available. But PEP 517 made `build_sdist` a standard hook, and I didn't want to make the same assumption in the PEP 517 backend. So `flit build` gives you what I think of as the 'gold standard' sdist, but using e.g. `python -m build`, which calls the PEP 517 hook, gives you a 'minimal' sdist unless you add include/exclude patterns in your config file. See also #513.

But beyond the specifics, I want to ask a more general question about the role of Flit. Part of my goal was for Flit to provide a single CLI for tasks around making & sharing a package, similar to how pip is a single CLI for consuming packages.
`flit init` helps you set up the metadata, `flit install` lets you try it out, and `flit publish` checks everything and uploads it to PyPI. I still use it that way - old habits die hard - but with the new wave of standards, people often recommend using tools that work with any backend - `pip` to install, `build` to, uh, build, `twine` to upload. It's a worse experience if you're working on one particular package, but it works the same way for any package.

So, what do we want? I see 3 main options:
1. Focus Flit on being a backend: point people to the standard tools (pip, build, twine) and phase out the `flit` command.
2. Keep the `flit` command around as an alternative interface for people (like me!) who want to use it. The status quo isn't actually broken, after all - though I'd like to clean up things like how to decide what files to put in an sdist.
3. Make the `flit` command work through the PEP 517 interface as much as possible, and let it work with other backends. So you could `flit publish` a package which is built with setuptools, and it would work. It could still have some extra features (like installing as a symlink) for packages using Flit. I think a lot of people would like a tool like this, but perhaps it would be confusing that it appeared to be related to a specific backend (flit_core). :shrug:

I'm particularly interested in what @Carreau @pradyunsg @gaborbernat think, as the people who've volunteered to help maintain Flit. But this question is also open to anyone else who's interested.
a package which is built with setuptools, and it would work. It could still have some extra features (like installing as a symlink) for packages using Flit. I think a lot of people would like a tool like this, but perhaps it would be confusing that it appeared to be related to a specific backend (flit_core). :shrug:I'm particularly interested in what @Carreau @pradyunsg @gaborbernat think, as the people who've volunteered to help maintain Flit. But this question is also open to anyone else who's interested.