beckermr closed this issue 3 days ago
This seems like a good idea. However, it may have moved a bit too fast.
Looks like the lint doesn't understand fairly simple CUDA recipes. For example: https://github.com/conda-forge/cudnn-feedstock/pull/96#issuecomment-2489361187 or https://github.com/conda-forge/cuda-cuobjdump-feedstock/pull/16#issuecomment-2489326565
Also, it notes that there are logs but doesn't actually link them, so there is nowhere to look for info.
I can fix the logs. I thought the webservices included a link.
However, those are both hints, not lints. So you can ignore them.
Except this is an issue asking for core's feedback that got closed in less than 24 hours and included in a release.
Not to mention, I had asked in this comment (https://github.com/conda-forge/conda-smithy/pull/2142#issuecomment-2486997149) to hold off on a release until confirming we fixed an installer bug. Note there is a new pixi bug: https://github.com/conda-forge/conda-smithy/issues/2150
I think we may need a bit more process here:
I don't think we should be too process-driven about smithy releases and waiting for PRs. We are in a situation where active work is blocked on smithy releases. Thus we should make as many as we need. I'll make one tonight if you need it.
I'd also be happy to cut a release for every PR merged to main automatically.
This last thing makes a lot of sense IMHO. I've been running the bot this way and it really streamlines fixes propagating into the system live. If we did this, we should move to calver.
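For reference, something like this is roughly what picking a per-merge calver tag could look like. This is purely a sketch, not an existing smithy script, and the exact tag scheme (date plus a suffix for multiple releases per day) is just one option:

```python
# Hypothetical sketch of picking the next calver tag for an automated
# per-merge release; bumps a suffix when a tag for today already exists.
import datetime
import subprocess

def next_calver_tag() -> str:
    today = datetime.date.today().strftime("%Y.%m.%d")
    # List any tags already cut today (e.g. "2024.11.21", "2024.11.21-1").
    existing = subprocess.run(
        ["git", "tag", "--list", f"{today}*"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return today if not existing else f"{today}-{len(existing)}"

if __name__ == "__main__":
    # In CI this tag would then be created and pushed, which would trigger
    # the normal release workflow.
    print(next_calver_tag())
```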
In any case, if we'd like a well-defined release process, I suggest that a CFEP be drafted and we vote on it.
If you want something backed out or fixed or changed, I'm happy to discuss as usual.
I'll add that the need for a faster release cycle is really driven by our inability to write tests against external systems that change in the background.
Human code review is great for catching some kinds of errors (style, clarity, etc.), but not as good at catching bugs of various kinds. That doesn't mean we shouldn't do it, but we should recognize its limits.
Given that we can't write complete tests in many cases, that we cannot anticipate all of the bugs, and that smithy has a large number of code options for various edge cases in CI builds, we're basically having to rely on running it live to test things out.
We do that by hand in many cases, but that is a huge pain.
So this leaves us with making more releases to push bug fixes as we use live runs as a test suite.
It is not ideal, but is a very natural approach given the constraints.
One idea, which I'm not a huge fan of but could make some people excited, is to automate having the webservices rerender feedstocks using smithy versions from smithy PRs. This would reduce the friction around live testing and hopefully help us ship fewer bugs.
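Roughly, the webservices could do something like the following. This is a hypothetical sketch, not a real workflow: the PR number and feedstock path are placeholders, and in practice it would run in a scratch environment that already has smithy's dependencies installed:

```python
# Hypothetical sketch of "rerender a feedstock with smithy from a PR".
import subprocess

def rerender_with_smithy_pr(pr_number: int, feedstock_dir: str) -> None:
    # refs/pull/<N>/head is GitHub's standard ref for a PR's head commit.
    subprocess.run(
        [
            "pip", "install",
            f"git+https://github.com/conda-forge/conda-smithy.git@refs/pull/{pr_number}/head",
        ],
        check=True,
    )
    # Rerender the feedstock checkout with the freshly installed smithy and
    # inspect the resulting diff before deciding whether to push anything.
    subprocess.run(["conda-smithy", "rerender"], cwd=feedstock_dir, check=True)
```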
That all being said, I think fundamentally smithy dev is in many cases reactive to external things that block folks, and pushing bug fixes quickly definitely makes for a good user experience.
The bot and many other tools rely on something being able to parse the recipe YAML.
Right now there are a few such parsers for v0 recipes in existence. They include
I propose that we issue a
The new v1 recipes are always parseable so this would not apply to them.
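To make the contrast concrete, here is a rough sketch of why v0 recipes need a Jinja2 pass before any YAML parser can read them, while v1 recipes load as plain YAML. This is not conda-build's actual rendering logic; the `compiler` stub below is purely illustrative:

```python
# Minimal sketch: v0 meta.yaml needs Jinja2 rendering before YAML parsing;
# v1 recipe.yaml keeps templating inside string values and is valid YAML as-is.
import jinja2
import yaml

V0_META = """\
{% set version = "1.2.3" %}
package:
  name: example
  version: {{ version }}

requirements:
  build:
    - {{ compiler('c') }}
"""

V1_RECIPE = """\
package:
  name: example
  version: 1.2.3

requirements:
  build:
    - ${{ compiler('c') }}
"""

# v0: plain YAML parsing fails (or silently mis-parses) because of the Jinja2
# syntax, so tools have to render the template first with stubbed-out context
# functions such as `compiler` (the stub here is illustrative only).
rendered = jinja2.Environment().from_string(V0_META).render(
    compiler=lambda lang: f"{lang}-compiler"
)
print(yaml.safe_load(rendered)["requirements"]["build"])  # ['c-compiler']

# v1: templating lives inside string values (`${{ ... }}`), so any YAML
# parser can load the recipe without rendering anything.
print(yaml.safe_load(V1_RECIPE)["requirements"]["build"])  # ["${{ compiler('c') }}"]
```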
Thoughts @conda-forge/core?