h-vetinari opened this issue 1 week ago
The bot won't understand that exclusion or how to deal with it.
You'll need to do one of 2+ things:
The advantage of item 2 is that you don't have to go around special-casing recipes.
Huh? This surely worked in the past. We did the exact same thing (https://github.com/conda-forge/arrow-cpp-feedstock/commit/403ae8657b84e85d7e7b09c45ec0fc5e4793243a) for 3.12, and the migration then moved on.
I am surprised and I think that must have been something else. The bot aggregates the links between feedstocks over all variants of the recipe and so would have had the link. See the logic here: https://github.com/regro/cf-scripts/blob/main/conda_forge_tick/feedstock_parser.py#L289. That logic has been in place for at least three years, if not longer.
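A minimal sketch of that aggregation behavior (names and structure are illustrative; the real logic lives in the linked `feedstock_parser.py`): a dependency that appears in *any* rendered variant contributes an edge to the graph.

```python
# Illustrative sketch, NOT the bot's actual code: dependency links are
# aggregated over all rendered variants of a recipe, so a dep present in
# any one variant survives into the graph.
def aggregate_deps(rendered_variants):
    """rendered_variants: one requirements dict per ci_support file."""
    deps = set()
    for reqs in rendered_variants:
        for section in ("build", "host", "run"):
            deps.update(reqs.get(section, []))
    return deps

variants = [
    {"run": ["numpy"]},            # e.g. the py313 variant (sparse excluded)
    {"run": ["numpy", "sparse"]},  # e.g. a py<313 variant
]
aggregate_deps(variants)  # the sparse link survives in the union
```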
> This surely worked in the past.
Well, perhaps it was an accident then. Could it depend on which python version (`CONDA_PY`?) is used when rendering recipes to generate the graph? Not sure I followed all the trails correctly, but the following calls into conda-build:
https://github.com/regro/cf-scripts/blob/842a0d7f6730cd27ff0c1cccd716c0ff7c61913d/conda_forge_tick/utils.py#L885-L896
so this could play a role?
Specifically for new python migrations, if/once the bot runs on that newest version (here 3.13), it would ignore this case. This could explain how this ended up "working" previously.
The bot is not supposed to use any of the `CONDA_PY` variables, and those are only set by conda-build during some, but not all, operations.
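For intuition only: a `# [py<313]` selector is evaluated against whatever python version is in effect at render time, so the version used to render decides whether the optional dependency (and its graph edge) exists at all. A toy evaluator, not conda-build's implementation:

```python
# Toy evaluator for a preprocessing selector like `# [py<313]`.
# NOT conda-build's code; `py` mimics the integer conda-build exposes
# for the python version, e.g. 312 for Python 3.12.
def selector_applies(expr: str, py: int) -> bool:
    return bool(eval(expr, {"__builtins__": {}}, {"py": py}))

selector_applies("py<313", 312)  # True: the optional dep is kept
selector_applies("py<313", 313)  # False: the dep (and its edge) vanishes
```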
The intent is that each recipe is rendered directly with each of its ci_support files.
Understood about the intent, but this was actually a very good "accidental" feature. We're migrating python once per year, and it would be good to be able to keep optional dependencies without throwing the bot out of whack.
How would

> extend the bot's logic to understand package dependent node links in the graph (hard)
work for a given migration (say python again)? Wouldn't that create an explosion of separate graphs? Or we'd have one graph, but map the selectors to the edges somehow, in a way that we can query the graph while ignoring certain edges?
Something like

```python
graph_dup = copy.deepcopy(graph)  # the reference graph with edges that have metadata
all_platforms = ...  # full set of platforms
plat_allowed = migrator.get("platform_allowlist", all_platforms)
wrong_plat = get_edge_to_plat(graph_dup, plat_allowed)  # edges that aren't in the allowlist
graph_dup.remove_edges_from(wrong_plat)
# specific to python migrations: ignore edges that don't apply to the newest
# python (e.g. `# [py<313]` if 3.13 is newest)
if migration.startswith("python"):
    # not sure how to determine newest for now
    wrong_py = get_edge_to_py(graph_dup, "3.13")
    graph_dup.remove_edges_from(wrong_py)
# rest of migrator processing
```
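Concretely, the metadata-on-edges idea could look something like the following with a plain dict of edges (the `selectors` schema here is an assumption for illustration, not the bot's actual graph format):

```python
# Hypothetical edge store: (parent, child) -> metadata, where "selectors"
# records under which selector(s) the edge exists. Schema is illustrative.
edges = {
    ("numba-feedstock", "pyarrow-feedstock"): {"selectors": {"py<313"}},
    ("arrow-cpp-feedstock", "pyarrow-feedstock"): {"selectors": set()},
}

def prune_for_newest_python(edges, newest="313"):
    """Drop edges that only apply below the newest python."""
    sel = f"py<{newest}"
    return {e: d for e, d in edges.items() if sel not in d["selectors"]}

pruned = prune_for_newest_python(edges)
# the numba edge disappears, so pyarrow would no longer wait on numba
```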
> You'll need to do one of 2+ things:
Simplest solution for now is https://github.com/conda-forge/pyarrow-feedstock/commit/1a1668265d1af73af36a608ffa627928dd8f9aee. Though I'd still be very interested in finding a way that optional dependencies with a `# [py<313]` selector don't end up hanging the bot.
Yeah that’s the idea. It’s not an easy thing to do for sure.
Just noticed on the status page that pyarrow is still marked as awaiting parents, specifically `numba`. However, pyarrow was already migrated for 3.13 a while ago, and we specifically excluded `sparse` (which causes the dependence on `numba`) for `py<313`. So the bot is picking up incorrect metadata for the graph here. CC @beckermr
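For background, "awaiting parents" means a feedstock is only eligible for a migration once all of its graph parents have been migrated, so a single spurious edge is enough to park it indefinitely. A toy model of that scheduling rule (illustrative, not cf-scripts code):

```python
# Toy model of "awaiting parents": a feedstock becomes eligible only once
# all of its graph parents are migrated. Names are illustrative.
parents = {"pyarrow": {"numba", "arrow-cpp"}}

def ready(node, parents, migrated):
    return parents.get(node, set()) <= migrated

ready("pyarrow", parents, {"arrow-cpp"})           # False: stuck behind numba
ready("pyarrow", parents, {"arrow-cpp", "numba"})  # True once numba migrates
```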