conda-forge / abseil-cpp-feedstock

A conda-smithy repository for abseil-cpp.
BSD 3-Clause "New" or "Revised" License

build string incompatibilities causing very old abseil to be picked #46

Open h-vetinari opened 2 years ago

h-vetinari commented 2 years ago

Arrow is trying to move to C++17 upstream, and running into problems because their environment resolution still pulls in a very old abseil. It looks like the issue appears on the google-cloud-cpp feedstock as well.

Forcing a higher pin then leads to something like

ValueError: Incompatible component merge:
  - 'mpi_mpich_*'
  - 'mpi_mpich_tempest*'
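For what it's worth, the two patterns in that traceback are not truly contradictory: every build string matching `mpi_mpich_tempest*` also matches `mpi_mpich_*`, so a correct merge would just keep the narrower pattern. A minimal sketch of that subsumption check (illustrative only, not conda's actual MatchSpec code; the function name is made up):

```python
from fnmatch import fnmatchcase

def merge_trailing_glob(a: str, b: str) -> str:
    """Merge two build-string glob constraints into one, if possible.

    Sketch of the idea only, not conda's real implementation.  For
    simple patterns of the form 'prefix*', one pattern subsumes the
    other whenever its fixed prefix itself matches the other pattern;
    the narrower pattern then satisfies both constraints.
    """
    for narrow, wide in ((a, b), (b, a)):
        if fnmatchcase(narrow.rstrip("*"), wide):
            return narrow
    # Neither pattern subsumes the other: the failure the log shows.
    raise ValueError(f"Incompatible component merge:\n  - '{a}'\n  - '{b}'")

# 'mpi_mpich_tempest*' is strictly narrower than 'mpi_mpich_*':
print(merge_trailing_glob("mpi_mpich_*", "mpi_mpich_tempest*"))
# prints "mpi_mpich_tempest*"
```

For trailing-`*` globs, subsumption reduces to a prefix check, which is why matching the stripped prefix against the other pattern is enough here.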

Aside from waiting for the conda PR that fixes this to get merged (much less to percolate throughout the ecosystem), we should fix this.

Right now I don't know where to start (searching logs & artefacts is also trickier on a phone), but I wanted to at least open the issue already.

CC @conda-forge/core

h-vetinari commented 2 years ago

@hmaarrfk You mentioned https://github.com/conda-forge/dagmc-feedstock/issues/15 but it seems the comment got deleted again...? 🤔

hmaarrfk commented 2 years ago

(Sorry for deleting the comment; I'm just trying to stay out of this to cool things off.)

Yeah, I had seen an issue with MPI pinnings before. I wanted to make you aware, but you had found the PR to conda, so I figured you had already gone pretty deep down tracing back the lineage.

Looking through the referenced links in arrow and google-cloud-cpp, I couldn't find the conflict easily. Do you have a log that exemplifies it?

h-vetinari commented 2 years ago

> (Sorry for deleting the comment; I'm just trying to stay out of this to cool things off.)

All good.

> Yeah, I had seen an issue with MPI pinnings before. I wanted to make you aware, but you had found the PR to conda, so I figured you had already gone pretty deep down tracing back the lineage.

I just remembered the build string issue and linked it. Haven't managed to figure out which package this is coming from, but if we're unlucky like with https://github.com/conda-forge/conda-forge-pinning-feedstock/issues/2270, this might even be some "spooky action at a distance" sort of thing from unrelated packages.

> Looking through the referenced links in arrow and google-cloud-cpp, I couldn't find the conflict easily. Do you have a log that exemplifies it?

See these logs, coming from this commit and this comment.

hmaarrfk commented 2 years ago

Thanks. Uploading here for archival as well (Azure deletes them): 14_raw_log.txt

Ok thanks. So how did it get resolved? It seems that the OSX CIs are passing now?

h-vetinari commented 2 years ago

> Ok thanks. So how did it get resolved? It seems that the OSX CIs are passing now?

AFAICT @pitrou removed the constraint for the abseil version again...

hmaarrfk commented 2 years ago

> AFAICT @pitrou removed the constraint for the abseil version again...

Ah got it....

And just so I understand: was it specific to OSX? I presume not, but if one can recreate it on Linux, that makes it easier for me to try and help.

pitrou commented 2 years ago

> Ok thanks. So how did it get resolved? It seems that the OSX CIs are passing now?

> AFAICT @pitrou removed the constraint for the abseil version again...

I simply disabled GCS on macOS in addition to Windows.


diff --git a/dev/tasks/conda-recipes/arrow-cpp/meta.yaml b/dev/tasks/conda-recipes/arrow-cpp/meta.yaml
index 59a3de085d97..87eb03499066 100644
--- a/dev/tasks/conda-recipes/arrow-cpp/meta.yaml
+++ b/dev/tasks/conda-recipes/arrow-cpp/meta.yaml
@@ -75,7 +75,10 @@ outputs:
         - c-ares
         - gflags
         - glog
-        - google-cloud-cpp
+        # On macOS and Windows, GCS support fails linking due to ABI
+        # issues with abseil-cpp
+        # (see https://github.com/conda-forge/abseil-cpp-feedstock/issues/45)
+        - google-cloud-cpp  # [linux]
         - grpc-cpp
         - libprotobuf
         - clangdev 10  # [not (osx and arm64)]
@@ -217,7 +220,7 @@ outputs:
         - pyarrow.parquet
         - pyarrow.plasma   # [unix]
         - pyarrow.fs
-        - pyarrow._s3fs
+        - pyarrow._s3fs    # [linux]
         - pyarrow._hdfs
         # We can only test importing cuda package but cannot run when a
         # CUDA device is not available, for instance, when building from CI.
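As background for the patch above: the trailing comments like `# [linux]` are conda-build preprocessing selectors, evaluated when the recipe is rendered, so an annotated line survives only on the matching platform. A schematic fragment (package names here are placeholders, not from the Arrow recipe):

```yaml
requirements:
  host:
    - always-present-dep
    - linux-only-dep          # [linux]
    - everywhere-but-osx-arm  # [not (osx and arm64)]
```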
hmaarrfk commented 2 years ago

Ok, I'm going to try to iterate a bit on this in https://github.com/hmaarrfk/staged-recipes/pull/2. I have a feeling that there might be another challenge with clangdev being pinned to 10 while conda-forge has moved to 14.

I am not too familiar with OSX so I'm going to have to experiment a little more.

For Windows, I think you might be right about the abseil conflict.

Thank you very much for pointing me to the build log and the fixes that worked around it.

hmaarrfk commented 2 years ago

A small update: I was able to have mamba solve the recipe as you linked to above, @h-vetinari, without the patch that you provided, @pitrou.

I suspect it has to do with a mamba/conda thing. I'm trying it again with conda as the solver (I think).

I did take a look at the arrow logs a little more closely. It seems you are doing an anaconda->conda-forge switch in the build steps. I've been having a harder and harder time doing that switchover recently: all the packages need to be updated, which is sometimes a hard solve.

You might have better luck installing miniforge/mambaforge directly in your CI. Churn.... I know....

So it might be that conda cannot solve the two constraints above, but that mamba can. To use mamba in your builds, generally:
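The snippet that followed appears to have been lost in this transcript. As a hedged sketch of the usual approach (installer URL and paths are the standard Miniforge-release ones, not taken from this thread), a CI step might look like:

```shell
# Install Mambaforge, which ships mamba and defaults to conda-forge.
curl -L -o mambaforge.sh \
  "https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh"
bash mambaforge.sh -b -p "$HOME/mambaforge"
source "$HOME/mambaforge/etc/profile.d/conda.sh"

# Let mamba (rather than conda) solve the environment that was failing.
mamba create -y -n arrow-dev -c conda-forge \
  abseil-cpp google-cloud-cpp grpc-cpp
conda activate arrow-dev
```

This sidesteps the anaconda->conda-forge switchover entirely, since the base install already defaults to conda-forge.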

hmaarrfk commented 2 years ago

It seems like I am able to solve the environment, even with conda.

I'm not sure if I can make PRs and see your CIs' results. If so, and if you can create a reproducer (with all the latest bells and whistles) and point me to the commit, I can branch off that and iterate.