llvm / torch-mlir

The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem.

Experiment: coordinate LLVM commits between torch-mlir and onnx-mlir #1135

Open sstamenova opened 2 years ago

sstamenova commented 2 years ago

Following up on a couple of conversations we've had about coordinating LLVM commits across projects (in onnx-mlir and in LLVM), we had a follow-up discussion in the Torch MLIR community meeting yesterday (see this comment for more details).

The short-term solution that we decided to run as an experiment is to use Torch MLIR as the "source of truth" for which LLVM commit other projects should use, with the caveat that the LLVM commit chosen for Torch MLIR will have gone through enough testing to essentially guarantee that it also satisfies any requirements that onnx-mlir has. Based on the discussion, it should be fairly trivial to find a version of MHLO that uses the same LLVM commit as well, though MHLO might need some fixes for onnx-mlir (to make the shared library build work, for example).

We are going to pick the commit based on the internal testing we do at Microsoft when choosing the LLVM commit to merge with. This covers several platforms (Ubuntu, Windows, CentOS), several build tools (clang, VS, gcc), and both static and shared library builds. We'll communicate the commit here and use it to update Torch MLIR as we are able (though ideally, after we communicate the commit, someone else will do the update some of the time). Then, when it is time for onnx-mlir to update to a newer LLVM commit (which could also be done by us, though ideally not always), that project (as well as any others that want to stay in sync) can choose any of the recent LLVM commits on this list, and we'll have at least one point a month where Torch MLIR and onnx-mlir are synced to the same LLVM commit.
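
For reference, the static/shared distinction is usually what catches breakage; a minimal sketch of the two LLVM/MLIR configurations such validation covers (these are standard LLVM CMake flags, the exact options used in the internal runs aren't spelled out in this thread):

# Default static build of MLIR (paths relative to a torch-mlir checkout).
cmake -GNinja -Bbuild-static externals/llvm-project/llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_PROJECTS=mlir

# Shared-library build; this is the configuration that tends to surface
# missing-symbol and visibility problems that the static build hides.
cmake -GNinja -Bbuild-shared externals/llvm-project/llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_PROJECTS=mlir \
    -DBUILD_SHARED_LIBS=ON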

The goal is to improve the process for everyone in the short term while also working on a longer-term solution in LLVM (for example, pre-commit checks). This is a relatively cheap first step towards all of these projects being in sync on the same LLVM dependency.

A second step would be to start tagging the "green" commits automatically (see https://discourse.llvm.org/t/coordinate-llvm-commits-for-different-project/63990/65) instead of "announcing" them manually, but that will require some additional work we need to evaluate.
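
As a sketch of what tagging the green commits automatically could look like once the checks pass (the tag naming and the mirror remote below are hypothetical, not an existing mechanism):

# Hypothetical CI step: once the validation jobs are green for a commit,
# publish it as a tag on a mirror that downstream projects can query.
GREEN_SHA=ec5def5e20f6ae9fe8cc30e5ee152d4b239e1e95   # example: the commit announced below
TAG="greencommit/$(date +%Y-%m-%d)"
git -C llvm-project tag "$TAG" "$GREEN_SHA"
git -C llvm-project push <mirror-remote> "$TAG"      # <mirror-remote> is a placeholder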

@AlexandreEichenberger @sjarus @stellaraccident @silvasean @joker-eph @ashay

So to recap:

  1. Once a week, we will identify a "green" LLVM commit and post it here
  2. Once a week, someone (sometimes us, though hopefully not always) will update Torch MLIR to this commit
  3. Once a month, someone (sometimes us, though hopefully not always) will update onnx-mlir to one of these "green" commits from the past month
  4. At whatever other cadence they choose, other projects can also use one of these "green" commits for their own LLVM updates to stay in sync with Torch MLIR and onnx-mlir

Let me know if I missed anything or if you have any other questions.
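
For concreteness, step 2 above amounts to a small submodule bump in torch-mlir; a minimal sketch using the first green commit announced below (commit message and workflow details are illustrative only):

# Point torch-mlir's llvm-project submodule at the announced green commit.
cd torch-mlir
git -C externals/llvm-project fetch origin
git -C externals/llvm-project checkout ec5def5e20f6ae9fe8cc30e5ee152d4b239e1e95
git add externals/llvm-project
git commit -m "Bump llvm-project to the 2022-08-04 green commit"
# Fix any resulting API breakage, run the test suite, then send a PR as usual.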

ashay commented 2 years ago

Based on last night's run, here's the green LLVM commit for this week: ec5def5e20f6ae9fe8cc30e5ee152d4b239e1e95.

However, MHLO is just a tad behind this commit, causing the Torch-MLIR and ONNX-MLIR builds to break when I use that commit. Luckily, the patch to MHLO is short. @joker-eph is it possible to get this patch merged into MHLO?

diff --git a/lib/Dialect/mhlo/IR/hlo_ops.cc b/lib/Dialect/mhlo/IR/hlo_ops.cc
index 41a83026..ae41d474 100644
--- a/lib/Dialect/mhlo/IR/hlo_ops.cc
+++ b/lib/Dialect/mhlo/IR/hlo_ops.cc
@@ -715,7 +715,7 @@ void ConstantOp::build(OpBuilder& /*builder*/, OperationState& result,
     // All XLA types must be tensor types. In the build() method, we want to
     // provide more flexibility by allowing attributes of scalar types. But we
     // need to wrap it up with ElementsAttr to construct valid XLA constants.
-    type = RankedTensorType::get(/*shape=*/{}, value.getType());
+    type = RankedTensorType::get(/*shape=*/{}, value.cast<TypedAttr>().getType());
     value = DenseElementsAttr::get(type.cast<TensorType>(), value);
   }

@@ -728,7 +728,7 @@ void ConstantOp::build(OpBuilder& /*builder*/, OperationState& result,
 LogicalResult ConstantOp::inferReturnTypes(
     MLIRContext*, Optional<Location>, ValueRange, DictionaryAttr attributes,
     RegionRange, SmallVectorImpl<Type>& inferredReturnTypes) {
-  Type type = attributes.get("value").getType();
+  Type type = attributes.get("value").cast<TypedAttr>().getType();
   inferredReturnTypes.push_back(type);
   return success();
 }
@@ -8758,7 +8758,7 @@ LogicalResult deriveShapeFromOperand(
 Operation* MhloDialect::materializeConstant(OpBuilder& builder, Attribute value,
                                             Type type, Location loc) {
   // HLO dialect constants require the type of value and result to match.
-  if (type != value.getType()) return nullptr;
+  if (type != value.cast<TypedAttr>().getType()) return nullptr;
   // HLO dialect constants only support ElementsAttr unlike standard dialect
   // constant which supports all attributes.
   if (auto elementsAttr = value.dyn_cast<ElementsAttr>())

joker-eph commented 2 years ago

I'm happy to offer you a branch in the mlir-hlo repo for managing this, but the HEAD of the repo is cadenced by Google internal integration: we bump it when it passes all of our tests internally.

stephenneuendorffer commented 2 years ago

This is one of the things I've struggled with when trying to define a cadence. If the cadence is short (say, daily), then you can proceed with a fix-forward approach: take fixes like this, push them to head, and wait for tomorrow's build. With a monthly cadence, you've reached the point where it makes more sense to branch the dependent repos and patch in the branch; otherwise there's no chance of converging. The downside is more branch management.

BTW, super-excited to see this experiment happen, we'll definitely be looking at aligning our work to these points.

ashay commented 2 years ago

@joker-eph Thanks, if you could create a branch, that'd be great. I'm a bit worried that managing an extra branch might get cumbersome, but I'm happy to try it and see how often we need to make these short-term patches.

sjarus commented 2 years ago

Great to see this convergence @sstamenova ! TOSA has common code used in the TorchToTosa and ONNXToTOSA efforts and alignment here makes it much more reasonable to proceed with putting that code in the core MLIR llvm-project side so I can then pick it up from both Torch-MLIR and ONNX-MLIR.

powderluv commented 2 years ago

One other thing to consider is if we can take on an optional dependency on onnx/onnx-mlir "importer" so our CIs can run simple exports to onnx Dialect. I am not sure if that would require the deep dependency on LLVM/MHLO etc. Maybe eventually someone could wire up the onnx Dialect --> Linalg-on-tensors etc in torch-mlir too.

ashay commented 2 years ago

@joker-eph For patches required for MHLO to work with new LLVM commits, in addition to pushing them to a branch in the MHLO repo, I'm happy to send PRs (that can be tested using your internal CI) if that'd mean less work for y'all during a subsequent LLVM update. For us, the benefit would be that these patches would live outside the master branch for a shorter time frame than otherwise. Let me know if that sounds like an idea worth pursuing.

stellaraccident commented 2 years ago

Just in case people aren't aware: for a submodule dependency, it is sufficient for the commit to be reachable from any branch in the source repo. On the IREE side, when we create a cherrypick patch, we push it to a branch on the source repo (which we maintain mirrors of) with a date and index number in the name, and then just don't overwrite those branches. A similar thing can be done here (i.e. push each week's patches to a dated branch in the mlir-hlo repo).

Further, we always start fresh on the next integrate. If the cherrypicks haven't landed in the respective upstream, then they can still be retrieved from the prior branch and reapplied. But making it manual biases away from long term divergence while still leaving a patch record that can be mined for future work.
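
A minimal sketch of that flow for mlir-hlo, using the greencommit/ naming that comes up later in this thread (the base and fix commits are placeholders):

# Start from the mlir-hlo commit that matches this week's green LLVM commit,
# apply whatever cherrypicks the integrate needs, and park them on a dated branch.
git clone https://github.com/tensorflow/mlir-hlo.git && cd mlir-hlo
git checkout -b greencommit/2022-08-04-ec5def5 <mhlo-base-commit>
git cherry-pick <fix-commit>                  # e.g. the TypedAttr patch above
git push origin greencommit/2022-08-04-ec5def5
# A submodule in torch-mlir / onnx-mlir can now pin any commit on this branch;
# since a branch ref reaches it, git gc on the server will not discard it.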

ashay commented 2 years ago

Thanks @stellaraccident, that's a great idea, assuming the multitude of branches in the MHLO repo isn't a problem.

joker-eph commented 2 years ago

@ashay you should have an invite to join the TensorFlow organization, which will give you write access to the mlir-hlo repository. I created a greencommit branch for you, but if you need more, I would like us to keep some namespace logic for this (for example, create branches with names that always start with greencommit/ followed by whatever scheme you'd like).

sstamenova commented 2 years ago

Thanks @joker-eph! I think @stellaraccident's suggestion of keeping branches in the mlir-hlo repo to correspond to each week's merge (as necessary) makes sense. If we have a single branch that moves forward, we run the risk of breaking previous versions of torch-mlir which will no longer correspond to a branch that works for them.

ashay commented 2 years ago

Edit: I realized that if we were to run git gc, then commits outside of branches (and thus, presumably unreachable) would be removed from the repo.

At the risk of sounding ignorant, can't we link the MHLO submodule in Torch-MLIR to different commit hashes in the greencommit branch? Since the commits don't go away, the old versions will continue to work, won't they?

powderluv commented 2 years ago

Essentially you want to commit the mhlo greencommit roll into Torch-MLIR main (along with the corresponding greencommit LLVM)? As long as the CI passes that should be ok (and if not, we should increase the CI coverage).

But that doesn't prevent an MHLO SHA from disappearing with git gc and friends if run on the main repo. You would need a greencommit/ tag/branch ref in the main mhlo repo to protect against that, or ensure the greencommit is part of main's linear history.
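
A quick way to check whether a given MHLO commit is protected, i.e. reachable from a ref the server keeps, and to protect it if it isn't (the SHAs below are placeholders):

# List the branches that can reach the commit; an empty result means the commit
# is unreachable and could be dropped by a future git gc on the server.
git branch -r --contains <mhlo-sha>

# Creating a greencommit/ branch (or a tag) that points at it keeps it reachable.
git branch greencommit/<date>-<short-sha> <mhlo-sha>
git push origin greencommit/<date>-<short-sha>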

ashay commented 2 years ago

Thanks @powderluv, I see the reason for the branch now.

The green LLVM commit for this week is 061e0189a3dab6b1831a80d489ff1b15ad93aafb.

sstamenova commented 2 years ago

I agree that it makes sense to follow a pattern such as greencommit/ so that we can match up branches whenever there are changes needed.

ashay commented 2 years ago

@joker-eph Since I don't have the permission to rename protected branches, could you rename the greencommit branch to greencommit/2022-08-04-ec5def5 please? That would accomplish two things:

  1. preserve the commit hash while archiving the branch for last week's update
  2. allow branches with the greencommit/ prefix, since git doesn't allow naming branches as greencommit/blah if a branch called greencommit exists.
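
The conflict in point 2 comes from how git stores refs: an existing branch named greencommit occupies refs/heads/greencommit, so no ref can be created under refs/heads/greencommit/. A local sketch of the behavior (error text paraphrased):

git branch greencommit                       # a branch with the bare name...
git branch greencommit/2022-08-04-ec5def5    # ...blocks anything under greencommit/
# fatal: cannot lock ref 'refs/heads/greencommit/2022-08-04-ec5def5':
#        'refs/heads/greencommit' exists
git branch -D greencommit                    # deleting (or renaming) the bare branch
git branch greencommit/2022-08-04-ec5def5    # frees the namespace and this now works
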
joker-eph commented 2 years ago

You should be able to push a new branch instead of renaming it, right?

joker-eph commented 2 years ago

Actually let me adjust the permission, seems like it is blocked...

ashay commented 2 years ago

Yes, pushing to a new branch works, but that'd change the commit hash, thus breaking commits from last week where we used the old commit hash.

joker-eph commented 2 years ago

I deleted greencommit so you can push under the namespace now

silvasean commented 1 year ago

This experiment is under way and quite successful in https://github.com/llvm/torch-mlir/issues/1178

We can close this issue for now and reopen / create a new issue when we need to course-correct.

powderluv commented 1 year ago

We have been seeing some breakages downstream in SHARK (https://github.com/nod-ai/SHARK/actions/runs/3671729600/jobs/6207221342) and IREE-torch (https://github.com/iree-org/iree-torch/actions/runs/3578370372/jobs/6018456328) because IREE moves forward with LLVM rebases more aggressively than our current torch-mlir LLVM updates, and that causes binary compatibility issues for torch-mlir downstream projects like these.

Since IREE is already in sync with the TF / MHLO green commit, would it be ok to pick the IREE LLVM SHA (https://github.com/iree-org/iree-llvm-fork) and run our "green commit" checks on it to produce the tagged commit? We could pick the IREE LLVM SHA on Sunday night and try to get a green commit on Monday.

ashay commented 1 year ago

I don't think there would be an issue with picking the LLVM commits from the iree-llvm-fork, as long as the LLVM commit in that repo refreshes at least once daily. Right now, we have a scheduled job that runs once every day and picks HEAD from that time.

But as far as I can tell, we won't have green commits until this patch is either reverted or fixed. I added a comment to that patch explaining the problem and how to reproduce it.

ashay commented 1 year ago

Btw, since iree-llvm-fork can contain patches that aren't upstreamed, I don't think the resulting green commit can be shared broadly, can it? Are you asking about running an additional set of green commit checks besides the ones that we run currently?

powderluv commented 1 year ago

I was thinking we only need to sync to the root commit that iree-llvm-fork is rebased on (the local patches can be ignored).
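
Finding that root commit is mechanical, since the fork's local patches sit on top of some upstream LLVM commit; a minimal sketch, assuming both repos use a main branch (the remote name iree-fork is just a label chosen here):

# Compute the newest upstream llvm-project commit that iree-llvm-fork is based on.
git clone https://github.com/llvm/llvm-project.git && cd llvm-project
git remote add iree-fork https://github.com/iree-org/iree-llvm-fork.git
git fetch iree-fork
git merge-base origin/main iree-fork/main   # prints the candidate "green" SHA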

sstamenova commented 1 year ago

How different is that from the weekly commit that we identify? How often does it update?

Also, I'm trying to understand the ask - is it to run tests on that commit to see if it is also green? Or is the ask to use that commit for the llvm update as it would have already run through sufficient validation (including Windows)?

powderluv commented 1 year ago

I think the requirements to bump LLVM would be more lightweight with the IREE / MHLO check. It doesn't roll every day, but the requirements may not be as strict as the current green commit roll. Currently we could roll to the IREE / MHLO LLVM, and iree-torch and SHARK would all work ok (we have a local fork like that for Nod.ai customers), but it is held up because of the revert needed to get a green commit. So currently, rolling torch-mlir's LLVM would break Windows in the ONNX flow, but instead we have broken iree-torch and SHARK (and our downstream customers who use IREE).

Open to any suggestions on how to move this forward, so any thoughts / ideas are welcome. We just can't be stuck on the Dec 2nd LLVM if they don't revert that commit.

sstamenova commented 1 year ago

If I understand correctly, iree-torch and SHARK are broken because they are on a newer commit of LLVM than torch-mlir. But moving to any newer commit will break Windows. So your proposal is to move torch-mlir forward to unbreak iree-torch and SHARK and break Windows instead?

This seems like a poor trade-off in general - situations like this with various platforms can happen at any time in the future and we don't really want to trade one failure for another. A better solution would be to address the issue in the commit that is causing the failure in the first place. Is there a reason we can't revert it?

powderluv commented 1 year ago

I agree the tradeoff is not ideal. I think if we aggressively address the hold-up, the status quo should still work - it has mostly worked, and maybe we just have to wait a few days for the next bump.

silvasean commented 1 year ago

In this particular case, the root cause seems to be the upstream LLVM patch and I do not see an immediate need to change our policy.

That patch breaks existing upstream tests in the LLVM Core Support Tier (not to mention us downstream) and should be reverted immediately. We do not need to wait for the author and should perform the revert ourselves (of course we should inform the author and provide repro instructions). It is in line with LLVM practices to "revert to green" in this case (see LLVM patch reversion policy).

One thing I am trying to understand is how this patch is still not reverted. Is -DLLVM_INCLUDE_TOOLS=ON not tested on any of the LLVM windows bots? I would have expected this to have made a bot red within hours and been reverted immediately.

ashay commented 1 year ago

Is -DLLVM_INCLUDE_TOOLS=ON not tested on any of the LLVM windows bots? I would have expected this to have made a bot red within hours and been reverted immediately.

Stella discovered last night that the MSVC compiler version is different between the pre-merge checks (v14.29, which is from Visual Studio 2019) and our internal builds (v14.33 and v14.34, which are from Visual Studio 2022). I am about to try the older compiler version at my end to see if that is indeed the issue.

I had heard of previous efforts to update the Windows build bots, but folks ran into issues that I fail to recollect. I will check if that effort can be restarted now.

powderluv commented 1 year ago

Should we revert though while we fix the bot? Or we can temporarily sync to a newer commit. It's hard for me to quantify in this message the hardship this is causing downstream.

If we are not able to revert this today we will have to create a temporary fork to bump LLVM to ship to our downstream users which brings its own headaches.

silvasean commented 1 year ago

Yes let's please revert this patch immediately.

ashay commented 1 year ago

I'm in favor of reverting the patch. The alternative (creating a temporary fork) would be a lot of trouble.

silvasean commented 1 year ago

Is -DLLVM_INCLUDE_TOOLS=ON not tested on any of the LLVM windows bots? I would have expected this to have made a bot red within hours and been reverted immediately.

Stella discovered last night that the MSVC compiler version is different between the pre-merge checks (v14.29, which is from Visual Studio 2019) and our internal builds (v14.33 and v14.34, which are from Visual Studio 2022). I am about to try the older compiler version at my end to see if that is indeed the issue.

I had heard of previous efforts to update the Windows build bots, but folks ran into issues that I fail to recollect. I will check if that effort can be restarted now.

Is the compiler version the problem here? Unless MSVC v14.29 is miscompiling LLVM, I think the -DLLVM_INCLUDE_TOOLS=ON tests would fail nonetheless (those tests, and tests in LLVM in general, don't ever invoke the host compiler).

ashay commented 1 year ago

Is the compiler version the problem here?

That's my hunch, but my local build is still running. I should be able to confirm soon.

I realized that LLVM_INCLUDE_TOOLS is ON by default, so the pre-merge checks did run those tests and they passed (with MSVC v14.29), which makes me think that the older compiler is miscompiling (or there's a bug in the newer compiler version).

ashay commented 1 year ago

The build took a while, but the tests ran successfully when compiled with the older compiler. The revert for the original patch is running pre-merge checks and once it is reverted, either Stella or I should be able to run the green commit checks.

qiuxiafei commented 1 year ago

Hi, guys.

I am doing the recent LLVM rebase and have a problem with the build-test (macos-arm64, in-tree, ON) job. It complains that:

CMake Error at /Users/runner/work/torch-mlir/torch-mlir/externals/llvm-project/mlir/test/CMakeLists.txt:90 (file):
  Error evaluating generator expression:

    $<TARGET_FILE:mlir_runner_utils>

  No target "mlir_runner_utils"

I tried to reproduce it on my Mac, but it works there. I can see the same clang version as in the test jobs. The output is as follows:

cmake -GNinja -Bbuild_arm64 \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_COMPILER=clang \
    -DCMAKE_CXX_COMPILER=clang++ \
    -DCMAKE_C_COMPILER_LAUNCHER=ccache \
    -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
    -DCMAKE_LINKER=lld \
    -DCMAKE_OSX_ARCHITECTURES=arm64 \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DLLVM_ENABLE_PROJECTS=mlir \
    -DLLVM_EXTERNAL_PROJECTS="torch-mlir;torch-mlir-dialects" \
    -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR="$GITHUB_WORKSPACE" \
    -DLLVM_EXTERNAL_TORCH_MLIR_DIALECTS_SOURCE_DIR="${GITHUB_WORKSPACE}/externals/llvm-external-projects/torch-mlir-dialects" \
    -DLLVM_TARGETS_TO_BUILD=AArch64 \
    -DLLVM_USE_HOST_TOOLS=ON \
    -DLLVM_ENABLE_ZSTD=OFF \
    -DMLIR_ENABLE_BINDINGS_PYTHON=ON \
    -DTORCH_MLIR_ENABLE_STABLEHLO=OFF \
    -DTORCH_MLIR_ENABLE_LTC=OFF \
    -DTORCH_MLIR_USE_INSTALLED_PYTORCH="ON" \
    -DMACOSX_DEPLOYMENT_TARGET=12.0 \
    -DPython3_EXECUTABLE="$(which python)" \
    $GITHUB_WORKSPACE/externals/llvm-project/llvm

-- The C compiler identification is AppleClang 14.0.0.14000029
-- The CXX compiler identification is AppleClang 14.0.0.14000029
-- The ASM compiler identification is Clang
-- Found assembler: /Library/Developer/CommandLineTools/usr/bin/clang
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- bolt project is disabled
-- clang project is disabled
-- clang-tools-extra project is disabled
-- compiler-rt project is disabled
-- cross-project-tests project is disabled
-- libc project is disabled
-- libclc project is disabled
-- lld project is disabled
-- lldb project is disabled
-- mlir project is enabled
-- openmp project is disabled
-- polly project is disabled
-- pstl project is disabled
-- flang project is disabled
-- torch-mlir project is enabled
-- torch-mlir-dialects project is enabled
-- Found libtool - /Library/Developer/CommandLineTools/usr/bin/libtool
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter
-- Looking for dlfcn.h
-- Looking for dlfcn.h - found
-- Looking for errno.h
-- Looking for errno.h - found
-- Looking for fcntl.h
-- Looking for fcntl.h - found
-- Looking for link.h
-- Looking for link.h - not found
-- Looking for malloc/malloc.h
-- Looking for malloc/malloc.h - found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for signal.h
-- Looking for signal.h - found
-- Looking for sys/ioctl.h
-- Looking for sys/ioctl.h - found
-- Looking for sys/mman.h
-- Looking for sys/mman.h - found
-- Looking for sys/param.h
-- Looking for sys/param.h - found
-- Looking for sys/resource.h
-- Looking for sys/resource.h - found
-- Looking for sys/stat.h
-- Looking for sys/stat.h - found
-- Looking for sys/time.h
-- Looking for sys/time.h - found
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for sysexits.h
-- Looking for sysexits.h - found
-- Looking for termios.h
-- Looking for termios.h - found
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Looking for valgrind/valgrind.h
-- Looking for valgrind/valgrind.h - not found
-- Looking for fenv.h
-- Looking for fenv.h - found
-- Looking for FE_ALL_EXCEPT
-- Looking for FE_ALL_EXCEPT - found
-- Looking for FE_INEXACT
-- Looking for FE_INEXACT - found
-- Looking for mach/mach.h
-- Looking for mach/mach.h - found
-- Looking for CrashReporterClient.h
-- Looking for CrashReporterClient.h - not found
-- Performing Test HAVE_CRASHREPORTER_INFO
-- Performing Test HAVE_CRASHREPORTER_INFO - Success
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Looking for pthread_rwlock_init in pthread
-- Looking for pthread_rwlock_init in pthread - found
-- Looking for pthread_mutex_lock in pthread
-- Looking for pthread_mutex_lock in pthread - found
-- Looking for dlopen in dl
-- Looking for dlopen in dl - found
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - not found
-- Looking for pfm_initialize in pfm
-- Looking for pfm_initialize in pfm - not found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found ZLIB: /Library/Developer/CommandLineTools/SDKs/MacOSX13.1.sdk/usr/lib/libz.tbd (found version "1.2.11")
-- Looking for compress2
-- Looking for compress2 - found
-- Found LibXml2: /Library/Developer/CommandLineTools/SDKs/MacOSX13.1.sdk/usr/lib/libxml2.tbd (found version "2.9.13")
-- Looking for xmlReadMemory
-- Looking for xmlReadMemory - found
-- Looking for histedit.h
-- Looking for histedit.h - found
-- Found LibEdit: /Library/Developer/CommandLineTools/SDKs/MacOSX13.1.sdk/usr/include (found version "2.11")
-- Performing Test Terminfo_LINKABLE
-- Performing Test Terminfo_LINKABLE - Success
-- Found Terminfo: /Library/Developer/CommandLineTools/SDKs/MacOSX13.1.sdk/usr/lib/libcurses.tbd
-- Looking for xar_open in xar
-- Looking for xar_open in xar - found
-- The xar file format has been deprecated: LLVM_HAVE_LIBXAR might be removed in the future.
-- Looking for arc4random
-- Looking for arc4random - found
-- Looking for backtrace
-- Looking for backtrace - found
-- backtrace facility detected in default set of libraries
-- Found Backtrace: /Library/Developer/CommandLineTools/SDKs/MacOSX13.1.sdk/usr/include
-- Performing Test C_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW
-- Performing Test C_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW - Success
-- Looking for __register_frame
-- Looking for __register_frame - found
-- Looking for __deregister_frame
-- Looking for __deregister_frame - found
-- Looking for __unw_add_dynamic_fde
-- Looking for __unw_add_dynamic_fde - found
-- Looking for _Unwind_Backtrace
-- Looking for _Unwind_Backtrace - found
-- Looking for getpagesize
-- Looking for getpagesize - found
-- Looking for sysconf
-- Looking for sysconf - found
-- Looking for getrusage
-- Looking for getrusage - found
-- Looking for setrlimit
-- Looking for setrlimit - found
-- Looking for isatty
-- Looking for isatty - found
-- Looking for futimens
-- Looking for futimens - found
-- Looking for futimes
-- Looking for futimes - found
-- Looking for mallctl
-- Looking for mallctl - not found
-- Looking for mallinfo
-- Looking for mallinfo - not found
-- Looking for mallinfo2
-- Looking for mallinfo2 - not found
-- Looking for malloc_zone_statistics
-- Looking for malloc_zone_statistics - found
-- Looking for getrlimit
-- Looking for getrlimit - found
-- Looking for posix_spawn
-- Looking for posix_spawn - found
-- Looking for pread
-- Looking for pread - found
-- Looking for sbrk
-- Looking for sbrk - found
-- Looking for strerror
-- Looking for strerror - found
-- Looking for strerror_r
-- Looking for strerror_r - found
-- Looking for strerror_s
-- Looking for strerror_s - not found
-- Looking for setenv
-- Looking for setenv - found
-- Performing Test HAVE_STRUCT_STAT_ST_MTIMESPEC_TV_NSEC
-- Performing Test HAVE_STRUCT_STAT_ST_MTIMESPEC_TV_NSEC - Success
-- Performing Test HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC
-- Performing Test HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC - Failed
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - not found
-- Looking for pthread_getname_np
-- Looking for pthread_getname_np - found
-- Looking for pthread_setname_np
-- Looking for pthread_setname_np - found
-- Looking for dlopen
-- Looking for dlopen - found
-- Looking for dladdr
-- Looking for dladdr - found
-- Looking for proc_pid_rusage
-- Looking for proc_pid_rusage - found
-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB
-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB - Success
-- Performing Test HAVE_CXX_ATOMICS64_WITHOUT_LIB
-- Performing Test HAVE_CXX_ATOMICS64_WITHOUT_LIB - Success
-- Performing Test LLVM_HAS_ATOMICS
-- Performing Test LLVM_HAS_ATOMICS - Success
-- Performing Test SUPPORTS_VARIADIC_MACROS_FLAG
-- Performing Test SUPPORTS_VARIADIC_MACROS_FLAG - Success
-- Performing Test SUPPORTS_GNU_ZERO_VARIADIC_MACRO_ARGUMENTS_FLAG
-- Performing Test SUPPORTS_GNU_ZERO_VARIADIC_MACRO_ARGUMENTS_FLAG - Success
-- Native target architecture is AArch64
-- Threads enabled.
-- Doxygen disabled.
-- Ninja version: 1.11.1.git.kitware.jobserver-1
-- Found ld64 - /Library/Developer/CommandLineTools/usr/bin/ld
-- Could NOT find OCaml (missing: OCAMLFIND OCAML_VERSION OCAML_STDLIB_PATH)
-- Could NOT find OCaml (missing: OCAMLFIND OCAML_VERSION OCAML_STDLIB_PATH)
-- OCaml bindings disabled.
-- Found Python module pygments
-- Found Python module pygments.lexers.c_cpp
-- Found Python module yaml
-- LLVM host triple: arm64-apple-darwin22.3.0
-- LLVM default target triple: arm64-apple-darwin22.3.0
-- Performing Test C_SUPPORTS_FPIC
-- Performing Test C_SUPPORTS_FPIC - Success
-- Performing Test CXX_SUPPORTS_FPIC
-- Performing Test CXX_SUPPORTS_FPIC - Success
-- Building with -fPIC
-- Performing Test C_SUPPORTS_FNO_SEMANTIC_INTERPOSITION
-- Performing Test C_SUPPORTS_FNO_SEMANTIC_INTERPOSITION - Failed
-- Performing Test CXX_SUPPORTS_FNO_SEMANTIC_INTERPOSITION
-- Performing Test CXX_SUPPORTS_FNO_SEMANTIC_INTERPOSITION - Failed
-- Performing Test SUPPORTS_FVISIBILITY_INLINES_HIDDEN_FLAG
-- Performing Test SUPPORTS_FVISIBILITY_INLINES_HIDDEN_FLAG - Success
-- Performing Test C_SUPPORTS_WERROR_DATE_TIME
-- Performing Test C_SUPPORTS_WERROR_DATE_TIME - Success
-- Performing Test CXX_SUPPORTS_WERROR_DATE_TIME
-- Performing Test CXX_SUPPORTS_WERROR_DATE_TIME - Success
-- Performing Test CXX_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW
-- Performing Test CXX_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW - Success
-- Performing Test CXX_SUPPORTS_MISSING_FIELD_INITIALIZERS_FLAG
-- Performing Test CXX_SUPPORTS_MISSING_FIELD_INITIALIZERS_FLAG - Success
-- Performing Test C_SUPPORTS_CXX98_COMPAT_EXTRA_SEMI_FLAG
-- Performing Test C_SUPPORTS_CXX98_COMPAT_EXTRA_SEMI_FLAG - Success
-- Performing Test CXX_SUPPORTS_CXX98_COMPAT_EXTRA_SEMI_FLAG
-- Performing Test CXX_SUPPORTS_CXX98_COMPAT_EXTRA_SEMI_FLAG - Success
-- Performing Test C_SUPPORTS_IMPLICIT_FALLTHROUGH_FLAG
-- Performing Test C_SUPPORTS_IMPLICIT_FALLTHROUGH_FLAG - Success
-- Performing Test CXX_SUPPORTS_IMPLICIT_FALLTHROUGH_FLAG
-- Performing Test CXX_SUPPORTS_IMPLICIT_FALLTHROUGH_FLAG - Success
-- Performing Test C_SUPPORTS_COVERED_SWITCH_DEFAULT_FLAG
-- Performing Test C_SUPPORTS_COVERED_SWITCH_DEFAULT_FLAG - Success
-- Performing Test CXX_SUPPORTS_COVERED_SWITCH_DEFAULT_FLAG
-- Performing Test CXX_SUPPORTS_COVERED_SWITCH_DEFAULT_FLAG - Success
-- Performing Test CXX_SUPPORTS_CLASS_MEMACCESS_FLAG
-- Performing Test CXX_SUPPORTS_CLASS_MEMACCESS_FLAG - Failed
-- Performing Test CXX_SUPPORTS_NOEXCEPT_TYPE_FLAG
-- Performing Test CXX_SUPPORTS_NOEXCEPT_TYPE_FLAG - Success
-- Performing Test CXX_WONT_WARN_ON_FINAL_NONVIRTUALDTOR
-- Performing Test CXX_WONT_WARN_ON_FINAL_NONVIRTUALDTOR - Success
-- Performing Test CXX_SUPPORTS_SUGGEST_OVERRIDE_FLAG
-- Performing Test CXX_SUPPORTS_SUGGEST_OVERRIDE_FLAG - Success
-- Performing Test CXX_WSUGGEST_OVERRIDE_ALLOWS_ONLY_FINAL
-- Performing Test CXX_WSUGGEST_OVERRIDE_ALLOWS_ONLY_FINAL - Success
-- Performing Test C_WCOMMENT_ALLOWS_LINE_WRAP
-- Performing Test C_WCOMMENT_ALLOWS_LINE_WRAP - Success
-- Performing Test C_SUPPORTS_STRING_CONVERSION_FLAG
-- Performing Test C_SUPPORTS_STRING_CONVERSION_FLAG - Success
-- Performing Test CXX_SUPPORTS_STRING_CONVERSION_FLAG
-- Performing Test CXX_SUPPORTS_STRING_CONVERSION_FLAG - Success
-- Performing Test C_SUPPORTS_MISLEADING_INDENTATION_FLAG
-- Performing Test C_SUPPORTS_MISLEADING_INDENTATION_FLAG - Success
-- Performing Test CXX_SUPPORTS_MISLEADING_INDENTATION_FLAG
-- Performing Test CXX_SUPPORTS_MISLEADING_INDENTATION_FLAG - Success
-- Performing Test C_SUPPORTS_CTAD_MAYBE_UNSPPORTED_FLAG
-- Performing Test C_SUPPORTS_CTAD_MAYBE_UNSPPORTED_FLAG - Success
-- Performing Test CXX_SUPPORTS_CTAD_MAYBE_UNSPPORTED_FLAG
-- Performing Test CXX_SUPPORTS_CTAD_MAYBE_UNSPPORTED_FLAG - Success
-- Performing Test LINKER_SUPPORTS_COLOR_DIAGNOSTICS
-- Performing Test LINKER_SUPPORTS_COLOR_DIAGNOSTICS - Failed
-- Looking for os_signpost_interval_begin
-- Looking for os_signpost_interval_begin - found
-- Performing Test macos_signposts_usable
-- Performing Test macos_signposts_usable - Success
-- Linker detection: ld64
-- Setting native build dir to /Users/qiuxiafei/workspace/torch-mlir/build_arm64/NATIVE
-- Performing Test HAS_WERROR_GLOBAL_CTORS
-- Performing Test HAS_WERROR_GLOBAL_CTORS - Success
-- Performing Test LLVM_HAS_NOGLOBAL_CTOR_MUTEX
-- Performing Test LLVM_HAS_NOGLOBAL_CTOR_MUTEX - Failed
-- Looking for __x86_64__
-- Looking for __x86_64__ - not found
-- Found Git: /usr/bin/git (found version "2.37.1 (Apple Git-137.1)")
-- Targeting AArch64
-- Performing Test C_SUPPORTS_WERROR_IMPLICIT_FUNCTION_DECLARATION
-- Performing Test C_SUPPORTS_WERROR_IMPLICIT_FUNCTION_DECLARATION - Success
-- Performing Test C_SUPPORTS_WERROR_MISMATCHED_TAGS
-- Performing Test C_SUPPORTS_WERROR_MISMATCHED_TAGS - Success
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development Development.Module Development.Embed
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development.Module NumPy
-- Found python include dirs: /Users/qiuxiafei/miniforge3/envs/py38/include/python3.8
-- Found python libraries: /Users/qiuxiafei/miniforge3/envs/py38/lib/libpython3.8.dylib
-- Found numpy v1.20.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/numpy/core/include
-- Checking for pybind11 in python path...
-- found (/Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/share/cmake/pybind11)
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Performing Test HAS_FLTO_THIN
-- Performing Test HAS_FLTO_THIN - Success
-- Found pybind11: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/include (found version "2.10.3")
-- Found pybind11 v2.10.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/include
-- Python prefix = '', suffix = '', extension = '.cpython-38-darwin.so
-- Performing Test C_SUPPORTS_WERROR_GLOBAL_CONSTRUCTOR
-- Performing Test C_SUPPORTS_WERROR_GLOBAL_CONSTRUCTOR - Success
-- Performing Test CXX_SUPPORTS_WERROR_GLOBAL_CONSTRUCTOR
-- Performing Test CXX_SUPPORTS_WERROR_GLOBAL_CONSTRUCTOR - Success
-- Performing Test COMPILER_SUPPORTS_WARNING_WEAK_VTABLES
-- Performing Test COMPILER_SUPPORTS_WARNING_WEAK_VTABLES - Success
-- LTC Backend build is disabled
-- Adding LLVM external project torch-mlir-dialects (TORCH_MLIR_DIALECTS) -> /Users/qiuxiafei/workspace/torch-mlir/externals/llvm-external-projects/torch-mlir-dialects
-- Torch-MLIR in-tree build.
-- Building torch-mlir project at /Users/qiuxiafei/workspace/torch-mlir (into /Users/qiuxiafei/workspace/torch-mlir/build_arm64/tools/torch-mlir)
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development Development.Module Development.Embed
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development.Module NumPy
-- Found python include dirs: /Users/qiuxiafei/miniforge3/envs/py38/include/python3.8
-- Found python libraries: /Users/qiuxiafei/miniforge3/envs/py38/lib/libpython3.8.dylib
-- Found numpy v1.20.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/numpy/core/include
-- Checking for pybind11 in python path...
-- found (/Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/share/cmake/pybind11)
-- Found pybind11: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/include (found version "2.10.3")
-- Found pybind11 v2.10.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/include
-- Python prefix = '', suffix = '', extension = '.cpython-38-darwin.so
-- Checking for PyTorch using /Users/qiuxiafei/miniforge3/envs/py38/bin/python ...
-- Found PyTorch installation at /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake
-- Checking PyTorch ABI settings...
CMake Warning at /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
  static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
  /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:127 (append_torchlib_if_found)
  ../../../python/torch_mlir/csrc/reference_lazy_backend/CMakeLists.txt:15 (find_package)

-- Found Torch: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/lib/libtorch.dylib (Required is at least version "1.11")
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development Development.Module Development.Embed
-- Found Python3: /Users/qiuxiafei/miniforge3/envs/py38/bin/python (found suitable version "3.8.8", minimum required is "3.6") found components: Interpreter Development.Module NumPy
-- Found python include dirs: /Users/qiuxiafei/miniforge3/envs/py38/include/python3.8
-- Found python libraries: /Users/qiuxiafei/miniforge3/envs/py38/lib/libpython3.8.dylib
-- Found numpy v1.20.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/numpy/core/include
-- Using explicit pybind11 cmake directory: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/share/cmake/pybind11 (-Dpybind11_DIR to change)
-- Found pybind11 v2.10.3: /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/pybind11/include
-- Python prefix = '', suffix = '', extension = '.cpython-38-darwin.so
-- Using cached Torch root = /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake
-- Checking PyTorch ABI settings...
CMake Warning at /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
  static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
  /Users/qiuxiafei/miniforge3/envs/py38/lib/python3.8/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:127 (append_torchlib_if_found)
  ../../../python/torch_mlir/dialects/torch/importer/jit_ir/CMakeLists.txt:15 (find_package)

-- libtorch_python CXXFLAGS is ...
-- TORCH_CXXFLAGS=
-- TORCH_CXXFLAGS=
-- Building torch-mlir-dialects project at /Users/qiuxiafei/workspace/torch-mlir/externals/llvm-external-projects/torch-mlir-dialects (into /Users/qiuxiafei/workspace/torch-mlir/build_arm64/tools/torch-mlir-dialects)
-- torch-mlir-dialect is being built in-tree
-- Registering ExampleIRTransforms as a pass plugin (static build: OFF)
-- Registering Bye as a pass plugin (static build: OFF)
-- git version: v0.0.0-dirty normalized to 0.0.0
-- Version: 1.6.0
-- Looking for shm_open in rt
-- Looking for shm_open in rt - not found
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_WSUGGEST_OVERRIDE
-- Performing Test HAVE_CXX_FLAG_WSUGGEST_OVERRIDE - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
-- Performing Test HAVE_CXX_FLAG_FNO_EXCEPTIONS
-- Performing Test HAVE_CXX_FLAG_FNO_EXCEPTIONS - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Success
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    MACOSX_DEPLOYMENT_TARGET

-- Build files have been written to: /Users/qiuxiafei/workspace/torch-mlir/build_arm64

Meanwhile, looking into the logs of the other jobs, they both pass the cmake configuration phase successfully. Do you have any advice? Thanks ~ @vivekkhandelwal1 @tanyokwok @powderluv @silvasean