google / ml-compiler-opt

Infrastructure for Machine Learning Guided Optimization (MLGO) in LLVM.
Apache License 2.0

Getting started #302

Open reedkotler opened 11 months ago

reedkotler commented 11 months ago

My first interest is to just use pretrained models and do some performance analysis.

I see the pretrained model for inlining-for-size.

Is there a pretrained model for register-allocation-for-performance?

If so, how do I enable it?

TIA.

boomanaiden154 commented 11 months ago

The register allocation for performance release is available at https://github.com/google/ml-compiler-opt/releases/tag/regalloc-evict-v1.0. If you set the model path to the URL while compiling LLVM, it should download and embed the release into the build (assuming you have the necessary prerequisites, like TensorFlow, installed), so that you can then enable it in the compilation of your project with the appropriate -mllvm flags.
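
Concretely, the end-to-end flow looks roughly like the sketch below. This is only a sketch: the cmake variable names and the release URL are the ones that come up later in this thread, and the -mllvm option names are the upstream LLVM advisor flags, so double-check them against your LLVM revision.

# 1. Configure LLVM so it downloads and AOT-embeds the released regalloc model.
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS="clang" \
  -DTENSORFLOW_AOT_PATH=/path/to/site-packages/tensorflow \
  -DLLVM_RAEVICT_MODEL_PATH=https://github.com/google/ml-compiler-opt/releases/download/regalloc-evict-v1.0/regalloc-evict-e67430c-v1.0.tar.gz \
  /path/to/llvm-project/llvm
ninja clang

# 2. Opt in when compiling your own project with the resulting clang.
clang++ -O2 -mllvm -regalloc-enable-advisor=release -c foo.cpp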

reedkotler commented 11 months ago

Has anyone built the two demos recently from the instructions in the README.md? TIA.

reedkotler commented 11 months ago

python 3.8.x/3.9.x/3.10.x -- I have Python 3.8 installed, but if I run python -V it reports 3.7.3, while python3.8 -V shows 3.8.12. Is this sufficient? Will the scripts find the right things? TIA, Reed

reedkotler commented 11 months ago

Sorry for so many questions, but I'm under some huge time pressure from management. Is Debian 10 going to work? The instructions say Ubuntu 20.

mtrofin commented 11 months ago

python: you need at least 3.8. One good way to tell if things work is by running the tests. You can create a venv (https://docs.python.org/3/library/venv.html) using your python3.8 - like python3.8 -m venv /where/you/want/it (see that link for more about how to use a venv).
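
For example, a minimal setup could look like this (the venv path is just a placeholder, and the pipenv step is the one mentioned later in this thread):

# Create and activate a venv backed by Python 3.8.
python3.8 -m venv ~/mlgo-venv
source ~/mlgo-venv/bin/activate
python -V        # should now report 3.8.x inside the venv

# Install the repo's dependencies into the active environment.
pip install pipenv
pipenv sync --system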

OS: Debian 10 should work, our bots run off 10.2.1-6.

If all you need is to try out the pre-trained models, the easiest is to do the following (basically what buildbot/buildbot_init.sh does):

If you echo $TF_PIP it should say something like /work/python3.8/lib/python3.8/site-packages/tensorflow
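
If you haven't set it yet, one way to derive TF_PIP from the active environment (the same one-liner that shows up later in this thread) is:

export TF_PIP=$(python3 -c "import tensorflow; import os; print(os.path.dirname(tensorflow.__file__))")
echo $TF_PIP     # e.g. /work/python3.8/lib/python3.8/site-packages/tensorflow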

cmake /work/llvm-project/llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DTENSORFLOW_AOT_PATH=${TF_PIP} \
  -DLLVM_ENABLE_PROJECTS="clang" -GNinja \
  -DLLVM_INLINER_MODEL_PATH=download \
  -DLLVM_INLINER_MODEL_CURRENT_URL=https://github.com/google/ml-compiler-opt/releases/download/inlining-Oz-v1.1/inlining-Oz-99f0063-v1.1.tar.gz \
  -DLLVM_RAEVICT_MODEL_PATH=download \
  -DLLVM_RAEVICT_MODEL_CURRENT_URL=https://github.com/google/ml-compiler-opt/releases/download/regalloc-evict-v1.0/regalloc-evict-e67430c-v1.0.tar.gz \
  -DPython3_ROOT_DIR=/work/python3.8

Of course, adjusting -DLLVM_ENABLE_PROJECTS as you need.

The MLGO-related flags are -DTENSORFLOW_AOT_PATH and the 2 pairs about the models (download, and "from where"). The last flag is needed because we use a venv and cmake ignores that by default (despite you running within the activated environment) and would pick the highest-version Python in its default environment. We need the right Python to run during the build because we compile the saved models to .h + .o files, and that compiler, which ships in the tensorflow pip package, is invoked through a Python wrapper.
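
If in doubt, a quick way to confirm which interpreter cmake actually picked, and then kick off the build (Python3_EXECUTABLE is the standard FindPython3 cache entry, and clang is a regular Ninja target):

grep Python3_EXECUTABLE CMakeCache.txt   # should point into /work/python3.8 (or your venv)
ninja clang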

reedkotler commented 11 months ago

Thanks. Let me try this.

I will eventually need to train this on something, but I want to start from this simple place.

Reed



reedkotler commented 11 months ago

Hi Mircea,

Are you sure that your bots are running 10.2.1-6?

Some of the packages in your bot script are Debian 11 packages.

Reed



boomanaiden154 commented 11 months ago

The specific information on what is on the buildbot is available on https://lab.llvm.org. If you go to https://lab.llvm.org/buildbot/#/builders, search for ml-opt, click on one of the three ml-opt-* builders, then click on one of the builds (like https://lab.llvm.org/buildbot/#/builders/6/builds/33159), you can select the Worker: tab, which will give you the information buildbot records about the worker. In this case, it's the following:

How to reproduce locally: https://github.com/google/ml-compiler-opt/wiki/BuildBotReproduceLocally
Linux ml-opt-devrel-x86-64-b1 5.10.0-14-cloud-amd64 #1 SMP Debian 5.10.113-1 (2022-04-29) x86_64 GNU/Linux
Fri Sep 29 20:08:46 UTC 2023
cmake version 3.25.1
g++ (Debian 10.2.1-6) 10.2.1 20210110
GNU gold (GNU Binutils for Debian 2.35.2) 1.16
Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 2 Core(s) per socket: 12 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 63 Model name: Intel(R) Xeon(R) CPU @ 2.30GHz Stepping: 0 CPU MHz: 2299.998 BogoMIPS: 4599.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 384 KiB L1i cache: 384 KiB L2 cache: 3 MiB L3 cache: 45 MiB NUMA node0 CPU(s): 0-23 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities

Which does show Debian 10.2.1-6.

reedkotler commented 11 months ago

Thanks. I see 10.2.1-6 twice, but not in answer to what the OS is. It seems to be in reference to the packages being used; I'm not sure how to read this. It looks like g++ is from Debian 10.2.1-6, and then it appears again in the reference to GNU gold.

When I try and run the build script on my Debian 10 machine, I get:

E: Unable to locate package python-is-python3
E: Release 'bullseye-backports' for 'cmake' was not found
E: Release 'bullseye-backports' for 'cmake-data' was not found
E: Unable to locate package libpthreadpool-dev

For example, https://packages.debian.org/unstable/python-is-python3 seems to say that python-is-python3 is a new Debian 11 package.

The instructions in https://github.com/google/ml-compiler-opt ask for Ubuntu 20, and Ubuntu 20 is from Debian 11.
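
A quick way to check what a given machine can actually install (plain apt tooling; the package names are the ones from the errors above):

cat /etc/debian_version                      # which Debian release this box is on
apt-cache policy python-is-python3 cmake     # shows candidate versions (if the package exists for this release)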

I'm not sure why, but all the machines we can use at work are Debian 10. Unfortunately the whole company is in China (ByteDance) and they are on vacation for a week for a Chinese holiday.

I made a Parallels machine at home on my Mac, installed Ubuntu on it, and built a compiler with it.

I asked Bard how to build this (lol) and it gave me:

https://g.co/bard/share/051523490660

Do these instructions seem to be correct?

TIA.



reedkotler commented 11 months ago

I guess I can try all the buildbot instructions after the package installs on my Ubuntu 20 Parallels machine. I'll try that while waiting to see if there is any response. I'm on California time.

I appreciate all the help in getting through this initial setup.



reedkotler commented 11 months ago

I don't see any clang builds in the buildbot script.

Is it already pre-built on the machine it's running on?

I'm not clear exactly how to build clang in ML 'development-mode'.

TIA



reedkotler commented 11 months ago

In the Fuchsia demo instructions, I see:

cmake -G Ninja \
  -DLLVM_ENABLE_LTO=OFF \
  -DLINUX_x86_64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux-x64 \
  -DLINUX_aarch64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux-arm64 \
  -DFUCHSIA_SDK=${IDK_DIR} \
  -DCMAKE_INSTALL_PREFIX= \
  -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=On \
  -C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia-stage2.cmake \
  -C ${TFLITE_PATH}/tflite.cmake \
  ${LLVM_SRCDIR}/llvm

-C ${TFLITE_PATH}/tflite.cmake

Is that, I guess, all I need for the clang build to make it ML development-mode?
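
For what it's worth, my understanding (an assumption worth verifying against the demo docs) is that the -C ${TFLITE_PATH}/tflite.cmake cache is what adds development-mode support to the build, and that the mode is then selected per compilation via -mllvm flags, roughly like this for regalloc:

# Release mode: use the AOT-embedded model shipped inside clang.
clang++ -O2 -mllvm -regalloc-enable-advisor=release -c foo.cpp

# Development mode: requires the TFLite-enabled build; emits a training log.
# (-regalloc-training-log is the flag name as I recall it from the regalloc demo; double-check it.)
clang++ -O2 -mllvm -regalloc-enable-advisor=development \
  -mllvm -regalloc-training-log=/tmp/regalloc.log -c foo.cpp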



reedkotler commented 11 months ago

This seems to help if it is still accurate:

https://reviews.llvm.org/D77752



reedkotler commented 11 months ago

Downloading model https://github.com/google/ml-compiler-opt/releases/tag/regalloc-evict-v1.0
Model archive: regalloc-evict-v1.0
CMake Error: Problem with archive_read_open_file(): Unrecognized archive format
CMake Error at cmake/modules/TensorFlowCompile.cmake:19 (file):
  file failed to extract: /home/parallels/reg_alloc/ml-compiler-opt/lib/CodeGen/regalloc-evict-v1.0
Call Stack (most recent call first):
  cmake/modules/TensorFlowCompile.cmake:110 (tf_get_model)
  lib/CodeGen/CMakeLists.txt:8 (tf_find_and_compile)

(.venv) @.:~/reg_alloc/ml-compiler-opt$ python3 -c "import tensorflow; import os; print(os.path.dirname(tensorflow.__file__))"
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow
(.venv) @.:~/reg_alloc/ml-compiler-opt$

cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DTENSORFLOW_AOT_PATH=$(python3 -c "import tensorflow; import os; print(os.path.dirname(tensorflow.__file__))") -DLLVM_ENABLE_PROJECTS="clang" -DLLVM_RAEVICT_MODEL_PATH="https://github.com/google/ml-compiler-opt/releases/tag/regalloc-evict-v1.0" $WORKING_DIR/llvm-project/llvm

I installed TensorFlow, but pipenv sync --system did not work, so I don't know if there is something else I'm missing here.

TIA

Reed



reedkotler commented 11 months ago

I was able to resolve this:

cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DTENSORFLOW_AOT_PATH=$(python3 -c "import tensorflow; import os; print(os.path.dirname(tensorflow.__file__))") -DLLVM_ENABLE_PROJECTS="clang" -DLLVM_RAEVICT_MODEL_PATH="https://github.com/google/ml-compiler-opt/releases/download/regalloc-evict-v1.0/regalloc-evict-e67430c-v1.0.tar.gz" $WORKING_DIR/llvm-project/llvm

I needed to add regalloc-evict-e67430c-v1.0.tar.gz to the end of the URL (that is, point at the downloadable release asset rather than the tag page).
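
A quick sanity check that a model URL really points at an archive before wiring it into cmake (plain curl/tar, nothing project-specific):

curl -sLo /tmp/regalloc-evict.tar.gz https://github.com/google/ml-compiler-opt/releases/download/regalloc-evict-v1.0/regalloc-evict-e67430c-v1.0.tar.gz
tar -tzf /tmp/regalloc-evict.tar.gz | head   # should list archive contents; an error here means the URL isn't a tarball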



reedkotler commented 11 months ago

orflow/xla_aot_runtime_src/tensorflow/compiler/xla/service/cpu/runtime_conv2d.cc:38:35: required from here
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/tensorflow/tsl/framework/convolution/eigen_spatial_convolutions-inl.h:1068:27: error: static assertion failed: YOU_MADE_A_PROGRAMMING_MISTAKE
 1068 | EIGEN_STATIC_ASSERT((nr == 4), YOU_MADE_A_PROGRAMMING_MISTAKE)
      |                     ~~^~~
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/Eigen/src/Core/util/StaticAssert.h:26:50: note: in definition of macro ‘EIGEN_STATIC_ASSERT’
   26 | #define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
      |                                                  ^
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/tensorflow/tsl/framework/convolution/eigen_spatial_convolutions-inl.h:1068:27: note: ‘(8 == 4)’ evaluates to false
 1068 | EIGEN_STATIC_ASSERT((nr == 4), YOU_MADE_A_PROGRAMMING_MISTAKE)
      |                     ~~^~~
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/Eigen/src/Core/util/StaticAssert.h:26:50: note: in definition of macro ‘EIGEN_STATIC_ASSERT’
   26 | #define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
      |                                                  ^



reedkotler commented 11 months ago

python -c "import tensorflow as tf;print(tf. version)" 2.14.0



reedkotler commented 11 months ago

I went back and reinstalled TensorFlow so that it is 2.12, but the result was the same.

/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/tensorflow/tsl/framework/convolution/eigen_spatial_convolutions-inl.h:1068:27: error: static assertion failed: YOU_MADE_A_PROGRAMMING_MISTAKE
 1068 | EIGEN_STATIC_ASSERT((nr == 4), YOU_MADE_A_PROGRAMMING_MISTAKE)
      |                     ~~^~~
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/Eigen/src/Core/util/StaticAssert.h:26:50: note: in definition of macro ‘EIGEN_STATIC_ASSERT’
   26 | #define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
      |                                                  ^
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/tensorflow/tsl/framework/convolution/eigen_spatial_convolutions-inl.h:1068:27: note: ‘(8 == 4)’ evaluates to false
 1068 | EIGEN_STATIC_ASSERT((nr == 4), YOU_MADE_A_PROGRAMMING_MISTAKE)
      |                     ~~^~~
/home/parallels/reg_alloc/ml-compiler-opt/.venv/lib/python3.9/site-packages/tensorflow/include/Eigen/src/Core/util/StaticAssert.h:26:50: note: in definition of macro ‘EIGEN_STATIC_ASSERT’
   26 | #define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
      |                                                  ^
[4/4891] Building CXX object lib/tf_ru...e/cpu/runtime_single_threaded_fft.cc.o
ninja: build stopped: subcommand failed.



mtrofin commented 11 months ago

Is the host x86?

reedkotler commented 11 months ago

The host is not x86, it's ARM. I'm using my Mac with Parallels.

I ordered a cheap Debian x86 box that should arrive today, and that should at least be fine for building the compilers, I hope.

I'm going to see if there is something at work. Unfortunately there is a Chinese holiday for the next 7 days and almost all of ByteDance's 150,000 employees are in China. I'm not too confident that getting a Debian 11/Ubuntu 20 machine is going to be that easy. I'm sure they exist, but not in the server farms that I can use, I don't think.

