Open paziewskib opened 2 months ago
I'm suffering from exactly the same error as the first one. Waiting for a response.
@paziewskib
that looks like the right place.
Can you try rm -rf cmake-out
and compiling again?
@mergennachin
So I did it like this:
1st attempt: rm -rf cmake-out, and after that:
cmake -DPYTHON_EXECUTABLE=python \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DEXECUTORCH_ENABLE_LOGGING=1 \
-DCMAKE_BUILD_TYPE=Release \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-DEXECUTORCH_BUILD_XNNPACK=ON \
-DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-Bcmake-out .
cmake --build cmake-out -j16 --target install --config Release
and
cmake -DPYTHON_EXECUTABLE=python \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DCMAKE_BUILD_TYPE=Release \
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DEXECUTORCH_BUILD_XNNPACK=ON \
-DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
-Bcmake-out/examples/models/llama2 \
examples/models/llama2
cmake --build cmake-out/examples/models/llama2 -j16 --config Release
Results: nothing changed; the same error as above with undefined reference to
google::FlagRegisterer::FlagRegisterer
2nd attempt:
rm -rf cmake-out
and after that only the 2nd point of step 4 from the instructions:
cmake -DPYTHON_EXECUTABLE=python \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DCMAKE_BUILD_TYPE=Release \
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DEXECUTORCH_BUILD_XNNPACK=ON \
-DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
-Bcmake-out/examples/models/llama2 \
examples/models/llama2
cmake --build cmake-out/examples/models/llama2 -j16 --config Release
Results: errors:
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Warning (dev) at CMakeLists.txt:85 (find_package):
Policy CMP0144 is not set: find_package uses upper-case <PACKAGENAME>_ROOT
variables. Run "cmake --help-policy CMP0144" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
CMake variable EXECUTORCH_ROOT is set to:
/home/b.paziewski/conda_executorch/executorch/examples/models/llama2/../../..
For compatibility, find_package is ignoring the variable, but code in a
.cmake module might still use it.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Error at CMakeLists.txt:85 (find_package):
Could not find a package configuration file provided by "executorch" with
any of the following names:
executorchConfig.cmake
executorch-config.cmake
Add the installation prefix of "executorch" to CMAKE_PREFIX_PATH or set
"executorch_DIR" to a directory containing one of the above files. If
"executorch" provides a separate development package or SDK, be sure it has
been installed.
-- Configuring incomplete, errors occurred!
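As a hedged sketch (not a verified fix): the error above means CMake cannot find executorchConfig.cmake, which is generated by the top-level build's install step, so that step must have completed before configuring the example. If it has, the install prefix can also be passed explicitly via CMAKE_PREFIX_PATH, as the error message itself suggests (the prefix path below assumes the install went into ./cmake-out at the repo root):

```
cmake -DPYTHON_EXECUTABLE=python \
      -DCMAKE_INSTALL_PREFIX=cmake-out \
      -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_PREFIX_PATH="$(pwd)/cmake-out" \
      -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
      -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
      -DEXECUTORCH_BUILD_XNNPACK=ON \
      -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
      -Bcmake-out/examples/models/llama2 \
      examples/models/llama2
```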
Facing the same issue.
Any ideas?
I've tried to follow the instructions once again and another problem appeared in Step 3: Evaluate model accuracy. After executing
python -m examples.models.llama2.eval_llama -c llama-2-7b/consolidated.00.pth -t tokenizer.model -p llama-2-7b/params.json -d fp32 --max_seq_len 2048 --limit 1000
the system returned
TypeError: HFLM.__init__() missing 1 required positional argument: 'pretrained'
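This failure mode can be reproduced with a minimal stub: a class whose __init__ takes a required positional argument raises exactly this TypeError when constructed without it. HFLMStub below is a hypothetical stand-in, not the real lm-eval HFLM class; it only demonstrates the error pattern, which suggests the eval script is not forwarding the model argument the wrapper expects.

```python
# HFLMStub is a hypothetical stand-in for lm-eval's HFLM wrapper,
# which requires a `pretrained` model name/path at construction time.
class HFLMStub:
    def __init__(self, pretrained):
        self.pretrained = pretrained

try:
    HFLMStub()  # constructed without 'pretrained', as the traceback implies
except TypeError as err:
    # e.g. "HFLMStub.__init__() missing 1 required positional argument: 'pretrained'"
    print(err)
```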
Hello, I can't run step 4 of the instructions available at https://github.com/pytorch/executorch/tree/main/examples/models/llama2
When I run point 2, "Build llama runner", I get an error:
Based on Common Issues and Mitigations, I added two lines of code to
examples/models/llama2/CMakeLists.txt
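For context, mitigation lines for an "undefined reference to google::FlagRegisterer" link error are typically gflags-related. The following is only a hypothetical sketch of what such lines might look like; the exact two lines come from the Common Issues and Mitigations section, and the target name llama_main is an assumption:

```
# Hypothetical sketch only: link gflags into the runner target to resolve
# "undefined reference to google::FlagRegisterer". Target name is assumed.
find_package(gflags REQUIRED)
target_link_libraries(llama_main PRIVATE gflags)
```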
After that, when I run
cmake --build cmake-out/examples/models/llama2 -j16 --config Release
the script returned another error. Do you have any advice on how I can run it? Maybe I added the lines to the wrong file? Should I add something more to
CMakeLists.txt
?