OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

MSBUILD : error MSB1009: 项目文件不存在 (The project file does not exist) #12

Open hcl242 opened 3 weeks ago

hcl242 commented 3 weeks ago

What is the issue?

go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...

CommandType     Name                 Version    Source
-----------     ----                 -------    ------
Application     gcc.exe              0.0.0.0    C:\w64devkit\bin\gcc.exe
Application     mingw32-make.exe     0.0.0.0    C:\w64devkit\bin\mingw32-make.exe

Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/w64devkit/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/w64devkit/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (1.7s)
-- Generating done (0.7s)
-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
D:\projects\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'BOOL (*)(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int (*)(void*, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
 1428 |     pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
      |                                                                                  ^~~~~~~~~~~~~~~~~~~~~~~
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 |     throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                       ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                         |                         |
      |                                                         long unsigned int         std::vector<int>::size_type {aka long long unsigned int}
      |                                                         %llu
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 |     throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                       ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                         |                         |
      |                                                         long unsigned int         std::vector<int>::size_type {aka long long unsigned int}
      |                                                         %llu
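
(The -Wcast-function-type and -Wformat messages above are compiler warnings, not the failure. 64-bit Windows is an LLP64 platform: long, and therefore %lu, is 32 bits wide while std::vector's size_type is 64 bits, which is why gcc suggests %llu. The MinGW static-library build still completes successfully just below.)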

[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
-- The C compiler identification is MSVC 19.40.33811.0
-- The CXX compiler identification is MSVC 19.40.33811.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (3.8s)
-- Generating done (0.5s)
CMake Warning:
  Manually-specified variables were not used by the project:

    LLAMA_F16C

-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
适用于 .NET Framework MSBuild 版本 17.10.4+10fbfbf2e  [MSBuild version 17.10.4+10fbfbf2e for .NET Framework]
MSBUILD : error MSB1009: 项目文件不存在。  [MSBUILD : error MSB1009: The project file does not exist.]
开关:ollama_llama_server.vcxproj  [Switch: ollama_llama_server.vcxproj]
llm\generate\generate_windows.go:3: running "powershell": exit status 1
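
For context: go generate ./... scans the package tree for //go:generate directives and runs them, so the final line above points at the directive that failed. In upstream ollama at this point, llm/generate/generate_windows.go is just a stub whose third line is that directive; the OpenBMB fork tracks upstream closely, so treat the exact contents below as an assumption:

    package generate

    //go:generate powershell -ExecutionPolicy Bypass -File ./gen_windows.ps1

gen_windows.ps1 is the script that prints the "generating config with: ..." and "building with: ..." lines above. The failing step is the MSBuild one: under the Visual Studio generator, cmake --build drives msbuild.exe with the project ollama_llama_server.vcxproj (the 开关/switch named in the error), and MSB1009 literally means "the project file does not exist". In other words, the configure step under ../build/windows/amd64/cpu never generated that project, so MSBuild had nothing to open.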

OS: Windows
GPU: Nvidia
CPU: Intel
Ollama version: No response

hyphantom commented 3 weeks ago

Your problem is the same as mine; I suggest following https://github.com/OpenBMB/ollama/issues/6#issue-2322407577

zrainbowk commented 3 weeks ago

I found that running the command cmake . inside ollama\llm\ext_server successfully generates ollama_llama_server.vcxproj and the other project files in ollama\llm\ext_server. But I don't yet know what to do after that.
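
If cmake . in ollama\llm\ext_server does produce ollama_llama_server.vcxproj, a plausible (untested) next step is to build it from that same directory with cmake --build . --config Release --target ollama_llama_server, then re-run go generate ./... and go build . from the repository root. Note, however, that the generate step drives the build from ..\build\windows\amd64\cpu rather than from ext_server, so the pointer to issue #6 above may still be the more direct fix.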