withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
https://node-llama-cpp.withcat.ai
MIT License
829 stars, 80 forks

CUDA compilation failing with VS 2022 Community #78

Closed ThomasVuillaume closed 10 months ago

ThomasVuillaume commented 10 months ago

Issue description

CUDA compilation fails with error MSB8020

Expected Behavior

Compilation with CUDA should work

Actual Behavior

I get error MSB8020 when trying to recompile with CUDA. I tested compiling with CUDA directly on the llama.cpp project with CMake, without error.

Steps to reproduce

Launching npx --no node-llama-cpp download --cuda

My Environment

Dependency Version
Operating System Windows 11
CPU AMD Ryzen 5 3600 6-Core Processor
Node.js version v18.18.2
node-llama-cpp version 2.7.3
CUDA version v12.2
Visual Studio version 2022 community

Additional Context

The logs:

vuith@THOMZ-FIXE MINGW64 ~/GitRepositories/node-llama-cpp-experiment (main)
$ npx --no node-llama-cpp download --cuda
Repo: ggerganov/llama.cpp
Release: b1378
CUDA: enabled

✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp (local bundle)  100% ████████████████████████████████████████  0s
◷ Compiling llama.cpp
Not searching for unused variables given on the command line.
CMake Error at CMakeLists.txt:3 (project):
  Failed to run MSBuild command:

    C:/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/Bin/amd64/MSBuild.exe

  to get the value of VCTargetsPath:

    MSBuild version 17.7.2+d6990bcfa for .NET Framework
    Build started 22/10/2023 18:51:39.

    Project "C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\CMakeFiles\3.28.0-rc2\VCTargetsPath.vcxproj" on node 1 (default targets).
    C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\Microsoft.CppBuild.targets(456,5): error MSB8020: The build tools for C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2 (Platform Toolset = 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2') cannot be found. To build using the C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2 build tools, please install the C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution". [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\CMakeFiles\3.28.0-rc2\VCTargetsPath.vcxproj]
    Done building project "C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\CMakeFiles\3.28.0-rc2\VCTargetsPath.vcxproj" (default targets) -- FAILED.

Relevant Features Used

Are you willing to resolve this issue by submitting a Pull Request?

No, I don't have the time, and I'm okay with waiting for the community / maintainers to resolve this issue.

giladgd commented 10 months ago

@ThomasVuillaume Does running this command without --cuda work for you?

npx --no node-llama-cpp download
ThomasVuillaume commented 10 months ago

Yes, this one's working fine:

vuith@THOMZ-FIXE MINGW64 ~/GitRepositories/node-llama-cpp-experiment (main)
$ npx --no node-llama-cpp download
Repo: ggerganov/llama.cpp
Release: b1378

✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp (local bundle)  100% ████████████████████████████████████████  0s
◷ Compiling llama.cpp
Not searching for unused variables given on the command line.
-- The C compiler identification is MSVC 19.37.32825.0
-- The CXX compiler identification is MSVC 19.37.32825.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/mingw64/bin/git.exe (found version "2.42.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (4.8s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build
MSBuild version 17.7.2+d6990bcfa for .NET Framework

  1>Checking Build System
  Generating build details from Git
  -- Found Git: C:/Program Files/Git/mingw64/bin/git.exe (found version "2.42.0.windows.2")
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  ggml.c
  ggml-alloc.c
  ggml-backend.c
  k_quants.c
  Generating Code...
  ggml.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llam
  a.cpp\ggml.dir\Release\ggml.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  llama.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1054,31): warning C4305: 'initializing': truncation from 'double' to 'float' [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1202,40): warning C4566: character represented by universal-character-name '\u0120' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1203,40): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1204,40): warning C4566: character represented by universal-character-name '\u0120' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1205,40): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(2235,60): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(9452,28): warning C4146: unary minus operator applied to unsigned type, result still unsigned [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(9482,28): warning C4146: unary minus operator applied to unsigned type, result still unsigned [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
  llama.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\lla
  ma.cpp\Release\llama.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/common/CMakeLists.txt
  common.cpp
  sampling.cpp
  console.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\console.cpp(253,30): warning C4267: 'initializing': conversion from 'size_t' to 'DWORD', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\console.cpp(407,28): warning C4267: 'initializing': conversion from 'size_t' to 'int', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
  grammar-parser.cpp
  train.cpp
  Generating Code...
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\common.cpp(785): warning C4715: 'gpt_random_prompt': not all control paths return a value [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
  common.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\ll
  ama.cpp\common\common.dir\Release\common.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  ggml_static.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\bui
  ld\llama.cpp\Release\ggml_static.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/CMake
  Lists.txt
  addon.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\log.h(379,19): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\log.h(387,97): warning C4996: 'strerror': This function or variable may be unsafe. Consider using strerror_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(339,53): warning C4267: 'argument': conversion from 'size_t' to 'int32_t', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(341,33): warning C4267: '=': conversion from 'size_t' to 'int32_t', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(415,9): warning C4996: 'llama_sample_temperature': use llama_sample_temp instead [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
  win_delay_load_hook.cc
  Generating Code...
     Creating library C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build/Release/llama-addon.lib and object C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build/Release/llama-addon.exp
  llama-addon.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\bui
  ld\Release\llama-addon.node
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/CMake
  Lists.txt
✔ Compiled llama.cpp

Repo: ggerganov/llama.cpp
Release: b1378

Done
ThomasVuillaume commented 10 months ago

Strange, because, as I said in the first message: I tested compiling with CUDA directly on the llama.cpp project with CMake, without error.

sk-1982 commented 10 months ago

Also having this issue. Compilation fails in VS 2019 and VS 2017 as well. This seems to be related to setting the option CMAKE_GENERATOR_TOOLSET. I was able to successfully compile with CUDA after removing this option.
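
For context, this is roughly the kind of argument-building logic involved. The sketch below is illustrative only: the function and option names are hypothetical, not the actual node-llama-cpp source. It shows how dropping the CMAKE_GENERATOR_TOOLSET option (CMake's -T flag) changes the generated command line, letting CMake fall back to its own CUDA detection:

```javascript
// Hypothetical sketch of how a build script might assemble CMake arguments.
// Names are illustrative, not the real compileLLamaCpp.js code.
function buildCmakeArgs({ cuda = false, cudaToolsetPath = null } = {}) {
    const args = ["-G", "Visual Studio 17 2022", "-A", "x64"];
    if (cuda) {
        args.push("-DLLAMA_CUBLAS=1");
        // The problematic option: forcing CMAKE_GENERATOR_TOOLSET (CMake's -T)
        // to the CUDA install path triggers MSB8020 on some Visual Studio
        // setups. Skipping this push(), as suggested above, avoids the error.
        if (cudaToolsetPath !== null) {
            args.push("-T", cudaToolsetPath);
        }
    }
    return args;
}

// Without the toolset override, only the generator and CUDA flags remain:
console.log(buildCmakeArgs({ cuda: true }).join(" "));
```

With cudaToolsetPath left unset, CMake is still told to enable cuBLAS but picks its own platform toolset, which matches the behavior reported above.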

ThomasVuillaume commented 10 months ago

Thanks for the tip @sk-1982! I just removed that line of code in dist\utils\compileLLamaCpp.js (in my node_modules) and it worked:

vuith@THOMZ-FIXE MINGW64 ~/GitRepositories/node-llama-cpp-experiment (main)
$ npx --no node-llama-cpp download --cuda
Repo: ggerganov/llama.cpp
Release: b1378
CUDA: enabled

✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp (local bundle)  100% ████████████████████████████████████████  0s
◷ Compiling llama.cpp
Not searching for unused variables given on the command line.
-- The C compiler identification is MSVC 19.37.32825.0
-- The CXX compiler identification is MSVC 19.37.32825.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/mingw64/bin/git.exe (found version "2.42.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.2/include (found version "12.2.140")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 12.2.140
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.2/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (22.1s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build
MSBuild version 17.7.2+d6990bcfa for .NET Framework

  1>Checking Build System
  Generating build details from Git
  -- Found Git: C:/Program Files/Git/mingw64/bin/git.exe (found version "2.42.0.windows.2")
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  Compiling CUDA source file ..\..\llama.cpp\ggml-cuda.cu...

  C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp>"C:\Progra
  m Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\nvcc.exe"  --use-local-env -ccbin "C:\Program Files\Microsoft Vis
  ual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\bin\HostX64\x64" -x cu   -I"C:\Users\vuith\GitRepositories\node-l
  lama-cpp-experiment\node_modules\node-addon-api" -I"C:\Users\vuith\.cmake-js\node-x64\v18.18.2\include\node" -I"C:\Us
  ers\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\." -I"C:\Program File
  s\NVIDIA GPU Computing Toolkit\CUDA\v12.2\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\includ
  e"     --keep-dir x64\Release -use_fast_math -maxrregcount=0   --machine 64 --compile -cudart static --generate-code=
  arch=compute_52,code=[compute_52,sm_52] --generate-code=arch=compute_61,code=[compute_61,sm_61] --generate-code=arch=
  compute_70,code=[compute_70,sm_70] -Xcompiler="/EHsc -Ob2"   -D_WINDOWS -DNDEBUG -DNAPI_VERSION=7 -DGGML_USE_K_QUANTS
   -DGGML_USE_CUBLAS -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SI
  ZE=128 -D_CRT_SECURE_NO_WARNINGS -D_XOPEN_SOURCE=600 -D"CMAKE_INTDIR=\"Release\"" -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG
   -DNAPI_VERSION=7 -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITER
  ATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -D_CRT_SECURE_NO_WARNINGS -D_XOPEN_SOURCE=600 -D"CMAKE_INTDIR=\"Release\"
  " -Xcompiler "/EHsc /W3 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdggml.dir\Release\ggml.pdb" -o ggml.dir\Release\ggml
  -cuda.obj "C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\ggml-
  cuda.cu"
  ggml-cuda.cu
  tmpxft_00004eb8_00000000-7_ggml-cuda.compute_70.cudafe1.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\ggml-cuda.cu(5668): warning C4477: 'fprintf': format string '%ld' requires an argument of type 'long', but variadic argument 1 has type 'int64_t' [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\ggml.vcxproj]
  C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\ggml-cuda.cu(5668): note: consider using '%lld' in the format string
  C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\ggml-cuda.cu(5668): note: consider using '%Id' in the format string
  C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\ggml-cuda.cu(5668): note: consider using '%I64d' in the format string
  ggml.c
  ggml-alloc.c
  ggml-backend.c
  k_quants.c
  Generating Code...
  ggml.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llam
  a.cpp\ggml.dir\Release\ggml.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  llama.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1054,31): warning C4305: 'initializing': truncation from 'double' to 'float' [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1202,40): warning C4566: character represented by universal-character-name '\u0120' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1203,40): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1204,40): warning C4566: character represented by universal-character-name '\u0120' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(1205,40): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(2235,60): warning C4566: character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(9452,28): warning C4146: unary minus operator applied to unsigned type, result still unsigned [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\llama.cpp(9482,28): warning C4146: unary minus operator applied to unsigned type, result still unsigned [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\llama.vcxproj]
  llama.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\lla
  ma.cpp\Release\llama.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/common/CMakeLists.txt
  common.cpp
  sampling.cpp
  console.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\console.cpp(253,30): warning C4267: 'initializing': conversion from 'size_t' to 'DWORD', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\console.cpp(407,28): warning C4267: 'initializing': conversion from 'size_t' to 'int', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
  grammar-parser.cpp
  train.cpp
  Generating Code...
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\common.cpp(785): warning C4715: 'gpt_random_prompt': not all control paths return a value [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama.cpp\common\common.vcxproj]
  common.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\ll
  ama.cpp\common\common.dir\Release\common.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/llama
  .cpp/CMakeLists.txt
  ggml_static.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\bui
  ld\llama.cpp\Release\ggml_static.lib
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/CMake
  Lists.txt
  addon.cpp
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\log.h(379,19): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\llama.cpp\common\log.h(387,97): warning C4996: 'strerror': This function or variable may be unsafe. Consider using strerror_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(339,53): warning C4267: 'argument': conversion from 'size_t' to 'int32_t', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(341,33): warning C4267: '=': conversion from 'size_t' to 'int32_t', possible loss of data [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\addon.cpp(415,9): warning C4996: 'llama_sample_temperature': use llama_sample_temp instead [C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\build\llama-addon.vcxproj]
  win_delay_load_hook.cc
  Generating Code...
     Creating library C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build/Release/llama-addon.lib and object C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/build/Release/llama-addon.exp
  llama-addon.vcxproj -> C:\Users\vuith\GitRepositories\node-llama-cpp-experiment\node_modules\node-llama-cpp\llama\bui
  ld\Release\llama-addon.node
  Building Custom Rule C:/Users/vuith/GitRepositories/node-llama-cpp-experiment/node_modules/node-llama-cpp/llama/CMake
  Lists.txt
✔ Compiled llama.cpp

Repo: ggerganov/llama.cpp
Release: b1378

Done
giladgd commented 10 months ago

@sk-1982 Interesting. I originally added it to solve a compilation issue with CUDA, but since it seems to create another issue, I'll remove it. Thanks for the help :)

github-actions[bot] commented 10 months ago

:tada: This issue has been resolved in version 2.7.4 :tada:

The release is available on:

Your semantic-release bot :package::rocket: