ggerganov / llama.cpp

LLM inference in C/C++
MIT License
67.65k stars 9.71k forks

idk wth is happening help #713

Closed Kilgorio closed 1 year ago

Kilgorio commented 1 year ago

PS C:\Users\Admin> cd D:\Software\GPT4ALL\llama.cpp
PS D:\Software\GPT4ALL\llama.cpp> make
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:2: pipe: No error
process_begin: CreateProcess(NULL, uname -p, ...) failed.
Makefile:6: pipe: No error
process_begin: CreateProcess(NULL, uname -m, ...) failed.
Makefile:10: pipe: No error
'cc' is not recognized as an internal or external command,
operable program or batch file.
'head' is not recognized as an internal or external command,
operable program or batch file.
I llama.cpp build info:
I UNAME_S:
I UNAME_P:
I UNAME_M:
I CFLAGS:   -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function
I LDFLAGS:
I CC:
I CXX:

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2 -c ggml.c -o ggml.o
process_begin: CreateProcess(NULL, cc -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2 -c ggml.c -o ggml.o, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [Makefile:229: ggml.o] Error 2

MillionthOdin16 commented 1 year ago

This is definitely not the proper format. Look at the issue template, look at other issues, and put some effort into explaining your issue before expecting others to pick up the slack.

Kilgorio commented 1 year ago

I looked at it and I said

"I don't have time for that"

slaren commented 1 year ago

You need to use CMake to build on Windows.
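
As a sketch of what a CMake build looks like here (assuming CMake and a Visual Studio toolchain are installed; the repository path is taken from the log above):

```shell
# From the repository root
cd D:\Software\GPT4ALL\llama.cpp

# Generate build files into a "build" directory, then compile.
# CMake picks an installed generator (e.g. Visual Studio) automatically;
# --config Release matters for multi-config generators like Visual Studio.
cmake -B build
cmake --build build --config Release
```

Unlike the Makefile, CMake does not invoke `uname` or assume a Unix `cc`, which is why the `make` run above fails on a stock Windows shell.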

Kilgorio commented 1 year ago

but how

Kilgorio commented 1 year ago

I am only doing this to get tokenizer.model, so I can convert GPT4All to work with KoboldAI, and through that with TavernAI. I think I will get tokenizer.model this way, because I don't currently have it.

slaren commented 1 year ago

The tokenizer.model file is part of the original LLaMA model distribution, and you won't get it by compiling this project. We should include build instructions for Windows in the README, but for now you can use one of the pre-compiled binaries available at https://github.com/ggerganov/llama.cpp/tags

Kilgorio commented 1 year ago

I still can't find it

slaren commented 1 year ago

@Kilgorio unfortunately I cannot tell you where to find the original llama models as that is explicitly against the policy of this project. You will have to look for that elsewhere.