Closed · hiqsociety · closed 9 months ago
It runs, but I can't generate with the same context size (in tokens) as I can without Go. Why?
With an RTX 4060 I can do 1920 max tokens using pure llama.cpp with 100% CUDA offload.
With go-llama I can only get to a context size of around 650 before hitting OOM.
@mudler do you know why? How do I fix this?
Same settings as llama.cpp, but in Go...
```go
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"runtime"

	llama "github.com/go-skynet/go-llama.cpp"
)

var (
	model     string
	gpulayers int
	threads   int
	tokens    int
	seed      int
)

func main() {
	flags := flag.NewFlagSet(os.Args[0], flag.ExitOnError)
	flags.StringVar(&model, "m", "./models/7B/ggml-model-q4_0.bin", "path to q4_0.bin model file to load")
	flags.IntVar(&gpulayers, "ngl", 0, "Number of GPU layers to use")
	flags.IntVar(&threads, "t", runtime.NumCPU(), "number of threads to use during computation")
	flags.IntVar(&tokens, "n", 1900, "number of tokens to predict")
	flags.IntVar(&seed, "s", -1, "predict RNG seed, -1 for random seed")

	err := flags.Parse(os.Args[1:])
	if err != nil {
		fmt.Printf("Parsing program arguments failed: %s", err)
		os.Exit(1)
	}

	// Load the model with f16 memory, embeddings, and GPU offload.
	l, err := llama.New(model, llama.EnableF16Memory, llama.SetContext(655), llama.EnableEmbeddings, llama.SetGPULayers(gpulayers))
	if err != nil {
		fmt.Println("Loading the model failed:", err.Error())
		os.Exit(1)
	}
	fmt.Printf("Model loaded successfully.\n")

	reader := bufio.NewReader(os.Stdin)
	for {
		// readMultiLineInput is the stdin helper from the go-llama.cpp example (omitted here).
		text := readMultiLineInput(reader)

		_, err := l.Predict(text, llama.Debug, llama.SetTokenCallback(func(token string) bool {
			fmt.Print(token)
			return true
		}), llama.SetTokens(tokens), llama.SetTemperature(0.3), llama.SetMirostat(2),
			llama.SetThreads(threads), llama.SetTopK(90), llama.SetTopP(0.86), llama.SetSeed(seed))
		if err != nil {
			panic(err)
		}

		embeds, err := l.Embeddings(text)
		if err != nil {
			fmt.Printf("Embeddings: error %s \n", err.Error())
		}
		_ = embeds // snippet was truncated here in the original post
	}
}
```
There are many parameters to set - what's the batch size you are using? Is f16 enabled?
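For reference, a minimal sketch of where those two knobs sit when loading the model on the go-llama side, reusing `model` and `gpulayers` from the snippet above (the `llama.SetNBatch` option name and the concrete values are assumptions on my part, check options.go of the go-llama.cpp revision you build against):

```go
// Hypothetical tuning sketch: smaller batch plus f16 KV cache to reduce VRAM use.
// llama.SetNBatch and the values below are assumptions, not taken from this issue.
l, err := llama.New(model,
	llama.EnableF16Memory,         // store the KV cache in f16 instead of f32
	llama.SetContext(650),         // context window reserved up front
	llama.SetNBatch(256),          // smaller eval batch -> smaller scratch buffers
	llama.SetGPULayers(gpulayers), // layers offloaded to the GPU
)
if err != nil {
	fmt.Println("Loading the model failed:", err.Error())
	os.Exit(1)
}
```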