meysamhadeli / codai

Codai is an AI code assistant that helps developers through a session-based CLI, providing intelligent code suggestions, refactoring, and code reviews based on the full context of the project. It supports multiple LLMs, including GPT-4o and GPT-4, as well as local models served through Ollama, to streamline daily development tasks.
Apache License 2.0

fails on code #75

Open maxandersen opened 2 hours ago

maxandersen commented 2 hours ago

I'm running with:

ai_provider_config:
  provider_name: "ollama" # openai | ollama
  chat_completion_url: "http://localhost:11434"
  #chat_completion_model: "gpt-4o"
  chat_completion_model: "granite3-dense"
  #embedding_url: "https://api.openai.com/v1/embeddings" #(Optional, If you want use RAG.)
  embedding_url: "http://localhost:11434/v1/embeddings" #(Optional, If you want use RAG.)
  #embedding_model: "text-embedding-3-small" #(Optional, If you want use RAG.)
  embedding_model: "nomic-embed-text" #(Optional, If you want use RAG.)
  temperature: 0.2
  threshold: 0.3 #(Optional, If you want use RAG.)
theme: "dracula"
rag: true #(Optional, If you want use RAG.)

as config.yml, and when I run codai code and ask a question, it starts the RAG embedding step and I get this exception:

⠙ Embedding Context... (2s)panic: runtime error: index out of range [0] with length 0

goroutine 40 [running]:
github.com/meysamhadeli/codai/cmd.handleCodeCommand.func3.1()
        /Users/manderse/go/pkg/mod/github.com/meysamhadeli/codai@v1.6.3/cmd/code.go:127 +0xbc
github.com/meysamhadeli/codai/cmd.handleCodeCommand.func3({{0x140001f649f, 0x19}, {0x14000212300, 0x684}, {0x14000546000, 0xb5}})
        /Users/manderse/go/pkg/mod/github.com/meysamhadeli/codai@v1.6.3/cmd/code.go:132 +0xc8
created by github.com/meysamhadeli/codai/cmd.handleCodeCommand in goroutine 1
        /Users/manderse/go/pkg/mod/github.com/meysamhadeli/codai@v1.6.3/cmd/code.go:118 +0x118c
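The panic at code.go:127 is an index-out-of-range on an empty slice, which is consistent with the embedding call returning no vectors (e.g. because the configured embedding_url doesn't match the server's API). A defensive guard might look roughly like this — a sketch only; the EmbeddingResponse struct and firstEmbedding helper are hypothetical, not codai's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// EmbeddingResponse mirrors the general shape of an embeddings API reply
// (hypothetical type; codai's real provider types may differ).
type EmbeddingResponse struct {
	Embeddings [][]float64
}

// firstEmbedding extracts the first vector with a length check instead of
// indexing resp.Embeddings[0] directly, which panics when the provider
// returns an empty body.
func firstEmbedding(resp EmbeddingResponse) ([]float64, error) {
	if len(resp.Embeddings) == 0 {
		return nil, errors.New("embedding provider returned no vectors; check embedding_url and embedding_model")
	}
	return resp.Embeddings[0], nil
}

func main() {
	// Empty response: returns an error instead of panicking.
	if _, err := firstEmbedding(EmbeddingResponse{}); err != nil {
		fmt.Println("error:", err)
	}
	// Non-empty response: returns the first vector.
	v, _ := firstEmbedding(EmbeddingResponse{Embeddings: [][]float64{{0.1, 0.2}}})
	fmt.Println("vector length:", len(v))
}
```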
meysamhadeli commented 1 hour ago


@maxandersen Hi. I used Ollama's original (native) endpoints, and based on the example below it works (please try this):

ai_provider_config:
  provider_name: "ollama" # openai | ollama
  chat_completion_url: "http://localhost:11434/api/chat"
  #chat_completion_model: "gpt-4o"
  chat_completion_model: "llama3.1"
  #embedding_url: "https://api.openai.com/v1/embeddings" #(Optional, If you want use RAG.)
  embedding_url: "http://localhost:11434/api/embed" #(Optional, If you want use RAG.)
  #embedding_model: "text-embedding-3-small" #(Optional, If you want use RAG.)
  embedding_model: "nomic-embed-text" #(Optional, If you want use RAG.)
  temperature: 0.2
  threshold: 0.3 #(Optional, If you want use RAG.)
theme: "dracula"
rag: true #(Optional, If you want use RAG.)

I will simplify the config to take just a base URL from the user in the next version, to prevent problems like this. I created issue #78 to implement that.
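
That simplification could look something like this — a rough sketch of the issue #78 idea, not codai's actual implementation; the helper name and endpoint paths are just the standard native Ollama routes:

```go
package main

import (
	"fmt"
	"strings"
)

// ollamaEndpoints derives the native Ollama chat and embedding URLs from a
// single base URL, so users would only configure e.g. "http://localhost:11434"
// instead of separate per-endpoint URLs (hypothetical helper).
func ollamaEndpoints(baseURL string) (chatURL, embedURL string) {
	base := strings.TrimRight(baseURL, "/")
	return base + "/api/chat", base + "/api/embed"
}

func main() {
	chat, embed := ollamaEndpoints("http://localhost:11434/")
	fmt.Println(chat)
	fmt.Println(embed)
}
```

This would make misconfigurations like mixing Ollama's base URL with OpenAI-style `/v1/embeddings` paths impossible, since the full endpoint is always built from one base.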