madox2 / vim-ai

AI-powered code assistant for Vim. OpenAI and ChatGPT plugin for Vim and Neovim.
MIT License

Gemini AI models support #111

Closed · Enrico68 closed this 4 months ago

Enrico68 commented 4 months ago

I've been using your excellent extension and find it incredibly helpful. I was wondering if you might consider integrating functionality for the Gemini AI platform in an upcoming release? This addition would greatly enhance the tool's versatility. Thank you for your consideration and for your ongoing work on this project. Kind regards, Enrico

madox2 commented 4 months ago

Hi, you can use Gemini with this plugin: first set up an OpenAI-compatible proxy such as LiteLLM, then configure it as a custom API.
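
For example, after starting the proxy with something like litellm --model gemini/gemini-pro, a minimal custom-API config could look like this (untested sketch, assuming LiteLLM's default port 4000):

" 4000 is LiteLLM's default port; match whatever port your proxy reports at startup
let g:vim_ai_chat = {
\  "options": {
\    "endpoint_url": "http://localhost:4000/v1/chat/completions",
\    "enable_auth": 0,
\  },
\}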

Enrico68 commented 4 months ago

Hi madox2,

Thanks for your quick reply. I've set up a proxy with LiteLLM using the following command:

litellm --model gemini/gemini-pro
INFO:     Started server process [33361]
INFO:     Waiting for application startup.

#------------------------------------------------------------#
#                                                            #
#        'It would help me if you could add...'              #
#        https://github.com/BerriAI/litellm/issues/new       #
#                                                            #
#------------------------------------------------------------#

Thank you for using LiteLLM! - Krrish & Ishaan

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
INFO:     127.0.0.1:63497 - "POST /chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:63497 - "POST /chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:63497 - "POST /completions HTTP/1.1" 200 OK

I tested the connection with:

litellm --test

LiteLLM: Making a test ChatCompletions request to your proxy. Model=gpt-3.5-turbo

LiteLLM: response from proxy ChatCompletion(id='chatcmpl-87472afd-9b68-4119-bb91-ee1afccd1e56', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="Midst thoughts profound, a pen in hand,\nA dance of words, at my command.\nIn this test request, I'll try,\nTo weave a poem that meets the eye.\n\nIn ethereal realm, where dreams reside,\nImagination's flight, I cannot hide.\nWith every stroke, a brush of grace,\nA tapestry of words, I'll embrace.\n\nEmotions stir, a symphony of sound,\nIn rhythm's embrace, new depths are found.\nEach verse a canvas, colors bright,\nA world of wonder, takes its flight.\n\nSo here's my humble verse, a test complete,\nA tapestry of words, a heartfelt beat.\nThough short in length, its essence true,\nA glimpse of beauty, just for you.", role='assistant', function_call=None, tool_calls=[]))], created=1720805178, model='gemini-pro', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=165, prompt_tokens=11, total_tokens=176))

LiteLLM: Making a test ChatCompletions + streaming request to proxy. Model=gpt-3.5-turbo
LiteLLM: streaming response from proxy ChatCompletionChunk(id='chatcmpl-5a2d6e34-15f5-4b0f-a6da-7dcf75b9b90e', choices=[Choice(delta=ChoiceDelta(content='In verdant fields, where whispers dance,\nA symphony of life, a', function_call=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1720805183, model='gemini-pro', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=None)
LiteLLM: streaming response from proxy ChatCompletionChunk(id='chatcmpl-5a2d6e34-15f5-4b0f-a6da-7dcf75b9b90e', choices=[Choice(delta=ChoiceDelta(content=' vibrant trance.\nEmerald hues cascade, a gentle breeze,\nStirring the tapestry of rustling trees.\n\nButterflies flutter, their wings a vibrant show,', function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1720805183, model='gemini-pro', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=None)
LiteLLM: streaming response from proxy ChatCompletionChunk(id='chatcmpl-5a2d6e34-15f5-4b0f-a6da-7dcf75b9b90e', choices=[Choice(delta=ChoiceDelta(content="\nNature's artistry, a vibrant glow.\nBirdsong fills the air, a cheerful choir,\nGuiding lost souls through dawn's desire.", function_call=None, role=None, tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1720805183, model='gemini-pro', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=None)
LiteLLM: streaming response from proxy ChatCompletionChunk(id='chatcmpl-5a2d6e34-15f5-4b0f-a6da-7dcf75b9b90e', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, role=None, tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=1720805183, model='gemini-pro', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=None)

LiteLLM: making completion request to proxy
Completion(id='chatcmpl-6b5765fe-eeb9-4532-9deb-b8208a42dd4a', choices=[CompletionChoice(finish_reason='stop', index=0, logprobs=None, text="In realms where words dance and thoughts ignite,\nA poet's soul weaves verse with all its might.\nFrom depths of heart, emotions find their voice,\nA symphony of words, a precious choice.\n\nThrough ink and paper, stories gently flow,\nA tapestry of dreams, where truths bestow.\nIn every line, a whisper of the soul,\nA timeless echo, making hearts whole.\n\nSo let the muse inspire, the pen shall roam,\nA poet's journey, where words find a home.\nIn each verse crafted, a world comes to light,\nA testament to beauty, taking flight.")], created=1720805183, model='gemini-pro', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=129, prompt_tokens=11, total_tokens=140))

I've configured my vimrc as follows:

" This prompt instructs the model to work with syntax highlighting
let s:initial_chat_prompt =<< trim END
>>> system

You are a general assistant.
If you attach a code block, add syntax type after ``` to enable syntax highlighting.
END

" :AIChat
" - options: openai config (see https://platform.openai.com/docs/api-reference/chat)
" - options.initial_prompt: prompt prepended to every chat request (list of lines or string)
" - options.request_timeout: request timeout in seconds
" - options.enable_auth: enable authorization using openai key
" - options.selection_boundary: selection prompt wrapper (eliminates empty responses, see #20)
" - ui.populate_options: put [chat-options] to the chat header
" - ui.open_chat_command: preset (preset_below, preset_tab, preset_right) or a custom command
" - ui.scratch_buffer_keep_open: re-use scratch buffer within the vim session
" - ui.paste_mode: use paste mode (see more info in the Notes below)
let g:vim_ai_chat = {
\  "options": {
\    "endpoint_url": "http://localhost:8000/v1/chat/completions",
\    "enable_auth": 0,
\    "max_tokens": 0,
\    "temperature": 1,
\    "request_timeout": 20,
\    "selection_boundary": "",
\    "initial_prompt": s:initial_chat_prompt,
\  },
\  "ui": {
\    "code_syntax_enabled": 1,
\    "populate_options": 0,
\    "open_chat_command": "preset_below",
\    "scratch_buffer_keep_open": 0,
\    "paste_mode": 1,
\  },
\}

However, when I try the AI command, I get the following error:

File "/usr/local/Cellar/python@3.12/3.12.4/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1347, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 61] Connection refused>

I'm not sure where I've gone wrong. Any guidance would be greatly appreciated.

Thanks,

madox2 commented 4 months ago

Not sure, but it looks like your proxy is running on port 4000 while you configured it on 8000.
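
Changing the endpoint in your vimrc to match should fix it:

\    "endpoint_url": "http://localhost:4000/v1/chat/completions",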

Enrico68 commented 4 months ago

Yeah, thanks a lot! E.