jupyterlab / jupyter-ai

A generative AI extension for JupyterLab
https://jupyter-ai.readthedocs.io/
BSD 3-Clause "New" or "Revised" License
3.02k stars · 299 forks

Google Gemini API support #395

Closed: richlysakowski closed this issue 4 months ago

richlysakowski commented 10 months ago

Problem

Support for (a wrapper around) Google PaLM and Bard is missing. I don't find any mention of it here, yet PaLM is one of the major LLMs. (@JasonWeill on 2023-12-29: The Gemini API is replacing PaLM — thanks @simonff)

Proposed Solution

Can we start a discussion about the requirements, packages, and tools needed to bring PaLM and Bard functionality into Jupyter AI?

Is there a description of how to wrap new LLM engines?

Are there template wrappers or decorators to make this easy?

What limitations do we need to take into account for this integration? I know that Google has been blocking Python wrappers that reverse engineer Bard, but what about PaLM API? What are the most effective workarounds for using a wrapper around Bard Chat?

Please help with this integration so we can get it done soon; I am getting requests for it from Jupyter AI users I am training.

3coins commented 10 months ago

@richlysakowski As long as LangChain has an LLM class for the provider you are interested in, it can be added to Jupyter AI. To get started, here are some existing provider implementations: https://github.com/jupyterlab/jupyter-ai/blob/main/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py#L450

Here are the steps to add a new provider:

  1. Add a new class to providers.py, extending BaseProvider and the relevant LangChain class (e.g., VertexAI).
  2. Look at the existing implementations I referred to above, and add the model list plus any fields required for API keys, config, etc.
  3. Add the provider id to the pyproject.toml file; the name should match the id of the class you created in step 1.
  4. Add the new class to the imports here.
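
The steps above can be sketched roughly as follows. This is a hedged illustration only: `BaseProvider` here is a minimal stand-in for `jupyter_ai_magics.providers.BaseProvider` (the real class carries much more machinery), and `GeminiProvider`, its `id`, model list, and auth field are hypothetical names, not a shipped implementation.

```python
# Minimal stand-in for jupyter_ai_magics.providers.BaseProvider, so that
# this sketch is self-contained and runnable outside the repo.
class BaseProvider:
    id: str = "base"
    name: str = "Base"
    models: list = []
    auth_strategy = None

# Steps 1-2: a hypothetical provider carrying a model list and the
# environment variable that holds the API key.
class GeminiProvider(BaseProvider):
    id = "gemini"                      # must match the id registered in pyproject.toml (step 3)
    name = "Gemini"
    models = ["gemini-pro"]            # models offered in the chat UI
    auth_strategy = "GOOGLE_API_KEY"   # placeholder for the real auth-strategy object

# Step 3 corresponds to registering the class under the model-provider
# entry-point group in pyproject.toml (group name assumed from the repo;
# verify against the actual file), e.g.:
#
# [project.entry-points."jupyter_ai.model_providers"]
# gemini = "jupyter_ai_magics:GeminiProvider"
```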
ishaan-jaff commented 9 months ago

Hi @richlysakowski @3coins - I believe we can make this easier. I'm the maintainer of LiteLLM; we let you deploy an LLM proxy that calls 100+ LLMs in one format (PaLM, Bedrock, OpenAI, Anthropic, etc.): https://github.com/BerriAI/litellm/tree/main/openai-proxy

If this looks useful (we're used in production), please let me know how we can help.

Usage

PaLM request

curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "palm/chat-bison",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

gpt-3.5-turbo request

curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

claude-2 request

curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "claude-2",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'
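
The same requests can of course be issued from Python. The sketch below only builds the OpenAI-format payload that all three curl commands send (the proxy URL and model names are taken from the examples above; `chat_payload` is a hypothetical helper, not part of LiteLLM):

```python
import json

# Build the OpenAI-format chat payload used by the curl examples above;
# when routed through the LiteLLM proxy, only the "model" string changes
# between providers.
def chat_payload(model: str, prompt: str, temperature: float = 0.7) -> str:
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    })

body = chat_payload("palm/chat-bison", "Say this is a test!")

# Sending it (assuming the proxy from the examples is running locally):
# import urllib.request
# req = urllib.request.Request(
#     "http://0.0.0.0:8000/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```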
simonff commented 6 months ago

Gemini API is superseding Palm API.

I'll follow @3coins's advice to add a new provider, but I can't figure out how to use jupyter-ai from local sources, so I might need some help with testing.

krassowski commented 6 months ago

@simonff do you need help setting up jupyter-ai locally for development? Did you encounter any error, or failed at a particular step?

simonff commented 6 months ago

@krassowski: I think various installed versions got mixed up; the last error I got is below. Yes, it would be great to have some self-contained instructions for running jupyter-ai locally. For now I'm just using GitHub Actions to run tests.

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/notebook/traittypes.py", line 235, in _resolve_classes
    klass = self._resolve_string(klass)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/traitlets.py", line 2018, in _resolve_string
    return import_item(string)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/utils/importstring.py", line 31, in import_item
    module = __import__(package, fromlist=[obj])
ModuleNotFoundError: No module named 'jupyter_server.contents'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/jupyter-notebook", line 33, in <module>
    sys.exit(load_entry_point('notebook==6.4.12', 'console_scripts', 'jupyter-notebook')())
  File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 282, in launch_instance
    super().launch_instance(argv=argv, **kwargs)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/config/application.py", line 1075, in launch_instance
    app = cls.instance(**kwargs)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/config/configurable.py", line 583, in instance
    inst = cls(*args, **kwargs)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/traitlets.py", line 1294, in __new__
    inst.setup_instance(*args, **kwargs)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/traitlets.py", line 1337, in setup_instance
    super(HasTraits, self).setup_instance(*args, **kwargs)
  File "/usr/local/google/home/simonf/.local/lib/python3.11/site-packages/traitlets/traitlets.py", line 1313, in setup_instance
    init(self)
  File "/usr/lib/python3/dist-packages/notebook/traittypes.py", line 226, in instance_init
    self._resolve_classes()
  File "/usr/lib/python3/dist-packages/notebook/traittypes.py", line 238, in _resolve_classes
    warn(f"{klass} is not importable. Is it installed?", ImportWarning)
TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'

krassowski commented 6 months ago

Yes, it looks like it stems from a conflict between packages installed at the system level and a local version of Python. I would strongly suggest developing in a self-contained virtual environment.

dlqqq commented 6 months ago

@simonff To do a development installation locally, run these in your terminal:

# verify that you are at the root of the repo
pwd

# create & activate a new isolated Python environment named `jupyter-ai-dev`
# I recommend using `micromamba`, but you can use `conda` or `venv`
micromamba create -yn jupyter-ai-dev python=3.11 jupyterlab
micromamba activate jupyter-ai-dev

# install JS dependencies
jlpm

# build JS code & install `jupyter-ai` locally in this environment
jlpm dev-install
simonff commented 6 months ago

I don't have conda or micromamba installed, so I'm trying venv. This worked:

./myenv/bin/pip install jupyterlab

But for the next step I get:

./myenv/bin/pip install jupyter-ai-dev
ERROR: Could not find a version that satisfies the requirement jupyter-ai-dev (from versions: none)
ERROR: No matching distribution found for jupyter-ai-dev

Sorry for beginner questions, I usually just run "pip install --break-system-packages". :)

krassowski commented 6 months ago

In the example above, jupyter-ai-dev is the name of the environment, not of an installable package. Once you activate the environment, you should have a script named jlpm, which is installed with the jupyterlab package. Running jlpm && jlpm dev-install should do the trick.

simonff commented 6 months ago

Ok, 'jlpm' by itself runs (and I see it's coming from the myenv environment), but then I get (running from the root of the git repo):

jlpm dev-install

lerna notice cli v6.6.2

Lerna (powered by Nx) The following projects do not have a configuration for any of the provided targets ("dev-install")

@jupyter-ai/magics:dev-install

@jupyter-ai/magics: error: externally-managed-environment
@jupyter-ai/magics: × This environment is externally managed
@jupyter-ai/magics: ╰─> To install Python packages system-wide, try apt install
@jupyter-ai/magics:     python3-xyz, where xyz is the package you are trying to install.
@jupyter-ai/magics:
@jupyter-ai/magics:     If you wish to install a non-Debian-packaged Python package,
@jupyter-ai/magics:     create a virtual environment using python3 -m venv path/to/venv.
@jupyter-ai/magics:     Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
@jupyter-ai/magics:     sure you have python3-full installed.
@jupyter-ai/magics:
@jupyter-ai/magics:     If you wish to install a non-Debian packaged Python application,
@jupyter-ai/magics:     it may be easiest to use pipx install xyz, which will manage a
@jupyter-ai/magics:     virtual environment for you. Make sure you have pipx installed.
@jupyter-ai/magics:
@jupyter-ai/magics:     See /usr/share/doc/python3.11/README.venv for more information.
@jupyter-ai/magics: note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
@jupyter-ai/magics: hint: See PEP 668 for the detailed specification.

——————————————————————————————————————————————————————————————————————————————————

Lerna (powered by Nx) Running target dev-install for 2 projects failed

Tasks not run because their dependencies failed or --nx-bail=true:

simonff commented 6 months ago

Meanwhile, this is the new code so far:

https://github.com/jupyterlab/jupyter-ai/compare/main...simonff:jupyter-ai:main

giswqs commented 4 months ago

I tried to add the Gemini provider, but could not get Gemini to show up in the model list. Any help would be appreciated. https://github.com/jupyterlab/jupyter-ai/pull/666

dlqqq commented 4 months ago

This issue should be resolved by #666, which adds support for Gemini. Thanks to @giswqs for working on this! Users will have access to this in the next release, which we are planning for sometime this week.

giswqs commented 4 months ago

@dlqqq Thank you for your help! I couldn't have done it without your guidance. Thank you for accepting the PR.