jupyterlab / jupyter-ai

A generative AI extension for JupyterLab
https://jupyter-ai.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

Self-hosted LLM support #661

Open Mrjaggu opened 6 months ago

Mrjaggu commented 6 months ago

Problem

We want to access our own custom-trained LLM through a private endpoint hosted in our local environment.

welcome[bot] commented 6 months ago

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively. You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:

dlqqq commented 6 months ago

@Mrjaggu Thank you for opening this issue! This is already possible if the local LLM exposes an OpenAI-compatible API. To do so, select any "OpenAI Chat" model and set the "Base URL" field to localhost and your port number.
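A quick way to confirm the endpoint actually speaks the OpenAI protocol before pointing Jupyter AI at it is to call it directly. A minimal sketch with the openai client; the port 8000, the /v1 path, and the model id are placeholder assumptions, so substitute whatever your local server exposes:

```python
# Sketch: verify a self-hosted, OpenAI-compatible endpoint directly.
# The host/port, the /v1 path, and the model id below are assumptions;
# use whatever your local server actually serves.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # same value as the "Base URL" field
    api_key="not-needed",                 # many local servers accept any key
)

response = client.chat.completions.create(
    model="my-local-model",  # hypothetical model id served locally
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

If this call succeeds, the same base URL should work in the Jupyter AI settings.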

If this doesn't meet your use case, however, please feel free to describe your problem in more detail. For example, which self-hosted LLM services are you trying to use?

dlqqq commented 6 months ago

See #389 for existing discussion on using self-hosted LLMs through the strategy I just described.

DanielCastroBosch commented 2 months ago

Is it possible to use an internal LLM on the same network with a token provided by MS Entra? We have the following steps:

Step 1 - Get the token

Authorization:
https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize?response_type=code&client_id=<client id>&scope=<api scope>&redirect_uri=<redirect_uri>

This returns: https://<redirect_uri>?code=<code>&session_state=<session_state>

Then exchange the code for a token:

POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
grant_type: "authorization_code"
code: "<code generated in the previous step>"
redirect_uri: "<redirect_uri>"
client_id: "<client_id>"
client_secret: "<client secret>"
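In Python, this code-for-token exchange looks roughly like the sketch below; the authorization code itself still has to come from the browser redirect above, and all angle-bracket values are the same placeholders as in the request body:

```python
# Step 1 sketch: exchange the authorization code for a user token.
# All angle-bracket values are placeholders; substitute real values.
import requests

resp = requests.post(
    "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token",
    data={
        "grant_type": "authorization_code",
        "code": "<code generated in the previous step>",
        "redirect_uri": "<redirect_uri>",
        "client_id": "<client_id>",
        "client_secret": "<client secret>",
    },
)
user_token = resp.json()["access_token"]
```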

Step 2 - Get App Context id

POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
client_id: "<client id>"
scope: "<api scope>"
client_secret: "<client secret>"
grant_type: "client_credentials"
Response Body:
{"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"<token>"}
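A minimal sketch of this step with the requests library, again with the angle-bracket placeholders from above:

```python
# Step 2 sketch: obtain an app token via the client_credentials grant.
import requests

token_resp = requests.post(
    "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token",
    data={
        "client_id": "<client id>",
        "scope": "<api scope>",
        "client_secret": "<client secret>",
        "grant_type": "client_credentials",
    },
)
access_token = token_resp.json()["access_token"]  # Bearer token, ~3600 s lifetime
```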
Step 3 - Send the message

POST https://<LLM Server>/api/tryout/v1/public/gpt3/chats/messages
Request Body:
{"messages":[{"role":"user","content":"good morning"}],"model":"gpt3","temperature":0.1}
Response Body:
[{"role":"assistant","content":"Good morning! How are you today?","tokenCount":17,"tokenLimitExceeded":false}]
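And a sketch of the message call itself, using the Bearer token from Step 2; <LLM Server> is the placeholder host from the step above:

```python
# Step 3 sketch: post a chat message with the Bearer token from Step 2.
import requests

chat_resp = requests.post(
    "https://<LLM Server>/api/tryout/v1/public/gpt3/chats/messages",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "messages": [{"role": "user", "content": "good morning"}],
        "model": "gpt3",
        "temperature": 0.1,
    },
)
print(chat_resp.json())  # -> [{"role": "assistant", "content": "...", ...}]
```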

How do I configure the Jupyter AI assistant to work with this flow?