Mrjaggu opened this issue 8 months ago
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
@Mrjaggu Thank you for opening this issue! This is already possible if the local LLM supports an "OpenAI-like" API. To do so, you should select any "OpenAI Chat" model, and set the "Base URL" field to localhost and your port number.
If this doesn't meet your use case, however, then please feel free to describe your problem in more detail. For example, what self-hosted LLM services are you trying to use?
See #389 for existing discussion on using self-hosted LLMs through the strategy I just described.
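For context, "OpenAI-like" here means the server speaks the OpenAI chat completions protocol. A minimal sketch of what that looks like, using the official `openai` Python client pointed at a hypothetical local server (the port, path, model name, and API key are assumptions; substitute your own deployment):

```python
from openai import OpenAI

# Point the standard OpenAI client at a local server instead of api.openai.com.
# The base URL and model name are placeholders for your own deployment; many
# local servers accept any dummy API key.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "good morning"}],
)
print(response.choices[0].message.content)
```

Setting the "Base URL" field in Jupyter AI points the "OpenAI Chat" provider at the same endpoint, so if a snippet like this works against your server, the provider should too.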
Is it possible to use an internal LLM on the same network, with a token provided by MS Entra? We have the following steps:
Step 1 - Get user token
https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize?response_type=code&client_id=<client id>&scope=<api scope>&redirect_uri=<redirect_uri>
This returns:
https://<redirect_uri>?code=<code>&session_state=<session_state>
Then:
POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
grant_type: "authorization_code"
code: "<code generated in the previous step>"
redirect_uri: "<redirect_uri>"
client_id: "<client_id>"
client_secret: "<client secret>"
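Roughly, this exchange in Python with `requests` (all bracketed values are placeholders for our tenant and app registration):

```python
import requests

# Placeholders for the Entra tenant and app registration.
tenant_id = "<tenant id>"
client_id = "<client id>"
client_secret = "<client secret>"
redirect_uri = "<redirect_uri>"
auth_code = "<code captured from the redirect above>"

# Exchange the authorization code from step 1 for a user token.
resp = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    },
)
resp.raise_for_status()
user_token = resp.json()["access_token"]
```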
Step 2 - Get App Context id
POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body
client_id: "<client id>"
scope: "<api scope>"
client_secret: "<client secret>"
grant_type: "client_credentials"
Response Body
{"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"<token>"}
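The same step with `requests` (again, all bracketed values are placeholders):

```python
import requests

tenant_id = "<tenant id>"
client_id = "<client id>"
client_secret = "<client secret>"
api_scope = "<api scope>"

# Step 2: request an app-context token via the client_credentials grant.
resp = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": api_scope,
    },
)
resp.raise_for_status()
app_token = resp.json()["access_token"]  # Bearer token, expires in ~3599 s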
POST https://<LLM Server>/api/tryout/v1/public/gpt3/chats/messages
Request Body:
{"messages":[{"role":"user","content":"good morning"}],"model":"gpt3","temperature":0.1}
Response Body:
[{"role":"assistant","content":"Good morning! How are you today?","tokenCount":17,"tokenLimitExceeded":false}]
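Putting it together, the chat request we need Jupyter AI to issue looks roughly like this sketch (attaching the token as a Bearer header is my assumption, since the steps above don't show how the token is passed to the LLM server):

```python
import requests

llm_server = "<LLM Server>"
app_token = "<access_token from step 2>"

# Call the internal chat endpoint, authenticating with the Entra token.
# The Authorization header format is an assumption; adjust to your API.
resp = requests.post(
    f"https://{llm_server}/api/tryout/v1/public/gpt3/chats/messages",
    headers={"Authorization": f"Bearer {app_token}"},
    json={
        "messages": [{"role": "user", "content": "good morning"}],
        "model": "gpt3",
        "temperature": 0.1,
    },
)
resp.raise_for_status()
print(resp.json()[0]["content"])
```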
How do I configure the Jupyter AI assistant to work with that?
Problem
To access our own custom-trained LLM model using a private endpoint hosted in a local environment.