vercel / ai-chatbot

A full-featured, hackable Next.js AI chatbot built by Vercel
https://chat.vercel.ai

Customized OpenAI-like server URL? #141

Closed lucasjinreal closed 11 months ago

lucasjinreal commented 12 months ago

Hello, I tried using a customized OpenAI-like server, but it does not work:

const configuration = new Configuration({
  // apiKey: process.env.OPENAI_API_KEY,
  apiKey: 'sk354t3gdgtrh',
  basePath: 'http://llm.mxxi.cn:8080/v1'
})

const openai = new OpenAIApi(configuration)

what could I do?

I don't really understand. My API server, built with FastAPI, serves many OpenAI-compatible clients; only yours fails.


Why?
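One way to isolate the problem is to hit the server directly with fetch, bypassing the SDK entirely. A minimal sketch, assuming the base path and key from the snippet above; the modelsUrl and listModels helpers are illustrative, not part of any library:

```typescript
// Sanity check for an OpenAI-compatible server, bypassing the openai-edge
// client. `modelsUrl` normalizes trailing slashes so '.../v1' and '.../v1/'
// behave the same.
function modelsUrl(basePath: string): string {
  return basePath.replace(/\/+$/, '') + '/models'
}

async function listModels(basePath: string, apiKey: string) {
  const res = await fetch(modelsUrl(basePath), {
    headers: { Authorization: `Bearer ${apiKey}` }
  })
  console.log(res.status) // anything but 200 points at reachability or auth
  return res.json()
}

// listModels('http://llm.mxxi.cn:8080/v1', 'sk354t3gdgtrh')
```

If this request fails from the same environment, the problem is server reachability or the key, not the chatbot code.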

kajusarkar commented 11 months ago

Can you give the steps here to reproduce the error?

louis030195 commented 11 months ago

Same error with FastChat:


python3 -m fastchat.serve.controller

# gpu
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.3

# cpu
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.3 --device cpu

# mps
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.3 --device mps --load-8bit

python3 -m fastchat.serve.openai_api_server --host localhost --port 8000

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vicuna-7b-v1.3",
    "messages": [{"role": "user", "content": "Hello! What is your name?"}]
  }'
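For comparison, the same request can be issued from TypeScript with plain fetch (available in the edge runtime). The model name and port are the ones from the commands above; chatRequest is just an illustrative helper that builds the headers and JSON body:

```typescript
// The curl call above, reproduced with fetch. `chatRequest` assembles the
// request options; nothing here depends on the openai-edge or ai packages.
function chatRequest(model: string, content: string) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content }]
    })
  }
}

// await fetch('http://localhost:8000/v1/chat/completions',
//   chatRequest('vicuna-7b-v1.3', 'Hello! What is your name?'))
```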

curl works, but with the ai package it fails.


import { kv } from '@vercel/kv'
import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

import { auth } from '@/auth'
import { nanoid } from '@/lib/utils'

export const runtime = 'edge'

// const configuration = new Configuration({
//   apiKey: process.env.OPENAI_API_KEY
// })

const configuration = new Configuration({
  apiKey: 'EMPTY', // Use a valid key if provided in the --api-keys flag
  apiBase: 'http://localhost:8000/v1' // Point to your local Vicuna instance
})

const openai = new OpenAIApi(configuration)

export async function POST(req: Request) {
  const json = await req.json()
  const { messages, previewToken } = json
  const userId = (await auth())?.user.id

  if (!userId) {
    return new Response('Unauthorized', {
      status: 401
    })
  }

  if (previewToken) {
    configuration.apiKey = previewToken
  }

  // const res = await openai.createChatCompletion({
  //   model: 'gpt-3.5-turbo',
  //   messages,
  //   temperature: 0.7,
  //   stream: true
  // })

  // curl http://localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{
  //   "model": "vicuna-7b-v1.3",
  //   "messages": [{"role": "user", "content": "Hello! What is your name?"}], "stream": true
  // }'

  console.log('messages', messages)
  const res = await openai.createChatCompletion({
    model: 'vicuna-7b-v1.3', // Use your Vicuna model
    messages,
    // temperature: 0.7,
    stream: true
  })

  console.log(res.status) // 401

  const stream = OpenAIStream(res, {
    async onCompletion(completion) {
      const title = json.messages[0].content.substring(0, 100)
      const id = json.id ?? nanoid()
      const createdAt = Date.now()
      const path = `/chat/${id}`
      const payload = {
        id,
        title,
        userId,
        createdAt,
        path,
        messages: [
          ...messages,
          {
            content: completion,
            role: 'assistant'
          }
        ]
      }
      await kv.hmset(`chat:${id}`, payload)
      await kv.zadd(`user:chat:${userId}`, {
        score: createdAt,
        member: `chat:${id}`
      })
    }
  })

  return new StreamingTextResponse(stream)
}
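A likely cause of the 401 in the snippet above: openai-edge follows the openai v3 SDK's ConfigurationParameters, where the base-URL option is named basePath. An unrecognized key like apiBase is silently ignored, so every request falls back to the default https://api.openai.com/v1 with the placeholder key 'EMPTY' and gets rejected. A sketch of the effect; effectiveBasePath models the behavior and is not openai-edge internals:

```typescript
// Why `apiBase` yields a 401: only `basePath` is consulted when the request
// URL is built, so an `apiBase` key is silently dropped and the client talks
// to the default OpenAI endpoint with the placeholder key.
// (`effectiveBasePath` is an illustrative model, not library code.)
const DEFAULT_BASE_PATH = 'https://api.openai.com/v1'

interface ConfigParams {
  apiKey?: string
  basePath?: string // the recognized option
  apiBase?: string  // not recognized; included only to model the typo
}

function effectiveBasePath(params: ConfigParams): string {
  return params.basePath ?? DEFAULT_BASE_PATH
}

// The fix is to rename the option:
// const configuration = new Configuration({
//   apiKey: 'EMPTY',
//   basePath: 'http://localhost:8000/v1'
// })
```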
lucasjinreal commented 11 months ago

I fixed it.

AmbroxMr commented 11 months ago

I fixed it.

Hi! How did you solve it?

ClementSicard commented 11 months ago

I fixed it.

Hi! How did you solve it?

I followed the steps from @anhhtz here: https://github.com/vercel-labs/ai-chatbot/issues/102#issuecomment-1635454834 and it worked 🚢