microsoft / teams-ai

SDK focused on building AI-based applications and extensions for Microsoft Teams and other Bot Framework channels
MIT License

[Bug]: Error generating embeddings for query: AxiosError: Request failed with status code 404 #1162

Closed: junhonglau closed this issue 9 months ago

junhonglau commented 9 months ago

Language

JavaScript/TypeScript

Version

latest

Description

I am experimenting with the Teams AI Chef sample using Azure OpenAI, but I keep hitting the error below:

 [onTurnError] unhandled error: Error: Error generating embeddings for query: AxiosError: Request failed with status code 404
Error: Error generating embeddings for query: AxiosError: Request failed with status code 404
    at LocalDocumentIndex.<anonymous> (C:\Users\junho\Downloads\TeamChef\teams-ai\js\node_modules\vectra\src\LocalDocumentIndex.ts:269:19)
    at Generator.throw (<anonymous>)
    at rejected (C:\Users\junho\Downloads\TeamChef\teams-ai\js\node_modules\vectra\lib\LocalDocumentIndex.js:29:65)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)

Source code snippet (.env): I changed the Azure OpenAI key and endpoint. The values follow the Playground (Show Code) output, so they should be correct.

OPENAI_KEY=
AZURE_OPENAI_KEY=xxxxxx
AZURE_OPENAI_ENDPOINT=https://<my-azureopenai-azureresourcesname>.openai.azure.com/openai/deployments/<my-depoyedmodelname>/chat/completions
BOT_ID=xxx
BOT_PASSWORD=xxxxx

index.ts: changed the azureApiVersion

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

// Import required packages
import { config } from 'dotenv';
import * as path from 'path';
import * as restify from 'restify';

// Import required bot services.
// See https://aka.ms/bot-services to learn more about the different parts of a bot.
import {
    CloudAdapter,
    ConfigurationBotFrameworkAuthentication,
    ConfigurationServiceClientCredentialFactory,
    MemoryStorage,
    TurnContext
} from 'botbuilder';

// Read botFilePath and botFileSecret from .env file.
const ENV_FILE = path.join(__dirname, '..', '.env');
config({ path: ENV_FILE });

const botFrameworkAuthentication = new ConfigurationBotFrameworkAuthentication(
    {},
    new ConfigurationServiceClientCredentialFactory({
        MicrosoftAppId: process.env.BOT_ID,
        MicrosoftAppPassword: process.env.BOT_PASSWORD,
        MicrosoftAppType: 'MultiTenant'
    })
);

// Create adapter.
// See https://aka.ms/about-bot-adapter to learn more about how bots work.
const adapter = new CloudAdapter(botFrameworkAuthentication);

// Catch-all for errors.
const onTurnErrorHandler = async (context: TurnContext, error: any) => {
    // This check writes out errors to console log .vs. app insights.
    // NOTE: In production environment, you should consider logging this to Azure
    //       application insights.
    console.error(`\n [onTurnError] unhandled error: ${error}`);
    console.log(error);

    // Send a trace activity, which will be displayed in Bot Framework Emulator
    await context.sendTraceActivity(
        'OnTurnError Trace',
        `${error}`,
        'https://www.botframework.com/schemas/error',
        'TurnError'
    );

    // Send a message to the user
    await context.sendActivity('The bot encountered an error or bug.');
    await context.sendActivity('To continue to run this bot, please fix the bot source code.');
};

// Set the onTurnError for the singleton CloudAdapter.
adapter.onTurnError = onTurnErrorHandler;

// Create HTTP server.
const server = restify.createServer();
server.use(restify.plugins.bodyParser());

server.listen(process.env.port || process.env.PORT || 3978, () => {
    console.log(`\n${server.name} listening to ${server.url}`);
    console.log('\nGet Bot Framework Emulator: https://aka.ms/botframework-emulator');
    console.log('\nTo test your bot in Teams, sideload the app manifest.json within Teams Apps.');
});

import { AI, Application, ActionPlanner, OpenAIModel, PromptManager, TurnState } from '@microsoft/teams-ai';
import { addResponseFormatter } from './responseFormatter';
import { VectraDataSource } from './VectraDataSource';

// eslint-disable-next-line @typescript-eslint/no-empty-interface
interface ConversationState {}
type ApplicationTurnState = TurnState<ConversationState>;

if (!process.env.OPENAI_KEY && !process.env.AZURE_OPENAI_KEY) {
    throw new Error('Missing environment variables - please check that OPENAI_KEY or AZURE_OPENAI_KEY is set.');
}

// Create AI components
const model = new OpenAIModel({
    // OpenAI Support
    // apiKey: process.env.OPENAI_KEY!,
    // defaultModel: 'gpt-3.5-turbo',

    // Azure OpenAI Support
    azureApiKey: process.env.AZURE_OPENAI_KEY!,
    azureDefaultDeployment: 'gpt-35-turbo',
    azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!,
    azureApiVersion: '2023-07-01-preview',

    // Request logging
    logRequests: true
});

const prompts = new PromptManager({
    promptsFolder: path.join(__dirname, '../src/prompts')
});

const planner = new ActionPlanner({
    model,
    prompts,
    defaultPrompt: 'chat'
});

// Define storage and application
const storage = new MemoryStorage();
const app = new Application<ApplicationTurnState>({
    storage,
    ai: {
        planner
    }
});

// Register your data source with planner
planner.prompts.addDataSource(
    new VectraDataSource({
        name: 'teams-ai',
        apiKey: process.env.OPENAI_KEY!,
        azureApiKey: process.env.AZURE_OPENAI_KEY!,
        azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!,
        indexFolder: path.join(__dirname, '../index')
    })
);

// Add a custom response formatter to convert markdown code blocks to <pre> tags
addResponseFormatter(app);

// Register other AI actions
app.ai.action(
    AI.FlaggedInputActionName,
    async (context: TurnContext, state: ApplicationTurnState, data: Record<string, any>) => {
        await context.sendActivity(`I'm sorry your message was flagged: ${JSON.stringify(data)}`);
        return AI.StopCommandName;
    }
);

app.ai.action(AI.FlaggedOutputActionName, async (context: TurnContext, state: ApplicationTurnState, data: any) => {
    await context.sendActivity(`I'm not allowed to talk about such things.`);
    return AI.StopCommandName;
});

// Listen for incoming server requests.
server.post('/api/messages', async (req, res) => {
    // Route received a request to adapter for processing
    await adapter.process(req, res as any, async (context) => {
        // Dispatch to application for routing
        await app.run(context);
    });
});

Reproduction Steps

1. Followed the README.md and completed all the steps.
2. Able to launch the application in Teams.
3. No welcome message showed up when the application was added.
4. Sent a prompt.
5. Error messages returned from the bot:
- The bot encountered an error or bug.
- To continue to run this bot, please fix the bot source code.
6. The debug output in VS Code shows the error below:
[onTurnError] unhandled error: Error: Error generating embeddings for query: AxiosError: Request failed with status code 404
Error: Error generating embeddings for query: AxiosError: Request failed with status code 404
    at LocalDocumentIndex.<anonymous> (C:\Users\junho\Downloads\TeamChef\teams-ai\js\node_modules\vectra\src\LocalDocumentIndex.ts:269:19)
    at Generator.throw (<anonymous>)
    at rejected (C:\Users\junho\Downloads\TeamChef\teams-ai\js\node_modules\vectra\lib\LocalDocumentIndex.js:29:65)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
...
dchan14 commented 9 months ago

same issue here

lilyydu commented 9 months ago

Hi @junhonglau and @dchan14, I see that your endpoint in the .env is incorrect. It should be

https://<my-azureopenai-azureresourcesname>.openai.azure.com

Our AI library appends the path for specific API calls (e.g., Completions) to this base endpoint.
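To illustrate the point above: a sketch of how passing a full Playground-style URL as the base endpoint can produce a doubled path and hence a 404. The `buildUrl` helper and the path it appends are illustrative, not the SDK's actual internals.

```typescript
// Illustrative helper: the library composes the request URL from a base
// endpoint plus its own deployment path (buildUrl is hypothetical).
function buildUrl(endpoint: string, deployment: string, apiVersion: string): string {
    const base = endpoint.replace(/\/+$/, ''); // trim trailing slashes
    return `${base}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}

// Correct: pass only the resource root, as lilyydu suggests.
const good = buildUrl('https://my-resource.openai.azure.com', 'gpt-35-turbo', '2023-07-01-preview');

// Wrong: a full Playground-style URL makes the path get appended twice,
// yielding a route the service does not serve (hence the 404).
const bad = buildUrl(
    'https://my-resource.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions',
    'gpt-35-turbo',
    '2023-07-01-preview'
);

console.log(good);
console.log(bad);
```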

corinagum commented 9 months ago

Closing as question answered :)

linjungz commented 7 months ago

In addition to changing the GPT deployment name in config.json and index.ts, you also need to change the embedding deployment name in VectraDataSource.ts:

https://github.com/microsoft/teams-ai/blob/dccaeb01417f709c5da71e6b698f3761e57ce554/js/samples/04.ai.a.teamsChefBot/src/VectraDataSource.ts#L85

The default value is embedding, but you should change it to your actual deployment name.

linjungz commented 7 months ago

I think we should change this value to be loaded from .env file instead of hardcoding here.
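A minimal sketch of that suggestion, assuming a variable name of AZURE_OPENAI_EMBEDDING_DEPLOYMENT (which the sample does not define today): resolve the embedding deployment name from the environment, falling back to the sample's hardcoded default.

```typescript
// Sketch: read the embedding deployment name from the environment instead
// of hardcoding it. AZURE_OPENAI_EMBEDDING_DEPLOYMENT is an assumed name.
function getEmbeddingDeployment(env: Record<string, string | undefined>): string {
    return env.AZURE_OPENAI_EMBEDDING_DEPLOYMENT ?? 'embedding';
}

// In index.ts one would pass process.env; shown here with literals.
console.log(getEmbeddingDeployment({})); // falls back to the default 'embedding'
console.log(getEmbeddingDeployment({ AZURE_OPENAI_EMBEDDING_DEPLOYMENT: 'text-embedding-ada-002' }));
```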

linjungz commented 7 months ago

hi @lilyydu @corinagum, if needed, I could submit a PR to fix this issue. Thanks

corinagum commented 7 months ago

hi @lilyydu @corinagum, if needed, I could submit a PR to fix this issue. Thanks

Perhaps you could file a new issue as a suggested work item, and from there the team can discuss whether we want to proceed with the change?