huggingface / chat-ui

Open source codebase powering the HuggingChat app
https://huggingface.co/chat
Apache License 2.0

assistant - function calling - thoughts / roadmap eta #788

Open · johndpope opened this issue 5 months ago

johndpope commented 5 months ago

I'd like to draw your attention to some useful implementations of function calling with local LLMs:

@amalshaji https://github.com/amalshaji/actionai

@rizerphe https://github.com/rizerphe/local-llm-function-calling

I use this functionality in ChatGPT, for example to call Stable Diffusion to produce images for a text-based video game, and I'm also interested in having an internal finance app call APIs to fetch charts, etc.

Presumably you've seen the inside of ChatGPT's GPTs already, but for reference (see the attached screenshot, "Screenshot from 2024-02-06 23-14-16"): as part of the assistant prompt, I just say something like:

When generating images, the GPT will provide the response in HTML format, displaying the image directly from the provided URL "https://covershot.ngrok.app/images/[image_id].png". This will enhance the visual experience by showing the images in their full glory, directly within the narrative flow.
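
For readers who haven't built one of these: a prompt like the above is typically paired with a function (tool) definition the model can call. Here is a minimal sketch in the OpenAI tools format; the function name, parameters, and serving URL are hypothetical:

// Sketch of an OpenAI-style tool definition for the image use case above.
// The function name, parameter set, and URL scheme are illustrative only.
const generateImageTool = {
    type: 'function',
    function: {
        name: 'generate_image',
        description: 'Render a Stable Diffusion image and return its URL',
        parameters: {
            type: 'object',
            properties: {
                prompt: { type: 'string', description: 'What to draw' },
            },
            required: ['prompt'],
        },
    },
};
// The tool's result (an image URL like the ngrok one above) is then
// embedded in the reply as an HTML <img> tag.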

Because this is open source and hosted on GitHub, I think it would be better to let engineers build pipelines in code, but architecturally it may not make sense to have that in this codebase.

siddhu032d commented 5 months ago

Use code, with Python in the mix.

gururise commented 5 months ago

Just being able to connect an assistant to a model endpoint would be a useful first step. That way, one could create an assistant representing each model, similar to how models are currently defined in the .env file.

It would also be useful if an assistant had the option to make a REST query for its answer. That way, one could create an assistant that connects to an endpoint that already has an LLM and RAG backend behind it. A rough sketch of what that configuration could look like follows below.
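
For illustration: chat-ui today reads model definitions from a MODELS JSON array in .env.local, so an assistant bound to its own endpoint could plausibly reuse the same shape. The entry below is a hypothetical sketch, not an existing option; the name and URL are placeholders:

# Sketch: models are declared in .env.local today; an assistant could then
# be bound to one entry by name. The name and URL below are placeholders.
MODELS=`[
  {
    "name": "my-rag-backend",
    "endpoints": [{ "url": "http://127.0.0.1:8080" }]
  }
]`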

Jrbiltmore commented 5 months ago
const fs = require('fs');
const path = require('path');

// Load model endpoints from .env configuration
const modelEndpoints = {};

const loadModelEndpoints = () => {
    const envPath = path.resolve(__dirname, '../config/models.env');
    const envVars = fs.readFileSync(envPath, 'utf8').split('\n');
    envVars.forEach((line) => {
        // Skip blank lines and comments, and tolerate '=' inside values
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith('#')) return;
        const separator = trimmed.indexOf('=');
        if (separator === -1) return;
        const key = trimmed.slice(0, separator).trim();
        const value = trimmed.slice(separator + 1).trim();
        modelEndpoints[key] = value;
    });
};

// Connect to model endpoint
const connectModelEndpoint = (modelIdentifier) => {
    loadModelEndpoints();
    const endpoint = modelEndpoints[modelIdentifier];
    if (!endpoint) {
        throw new Error(`Model endpoint for ${modelIdentifier} not found.`);
    }
    // Assume a function exists to actually establish the connection
    // This is a placeholder for demonstration purposes
    console.log(`Connecting to model endpoint at ${endpoint}`);
    // Return the endpoint for further actions, e.g., making REST queries
    return endpoint;
};

Function: makeRESTQuery(endpoint, payload)

File Location: /src/RestClient.js

const axios = require('axios');

const makeRESTQuery = async (endpoint, payload) => {
    try {
        const response = await axios.post(endpoint, payload);
        return response.data;
    } catch (error) {
        console.error(`Error making REST query to ${endpoint}: ${error}`);
        throw error;
    }
};

Function: authenticateUser(credentials)

File Location: /security/Authenticator.js

const authenticateUser = (credentials) => {
    // Placeholder for user authentication logic, e.g. verifying credentials
    // against a database. Never log raw credentials in production code.
    console.log(`Authenticating user: ${credentials.username}`);
    // For demonstration, assume authentication is successful
    return true;
};

Function: configureEnvironment()

File Location: /src/IntegrationLayer.js

const configureEnvironment = () => {
    loadModelEndpoints(); // Reuse the function from above to load model endpoints
    console.log('Environment configured with model endpoints.');
};

These functions provide the foundational logic required for connecting to model endpoints, making RESTful queries, authenticating users, and configuring the environment. They are placeholders and need to be expanded with actual logic for connecting to databases, handling HTTP requests, and implementing security measures.
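
As a usage sketch, the pieces above might be composed like this; the model key and payload shape are assumptions, not a chat-ui contract:

// Illustrative wiring of the placeholders above; 'TEXT_GENERATION_URL' is
// an assumed key in models.env, and the payload shape is invented here.
const endpoint = connectModelEndpoint('TEXT_GENERATION_URL');
makeRESTQuery(endpoint, { inputs: 'Hello, world!' })
    .then((data) => console.log('Model response:', data))
    .catch((error) => console.error('Query failed:', error.message));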

To further develop the system's capabilities, we'll introduce additional functionality that is crucial for a comprehensive solution: error handling, logging, and dynamic request handling, all essential for a robust system.

Enhanced Error Handling and Logging

Purpose: Improve system reliability and maintainability by implementing sophisticated error handling and logging mechanisms.

File Location: /src/Utility.js

const fs = require('fs');
const path = require('path');

const logError = (error) => {
    const logDir = path.resolve(__dirname, '../logs');
    // Ensure the log directory exists before appending to the log file
    fs.mkdirSync(logDir, { recursive: true });
    const timestamp = new Date().toISOString();
    fs.appendFileSync(path.join(logDir, 'error.log'), `[${timestamp}] Error: ${error}\n`);
};

const handleRESTError = (error, endpoint) => {
    logError(`Failed REST query to ${endpoint}: ${error.message}`);
    throw new Error(`REST query to ${endpoint} failed.`);
};

Dynamic Request Handling

Purpose: Provide a flexible mechanism to handle different types of RESTful requests (GET, POST, PUT, DELETE) to support various operations with model endpoints.

File Location: /src/RestClient.js

const axios = require('axios');

const sendRequest = async (method, endpoint, payload = {}) => {
    try {
        const options = {
            method,
            url: endpoint,
            data: payload,
            headers: { 'Content-Type': 'application/json' },
        };
        const response = await axios(options);
        return response.data;
    } catch (error) {
        handleRESTError(error, endpoint); // Utilize enhanced error handling
    }
};

// Extend makeRESTQuery to support different methods
const makeRESTQuery = async (method, endpoint, payload) => {
    return await sendRequest(method, endpoint, payload);
};
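
Usage could then look like the following, with placeholder endpoints and payloads:

// Placeholder URLs; any REST verb is now supported through one helper
const run = async () => {
    const models = await sendRequest('GET', 'http://127.0.0.1:8080/models');
    const answer = await makeRESTQuery('POST', 'http://127.0.0.1:8080/generate', {
        inputs: 'Summarize this thread.',
    });
    console.log(models, answer);
};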

User Session Management

Purpose: Manage user sessions for authentication and authorization, providing secure and user-friendly access to the system.

File Location: /security/SessionManager.js

const sessions = {};

const createUserSession = (userId, credentials) => {
    // Placeholder for creating a user session after successful authentication.
    // In real code, use a cryptographically random session ID, not a timestamp.
    const sessionId = `${userId}-${new Date().getTime()}`;
    sessions[sessionId] = { userId, timestamp: new Date() };
    console.log(`Session created: ${sessionId}`);
    return sessionId;
};

const validateUserSession = (sessionId) => {
    // Placeholder for session validation logic
    if (sessions[sessionId]) {
        console.log(`Session ${sessionId} is valid.`);
        return true;
    } else {
        console.log(`Session ${sessionId} is invalid or expired.`);
        return false;
    }
};
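
Combined with the earlier authenticateUser placeholder, a login flow might read as follows; the user and credentials are hypothetical:

// Hypothetical flow wiring the placeholders together
const credentials = { username: 'alice', password: 'secret' };
if (authenticateUser(credentials)) {
    const sessionId = createUserSession('alice', credentials);
    // Later, before serving a request on this session:
    if (!validateUserSession(sessionId)) {
        throw new Error('Session expired; please log in again.');
    }
}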

Configuring and Securing Environment Variables

Purpose: Securely manage environment variables, including API keys, database connections, and other sensitive information.

File Location: /config/SecureConfig.js

require('dotenv').config(); // Assuming dotenv package is used

const getEnvVar = (varName) => {
    const value = process.env[varName];
    if (!value) {
        throw new Error(`Environment variable ${varName} not found.`);
    }
    return value;
};
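
For example (the variable name is only illustrative, not a required chat-ui variable):

// 'HF_TOKEN' is an example name; substitute whatever your deployment defines
const hfToken = getEnvVar('HF_TOKEN');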

This advanced functionality encompasses error handling, dynamic request handling, user session management, and secure configuration management. These components are essential for building a secure, reliable, and user-friendly system for connecting to model endpoints and enhancing the capabilities of Hugging Face's chat-ui.
