django / channels

Developer-friendly asynchrony for Django
https://channels.readthedocs.io

Small messages are getting concatenated #2067

Closed. PositivPy closed this issue 1 month ago.

PositivPy commented 10 months ago

The following sends all the small message chunks in one go. I want them sent separately.

consumers.py

import openai

from channels.generic.websocket import AsyncWebsocketConsumer


class AIConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print('Connected')
        await self.accept()

    async def disconnect(self, close_code):
        pass

    async def receive(self, text_data=None, bytes_data=None):
        # Set your OpenAI API key and custom base URL
        openai.api_key = "YOUR_OPENAI_API_KEY"
        custom_base_url = "http://70672e1fb64c2dc61c3f4821d103339f.serveo.net/v1"
        openai.api_base = custom_base_url

        # Example prompt to send to your self-hosted model
        prompt = text_data  # Received prompt from WebSocket

        # Make a streaming request using the custom URL
        response_stream = openai.Completion.create(
            model="text-davinci-003",  # Choose the engine you've hosted
            prompt=prompt,
            max_tokens=50,  # Adjust the token length of each response
            stream=True  # Enable streaming
        )

        # Stream and handle the responses
        for response in response_stream:
            answer = response.choices[0].text.strip()
            await self.send(text_data=answer)
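
A side note on the snippet above: openai.Completion.create(stream=True) (legacy openai < 1.0 client) returns a synchronous generator, so the for loop blocks the event loop between sends. A minimal sketch of consuming it without blocking, so each chunk can be awaited and sent as its own frame (the iter_in_executor helper is hypothetical, not part of Channels):

import asyncio


async def iter_in_executor(blocking_iter):
    # Pull items from a synchronous iterator on a thread executor,
    # yielding control back to the event loop between items.
    loop = asyncio.get_running_loop()
    it = iter(blocking_iter)
    sentinel = object()
    while True:
        item = await loop.run_in_executor(None, next, it, sentinel)
        if item is sentinel:
            break
        yield item

# Inside receive(), replacing the plain for loop (sketch):
#     async for response in iter_in_executor(response_stream):
#         await self.send(text_data=response.choices[0].text)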
carltongibson commented 10 months ago

Assuming it's working with curl using the --no-buffer flag, your web server is likely buffering the response. You'll need to look into that. (It's not something I can particularly help you with here.)
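
One way to narrow down where the buffering happens, independent of curl, is to hit the model server's completions endpoint directly with a streaming HTTP client and log when each chunk arrives. A diagnostic sketch (the URL and payload mirror the snippet above and are assumptions about the deployment):

import time

import requests

# Assumed endpoint, built from the base URL in the issue; adjust as needed.
URL = "http://70672e1fb64c2dc61c3f4821d103339f.serveo.net/v1/completions"

payload = {
    "model": "text-davinci-003",
    "prompt": "Hello",
    "max_tokens": 50,
    "stream": True,
}

start = time.monotonic()
with requests.post(URL, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            # If every line prints at nearly the same timestamp, something
            # between you and the model server is buffering the stream.
            print(f"{time.monotonic() - start:6.2f}s  {line[:60]!r}")

If all lines arrive at effectively the same instant, the proxy or tunnel in front of the model server is buffering; for nginx specifically, proxy_buffering off; (or an X-Accel-Buffering: no response header) is the usual fix.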

baravkareknath commented 4 months ago

Hi PositivPy, regarding your issue, why not use langchain.text_splitter to split the text with a chunk_size?

You can refer to the example below:

from langchain.text_splitter import CharacterTextSplitter


def split_text(text_data):
    splitter = CharacterTextSplitter(
        chunk_size=300,
        chunk_overlap=50
    )
    # split_text() takes a raw string; split_documents() is for Document objects.
    chunks = splitter.split_text(text_data)
    return chunks

I hope it is useful for you. I tried it, and it works fine.
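
For what it's worth, each chunk from the splitter could then go out as its own WebSocket message. A sketch of a receive() override using the split_text() helper above (placement inside the issue's AIConsumer is hypothetical):

async def receive(self, text_data=None, bytes_data=None):
    # Split the incoming text and send every chunk as a separate frame.
    for chunk in split_text(text_data):
        await self.send(text_data=chunk)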