Closed. PositivPy closed this question 1 month ago.
Assuming it works with curl using the --no-buffer flag, your web server is likely buffering the response. You'll need to look into that (it's not something I can help you with here, particularly).
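If the web server in front of the app is nginx (an assumption; the comment above doesn't say which server is in use), response buffering is typically disabled per location, for example:

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;  # hypothetical upstream address
    proxy_buffering off;               # pass streamed chunks through as they are produced
}
```

Other servers and reverse proxies have equivalent settings; the point is that any hop between the app and the client can re-buffer an otherwise streaming response.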
Hi PositivPy, regarding your issue: why haven't you used 'langchain.text_splitter' to split the text with a 'chunk_size'?
You can refer to the example below:

```python
from langchain.text_splitter import CharacterTextSplitter

def split_text(text_data):
    splitter = CharacterTextSplitter(
        chunk_size=300,
        chunk_overlap=50
    )
    # NOTE: split_documents expects a list of Document objects;
    # if text_data is a raw string, use splitter.split_text(text_data) instead.
    chunks = splitter.split_documents(text_data)
    return chunks
```
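If pulling in LangChain isn't an option, the effect of chunk_size and chunk_overlap can be approximated in plain Python. This is a minimal sketch, not LangChain's actual algorithm (the real splitter also respects separators); the function name is hypothetical:

```python
def split_text_simple(text, chunk_size=300, chunk_overlap=50):
    """Split text into fixed-size chunks where consecutive chunks
    share chunk_overlap characters (simplified stand-in for
    CharacterTextSplitter)."""
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text_simple("abcdefghij" * 100)  # 1000 characters
print(len(chunks))  # → 4
```

Each chunk is at most 300 characters, and the last 50 characters of one chunk repeat as the first 50 of the next.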
I hope it is useful for you. I tried it, and it works fine.
The following sends all the small message chunks in one go. I want them sent separately.
consumers.py
```python
import openai
import asyncio

from channels.generic.websocket import AsyncWebsocketConsumer


class AIConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print('Connected')
        await self.accept()
```
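The usual pattern for sending chunks separately in a consumer like this is to `await self.send(...)` once per streamed chunk rather than joining them first. A dependency-free asyncio sketch of that dispatch pattern (here `fake_stream` is a hypothetical stand-in for the OpenAI streaming iterator, and `record` stands in for `self.send`):

```python
import asyncio

async def fake_stream():
    # Hypothetical stand-in for the OpenAI streaming iterator.
    for token in ["Hel", "lo ", "wor", "ld"]:
        await asyncio.sleep(0)  # yield control, as a real network read would
        yield token

async def relay(stream, send):
    # Forward every chunk the moment it arrives instead of
    # accumulating them into one message.
    async for chunk in stream:
        await send(chunk)

sent = []

async def record(chunk):
    sent.append(chunk)

asyncio.run(relay(fake_stream(), record))
print(sent)  # four separate sends, not one joined message
```

Inside the consumer, the `async for` loop would live in `receive()`, with `self.send(text_data=chunk)` in place of `record`.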