nlkitai / nlux

The π—£π—Όπ˜„π—²π—Ώπ—³π˜‚π—Ή Conversational AI JavaScript Library πŸ’¬ β€”Β UI for any LLM, supporting LangChain / HuggingFace / Vercel AI, and more 🧑 React, Next.js, and plain JavaScript ⭐️
https://docs.nlkit.com/nlux

Custom renderer investigation #73

Open Β· ScreamZ opened this issue 3 weeks ago

ScreamZ commented 3 weeks ago

[!note] There is a lot of information below, sorry if it's a bit disordered. I try to update the issue every time it gets clearer in my mind, and I'll try to split it up too.

I'm currently using the streaming custom adapter (to get the writing effect) while receiving a single (full) text message from my API.

Summary

Code sample

Click to see my code

```tsx
import { StreamResponseComponentProps } from "@nlux/react";
import React from "react";

export function MyResponseResponseRenderer(props: StreamResponseComponentProps) {
  console.log(props);
  return (
    // nlux appends the streamed response into this element via containerRef
    <div ref={props.containerRef} />
  );
}
```

```ts
import type { ChatAdapter, StreamingAdapterObserver } from "@nlux/core";
import { sendMessage } from "../server/actions/sendMessage";
import { parseResponseMessageParsing } from "../utils/message-parsers";

export const MyAdapterBuilder = (characterID: string, conversationID: string): ChatAdapter => ({
  streamText: async (message: string, observer: StreamingAdapterObserver) => {
    const result = await sendMessage({
      character: characterID,
      conversation: conversationID,
      userInputMessage: message,
    });
    const parseResult = parseResponseMessageParsing(result.data);

    if (!result.serverError && parseResult.status === "success") {
      observer.next(parseResult.data.message);
      observer.complete();
    } else {
      observer.error(new Error(result.serverError || "Could not generate message"));
    }
  },
});
```
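As an aside, since the API returns one full message, the same writing effect could also be produced by chunking it in the adapter before handing it to the observer. A minimal sketch, assuming a hypothetical `fetchFullMessage` helper and an arbitrary 30 ms delay (neither is part of my actual code):

```ts
import type { ChatAdapter, StreamingAdapterObserver } from "@nlux/core";

// Sketch only: simulate streaming by emitting a full message word by word.
// fetchFullMessage is a placeholder for whatever returns the complete text.
export const chunkedAdapterBuilder = (
  fetchFullMessage: (message: string) => Promise<string>,
): ChatAdapter => ({
  streamText: async (message: string, observer: StreamingAdapterObserver) => {
    try {
      const fullText = await fetchFullMessage(message);
      for (const word of fullText.split(" ")) {
        observer.next(word + " ");
        // Small pause so the UI animates the text like a real stream
        await new Promise((resolve) => setTimeout(resolve, 30));
      }
      observer.complete();
    } catch (err) {
      observer.error(err instanceof Error ? err : new Error(String(err)));
    }
  },
});
```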

Attachments

Typing issue

[!note] See https://github.com/nluxai/nlux/issues/75

See the screenshot above; the props type in question is:

```ts
type StreamResponseComponentProps<AiMsg> = {
    uid: string;
    dataTransferMode: 'stream';
    status: 'streaming' | 'complete';
    content?: AiMsg[];
    serverResponse?: unknown[];
    containerRef: RefObject<never>;
};
```
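For illustration, supplying the `AiMsg` generic looks like this; a minimal sketch assuming `AiMsg` is `string` (the component name is invented):

```tsx
import type { StreamResponseComponentProps } from "@nlux/react";

// Sketch, assuming AiMsg = string: content is then string[] | undefined,
// and joining it yields the text received so far.
export function StringStreamRenderer(props: StreamResponseComponentProps<string>) {
  return <div>{(props.content ?? []).join("")}</div>;
}
```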

Custom response renderer.

What can we do?

[!note] See https://github.com/nluxai/nlux/discussions/74

[!TIP] Here are some observations and suggestions for what we might do; feel free to share yours.

~~1. It's okay that server/initialConversation content is considered `dataTransferMode: "batch"`; that way we can define different render code for server/initialConversation. But we might need the default parser, otherwise messages are rendered as plain text instead of markdown. Or should I use my own renderer for MD?
2. For client/adapter messages with streaming, what can we do? Maybe exposing primitives and/or some components for streaming messages, instead of using `containerRef`, would be better.~~

πŸ’‘ Briefly:

- Expose a `DefaultStreamingRenderer` component, instead of `containerRef`, to handle the streaming mode when someone uses a custom renderer (more flexibility; sketched after this list). This component would receive a `content` prop, which should be the same for streaming/batch.
- For the client side (after receiving a message from the adapter), `containerRef` is defined but `content` is empty.
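To make the first suggestion concrete, here is a hypothetical shape for such an API; `DefaultStreamingRenderer` does not exist in nlux today, this is only what the proposal could look like:

```tsx
import type { ReactElement } from "react";
import type { StreamResponseComponentProps } from "@nlux/react";

// Hypothetical component the library could expose; it would own the
// typing animation while the custom renderer keeps control of the layout.
declare function DefaultStreamingRenderer(props: { content: string[] }): ReactElement;

function MyCustomRenderer(props: StreamResponseComponentProps<string>) {
  return (
    <div className="my-bubble">
      <DefaultStreamingRenderer content={props.content ?? []} />
    </div>
  );
}
```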

status: "streaming" never go complete

When using the code above, for streaming messages I never see `status` move from `"streaming"` to `"complete"`.

salmenus commented 2 weeks ago

The issue related to the `status` property passed to custom renderers not being updated is now fixed βœ…

Example here: https://codesandbox.io/p/sandbox/custom-renderer-status-9tt6zt?file=%2FApp.tsx
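With that fix, a custom renderer can key off `status` directly; a minimal sketch, assuming string messages and an invented `cursor` class:

```tsx
import type { StreamResponseComponentProps } from "@nlux/react";

// Sketch: show a cursor while status is 'streaming', drop it on 'complete'.
function StatusAwareRenderer(props: StreamResponseComponentProps<string>) {
  return (
    <div>
      {(props.content ?? []).join("")}
      {props.status === "streaming" && <span className="cursor">β–Œ</span>}
    </div>
  );
}
```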

salmenus commented 2 weeks ago

- Issue related to the content property fixed βœ…
- content is now always an array
- Fixed in 2.5
- Give it a try and let me know @ScreamZ

ScreamZ commented 2 weeks ago

Looks good so far, now we can implement any renderer we want.

salmenus commented 2 weeks ago

One more item is still pending: the rendering difference between environments.