Note to LLM agents working on this with me: you likely cannot run commands such as `npm i` yourself. If you need more dependencies, list them in your PR description and I will add them.
A library of components for building chat interfaces. This is just the beginning.
Goals:
This project uses TypeScript for type safety and a better developer experience. Ensure all new files use the `.ts` or `.tsx` extension as appropriate.
A variety of React components necessary to build a chat interface. All components should be flexible and customizable.
Code blocks should highlight as they stream: once a fence such as `` ```python `` is opened, everything that follows is streamed with syntax highlighting until the fence is closed with `` ``` ``. The same applies to LaTeX.

Example entry point that provides a global message configuration:

```tsx
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import { MessageConfigProvider } from './components/MessageConfigContext';

const globalConfig = {
  buttons: {
    copy: true,
    share: false,
    delete: true,
    edit: true,
  },
  // Add other global configuration options here
};

ReactDOM.createRoot(document.getElementById('root')!).render(
  <React.StrictMode>
    <MessageConfigProvider config={globalConfig}>
      <App />
    </MessageConfigProvider>
  </React.StrictMode>
);
```
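The streamed-highlighting behavior described above requires the renderer to track open code fences across chunks. Here is a minimal sketch of that idea; the `splitStreamedText` name and `Segment` shape are illustrative, not part of the library's actual API:

```typescript
// Splits streamed text into prose and fenced code segments so a renderer
// knows which spans to syntax-highlight and in which language.
type Segment = { text: string; lang: string | null };

export function splitStreamedText(text: string): Segment[] {
  const segments: Segment[] = [];
  let currentLang: string | null = null; // null = outside a code fence
  let buffer: string[] = [];

  const flush = () => {
    if (buffer.length > 0) {
      segments.push({ text: buffer.join('\n'), lang: currentLang });
      buffer = [];
    }
  };

  for (const line of text.split('\n')) {
    const fence = line.match(/^```(\w*)\s*$/);
    if (fence) {
      flush();
      // An opening fence enters a code block; a closing fence leaves it.
      currentLang = currentLang === null ? (fence[1] || 'text') : null;
    } else {
      buffer.push(line);
    }
  }
  flush(); // An unclosed fence still yields what has streamed so far.
  return segments;
}
```

Because an unclosed fence at the end of the stream is treated as still open, partial code gets highlighted while it streams, matching the behavior described above.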
Example usage of the `Message` component:

```tsx
<Message
  content="Hello, world!"
  author="User"
  timestamp={new Date().toISOString()}
  buttons={{
    copy: true,
    share: true,
    delete: false,
    edit: true,
  }}
  onCopy={() => console.log('Copy clicked')}
  onShare={() => console.log('Share clicked')}
  onEdit={() => console.log('Edit clicked')}
/>
```
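Since a message can receive a `buttons` prop while `MessageConfigProvider` supplies global defaults, the two presumably need to be merged. A sketch of one plausible resolution, assuming per-message settings override global ones (the `resolveButtons` helper is hypothetical):

```typescript
// Hypothetical merge of global and per-message button settings.
// Assumption: message-level settings override the global config.
type ButtonConfig = {
  copy?: boolean;
  share?: boolean;
  delete?: boolean;
  edit?: boolean;
};

export function resolveButtons(
  globalButtons: ButtonConfig,
  messageButtons?: ButtonConfig
): Required<ButtonConfig> {
  const defaults: Required<ButtonConfig> = {
    copy: false,
    share: false,
    delete: false,
    edit: false,
  };
  // Later spreads win: global settings override defaults,
  // and message-level settings override both.
  return { ...defaults, ...globalButtons, ...messageButtons };
}
```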
LLM chats are unique from human chats in a number of fundamental ways:

- The LLM responds immediately, whereas a human may take time to respond. In general, timing is an important part of understanding human chats that is largely absent from LLM chats.
- It makes sense for the human to be able to edit both their own messages and the LLM's messages.
- Time travel makes more sense. Once you send a message to a human, they may have seen it, and it then becomes an important part of understanding the causal flow of the conversation.
- Conversations tend to be goal-oriented and include artifacts like code blocks and images, so sharing is more likely to be valuable.
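Supporting both editing and time travel suggests storing the conversation as a tree of versions rather than a flat list: an edit forks a branch, and time travel is just selecting a different leaf. A minimal sketch under that assumption (the names here are illustrative, not the library's actual data model):

```typescript
// A conversation stored as a tree: editing a message creates a sibling
// branch, and "time travel" selects a different leaf to render.
interface MessageNode {
  id: string;
  author: 'user' | 'assistant';
  content: string;
  parentId: string | null;
  children: string[];
}

export class ConversationTree {
  private nodes = new Map<string, MessageNode>();
  private nextId = 0;

  append(parentId: string | null, author: MessageNode['author'], content: string): string {
    const id = String(this.nextId++);
    this.nodes.set(id, { id, author, content, parentId, children: [] });
    if (parentId !== null) this.nodes.get(parentId)!.children.push(id);
    return id;
  }

  // Editing forks a new branch at the edited message's parent,
  // preserving the original version for time travel.
  edit(id: string, content: string): string {
    const node = this.nodes.get(id)!;
    return this.append(node.parentId, node.author, content);
  }

  // The linear chat shown in the UI is the path from the root to a leaf.
  pathTo(leafId: string): MessageNode[] {
    const path: MessageNode[] = [];
    for (let cur: string | null = leafId; cur !== null; ) {
      const node: MessageNode = this.nodes.get(cur)!;
      path.unshift(node);
      cur = node.parentId;
    }
    return path;
  }
}
```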
- A Chat component
- A Chats component
We need a server that relays requests to model providers and backs up each call and response to the database.
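The relay-and-backup flow could look roughly like this. Since neither the provider client nor the database layer is specified yet, both are injected as placeholders; `relay`, `ChatRequest`, and the record shape are assumptions, not an existing API:

```typescript
// Relays a chat request to a model provider and persists both the
// request and the response. The provider call and storage are injected
// so the relay stays transport- and database-agnostic.
type ChatRequest = {
  model: string;
  messages: { role: string; content: string }[];
};

export async function relay(
  req: ChatRequest,
  callProvider: (req: ChatRequest) => Promise<string>,
  save: (record: { req: ChatRequest; res: string; at: string }) => Promise<void>
): Promise<string> {
  const res = await callProvider(req);
  // Back up the call and its response before returning to the client.
  await save({ req, res, at: new Date().toISOString() });
  return res;
}
```

Injecting `callProvider` and `save` keeps the relay easy to test and leaves the choice of provider SDK and database open.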
Misc:
To run tests locally, use the following command:

```shell
npm test
```

This will also generate a coverage report in the `coverage` directory.
To run the server, follow these steps:

```shell
npm install
npm run start
```

The server will be running on http://localhost:3000.
To run the test website, follow these steps:

```shell
npm install
npm run dev
```
This test website serves as a living documentation of our components, making it easier to visualize and interact with them as we develop.