Chat UIs are becoming increasingly complex, often encompassing an entire front-end project along with deployment infrastructure.
This repository takes a minimalist approach: the entire front-end UI is built from a single HTML file.
By simplifying the structure and core functions, developers can quickly set up and experiment with a functional chatbot, in line with a slimmed-down project design philosophy.
- Supports OpenAI-format requests, enabling compatibility with various backends such as HuggingFace Text Generation Inference (TGI), vLLM, etc.
- Automatically handles multiple response formats without additional configuration, including standard OpenAI response formats, Cloudflare AI response formats, and plain-text responses
- Supports arbitrary backend endpoints through custom configuration, providing a universal frontend chatbot for any project
- Supports downloading chat history, interrupting the current generation, and repeating the previous generation to quickly test backend inference capabilities
- Supports queries with image inputs when using multimodal vision models
- Supports toggling between original-format and Markdown display
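As a rough illustration of the OpenAI-format requests mentioned above, the sketch below builds a chat-completions request body in Python. The endpoint URL and model name are placeholders of my choosing, not project defaults; any OpenAI-compatible backend should accept a body of this shape.

```python
import json

# Placeholder endpoint: substitute your own OpenAI-compatible backend.
endpoint = "https://api.openai.com/v1/chat/completions"

# Minimal OpenAI-format chat-completions payload.
payload = {
    "model": "my-model",  # placeholder model id, not a repository default
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,  # streamed responses arrive as incremental chunks
}

# Serialize the body as it would be POSTed to the endpoint.
body = json.dumps(payload)
print(body)
```

Backends such as TGI and vLLM expose this same request shape on their OpenAI-compatible routes, which is what makes a single frontend work across them.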
The demo uses Llama-3.2 by default; image upload is only supported for vision models.
```shell
cd /path/to/your/directory
python3 -m http.server 8000
```

Then, open your browser and access http://localhost:8000
```shell
docker run -p 8080:8080 -d aiql/chat-ui
```
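If you prefer Docker Compose, an equivalent sketch is shown below (the service name `chat-ui` is my choice here, not something defined by the repository):

```yaml
services:
  chat-ui:
    image: aiql/chat-ui
    ports:
      - "8080:8080"  # same host:container mapping as the docker run command above
```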
By default, the chatbot uses the OpenAI ChatGPT API format.
You can insert your OpenAI API Key and change the Endpoint in the configuration to use an API from any other vendor.
You can also download the config template from the examples, insert your API Key, and use it for quick configuration.
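As a rough sketch only, such a config template might look like the following. The field names `endpoint`, `apiKey`, and `model` are illustrative assumptions, not the repository's actual schema; check the downloaded template for the real keys:

```json
{
  "endpoint": "https://api.openai.com/v1/chat/completions",
  "apiKey": "sk-...",
  "model": "gpt-4o-mini"
}
```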
If you're experiencing issues opening the page and a simple refresh doesn't resolve them, take the following steps:

- Click the Refresh icon on the upper right of the Interface Configuration
- Click the Reset All Config icon in the Network section

Introduce the image as a sidecar container:
```yaml
# Fragment of a Deployment pod template: merge the chat-ui container
# into your existing workload's spec.
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: chat-ui
          image: aiql/chat-ui
          ports:
            - containerPort: 8080
```
Add a service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: chat-ui-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
```
You can access the service port directly, or add an Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: chat-ui.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: chat-ui-service
                port:
                  number: 8080
```