This template is an example project for a simple Large Language Model (LLM) application built with React and Node. It is based on the React template app from nano-react-app, extended with a Node server that uses HuggingFace.js and LangChain.js to connect to supported large language models. Use this template to quickly build and run an LLM app like the one in the screenshot below:
To get started, follow the steps below:
Create a `.env` file by copying the `SAMPLE_env` file, and add the model store provider you'll be using (e.g. `HUGGING_FACE` or `OPEN_AI`) along with the API keys for the models you are going to use.
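As a rough illustration, the finished `.env` might look something like the sketch below; the variable names here are placeholders, so copy the exact keys from `SAMPLE_env`:

```
# Placeholder variable names: use the exact keys defined in SAMPLE_env
MODEL_PROVIDER=HUGGING_FACE
HUGGING_FACE_API_KEY=<your Hugging Face API key>
OPENAI_API_KEY=<your OpenAI API key>
```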
Install packages.
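The commands in the rest of this README use yarn, so installing dependencies with yarn is the natural choice; `npm install` works too if you prefer npm:

```
yarn install
```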
Run the backend server, which will start on port 3100 by default:

```
yarn start-server
```
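If you're curious what this server is doing, the sketch below shows the general shape of a Node route that uses LangChain.js to call a Hugging Face model. It is not the template's actual backend code: the Express dependency, the `/ask` route, the environment variable name, and the LangChain.js import path (which changes between versions) are all assumptions here.

```js
// server.js: a minimal sketch, not the template's actual backend.
// Assumes Express and @langchain/community are installed; older LangChain.js
// versions export HuggingFaceInference from "langchain/llms/hf" instead.
import express from "express";
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const app = express();
app.use(express.json());

// Hypothetical env var name: use whatever key SAMPLE_env defines.
const llm = new HuggingFaceInference({
  model: "gpt2",
  apiKey: process.env.HUGGING_FACE_API_KEY,
});

// Hypothetical route: the frontend POSTs a prompt, the server returns the completion.
app.post("/ask", async (req, res) => {
  const completion = await llm.invoke(req.body.prompt);
  res.json({ completion });
});

app.listen(3100, () => console.log("LLM backend listening on port 3100"));
```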
Run the frontend server, which will start on port 5173 by default:

```
yarn start
```
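Once both servers are running, the frontend talks to the backend over HTTP on port 3100. Purely as an illustration, a request against the hypothetical `/ask` route sketched above could look like this; the real route path and payload shape are defined by the template's server code, not by this snippet.

```js
// Hypothetical request from the React app to the backend's default port (3100).
const res = await fetch("http://localhost:3100/ask", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Write a haiku about template projects." }),
});
const { completion } = await res.json();
console.log(completion);
```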
Note: You can use the `--port` flag to specify a different port for the frontend server. To do this, you can either run `yarn start` with an additional flag, like so:

```
yarn start -- --port 3000
```
Or, edit the `start` script directly:

```
vite --port 3000
```
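In a Vite-based project the `start` script lives in the `scripts` section of `package.json`. Assuming the usual Vite script names (other entries, such as `start-server`, are omitted here), the edit would look something like this:

```json
{
  "scripts": {
    "start": "vite --port 3000",
    "build": "vite build",
    "preview": "vite preview"
  }
}
```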
Additional scripts are provided to prepare the app for production:

- `yarn build`: outputs a production build of the frontend app in the `dist` directory.
- `yarn preview`: runs the production build of the frontend app locally, on port 5173 by default (note: this will not work if you haven't generated the production build yet).

👽 Looking for more content? Check out our tutorials on running LLM apps 📚
Feel free to try out the template. Open an issue if there's something you'd like to see added or fixed, or open a pull request to contribute.