LAION-AI / Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
https://open-assistant.io
Apache License 2.0
36.99k stars · 3.23k forks

What do I do after installing Open Assistant from GitHub? #3676

Closed CT1800098 closed 1 year ago

CT1800098 commented 1 year ago

I have no idea what I should do to get the local version up and running after installing it on my PC.

drShivashankar commented 1 year ago

What do you want to work on? Backend, Inference, or Frontend?

CT1800098 commented 1 year ago

What is the difference between them?

CT1800098 commented 1 year ago

> What do you want to work on? Backend, Inference, or Frontend?

I sort of want to do what the web version does.

drShivashankar commented 1 year ago

The docs detail how you can set up and start the backend on your local PC:

1. Create a venv.
2. Clone the repo and install the required libraries as guided in the backend README file.
3. Make sure you have Postgres and Redis running.
4. Start the backend server using the script inside the backend directory (again, instructions are in the README file of the backend directory).

Then check the API documentation on your local machine at localhost:8080/docs
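Before starting the backend server, it can help to confirm that Postgres and Redis are actually reachable. A minimal sketch, assuming the default ports 5432 and 6379 (adjust them if your local setup differs):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports are an assumption; change them if your services
# listen elsewhere (e.g. inside docker compose).
services = {"postgres": 5432, "redis": 6379}
for name, port in services.items():
    status = "up" if port_open("localhost", port) else "down"
    print(f"{name}: {status}")
```

If either service reports "down", start it before launching the backend; otherwise the server will fail on its first database or cache connection.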

CT1800098 commented 1 year ago

> The docs detail how you can set up and start the backend on your local PC:
>
> 1. Create a venv.
> 2. Clone the repo and install the required libraries as guided in the backend README file.
> 3. Make sure you have Postgres and Redis running.
> 4. Start the backend server using the script inside the backend directory (again, instructions are in the README file of the backend directory).
>
> Then check the API documentation on your local machine at localhost:8080/docs

So I just follow the instructions in the README in the backend directory?

stefangrotz commented 1 year ago

A small remark: unless you have a very powerful GPU, you will only be able to run the website locally, but not the chat (= Inference).

drShivashankar commented 1 year ago

> What is the difference between them?

Backend is for working on the Open Assistant web backend. Frontend is for improving the UI of the same web app. Inference is for working on the LLMs.

CT1800098 commented 1 year ago

> A small remark: unless you have a very powerful GPU, you will only be able to run the website locally, but not the chat (= Inference).

How much VRAM do I need?

CT1800098 commented 1 year ago

> > What is the difference between them?
>
> Backend is for working on the Open Assistant web backend. Frontend is for improving the UI of the same web app. Inference is for working on the LLMs.

What is an LLM?

drShivashankar commented 1 year ago

> > > What is the difference between them?
> >
> > Backend is for working on the Open Assistant web backend. Frontend is for improving the UI of the same web app. Inference is for working on the LLMs.
>
> What is an LLM?

The chat itself. LLM stands for large language model; they have so many parameters that you need more hardware, like @stefangrotz said.

CT1800098 commented 1 year ago

> > > > What is the difference between them?
> > >
> > > Backend is for working on the Open Assistant web backend. Frontend is for improving the UI of the same web app. Inference is for working on the LLMs.
> >
> > What is an LLM?
>
> The chat itself. LLM stands for large language model; they have so many parameters that you need more hardware, like @stefangrotz said.

How much VRAM do I need to run one?

stefangrotz commented 1 year ago

For the 70B model you'll need 48 GB of VRAM.

If you want to run models locally, I recommend https://gpt4all.io/. It's for language models that are optimized for consumer hardware, and it is easy to use.
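The numbers above follow from a common rule of thumb: weights-only memory is roughly parameter count × bytes per parameter, and real usage adds overhead for activations and the KV cache. A minimal sketch (the precisions shown are standard quantization levels, not anything specific to Open-Assistant; the ~48 GB figure for 70B corresponds to a quantized model plus runtime overhead):

```python
def vram_estimate_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Rough weights-only VRAM estimate in GB.

    Real usage is higher: activations, KV cache, and framework
    overhead are not included in this back-of-the-envelope number.
    """
    return n_params_billion * bytes_per_param

for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = vram_estimate_gb(70, bpp)
    print(f"70B @ {label}: ~{gb:.0f} GB for the weights alone")
# prints ~140, ~70, and ~35 GB respectively
```

This is why tools like gpt4all target much smaller, heavily quantized models: a 7B model at 4-bit needs only a few GB and fits on typical consumer hardware.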

CT1800098 commented 1 year ago

> For the 70B model you'll need 48 GB of VRAM.
>
> If you want to run models locally, I recommend https://gpt4all.io/. It's for language models that are optimized for consumer hardware.

Is it censored or uncensored?