V-RECS

Project site: https://vrecs.deepvizlab.site

This repository contains the supplemental materials and a demo application for the research paper "V-RECS, a Low-Cost LLM4VIS Recommender with Explanations, Captioning and Suggestions" (arXiv:2406.15259). It includes the code, data, and resources needed to replicate the experiments and test the model locally. The demo application provides a hands-on way to interact with the model and explore its controlled text generation capabilities, and the repository as a whole is a resource for researchers and developers interested in implementing and applying the proposed approach.

You can access and test the model via its Hugging Face model repository.

Setup

1. Clone the Repository

First, clone the repository to your local machine:

git clone https://github.com/lucapodo/V-RECS.git
cd V-RECS

2. Create a Virtual Environment

Create a virtual environment to manage your dependencies. This helps to keep the project isolated from other projects on your machine.

For Windows:

python -m venv venv

For macOS/Linux:

python3 -m venv venv

3. Activate the Virtual Environment

Activate the virtual environment using the following commands:

For Windows:

venv\Scripts\activate

For macOS/Linux:

source venv/bin/activate

4. Install Dependencies

Install the required packages using the requirements.txt file:

pip install -r requirements.txt
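
The authoritative dependency list is the repository's requirements.txt. Purely as an illustration, a frontend like this one (a Streamlit app that calls a Hugging Face endpoint) would typically need at least packages along these lines:

streamlit
python-dotenv
requests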

5. Create a .env File

Create a .env file in the project's root directory and define the following environment variable:

HF_ENDPOINT=your_huggingface_endpoint_here

Deploy the model on a Hugging Face 🤗 Inference Endpoint and copy the endpoint URL into this variable.
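
If you want to verify the endpoint before running the app, a direct request along the following lines should work. This is a minimal sketch, assuming the endpoint accepts the standard Hugging Face inference payload ({"inputs": ...}) and that an access token is stored in a hypothetical HF_TOKEN variable in the same .env file (the README itself only defines HF_ENDPOINT):

import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads HF_ENDPOINT (and HF_TOKEN, if set) from the .env file

endpoint = os.environ["HF_ENDPOINT"]
# HF_TOKEN is an assumption for protected endpoints, not a variable the README defines.
headers = {"Authorization": f"Bearer {os.environ.get('HF_TOKEN', '')}"}

payload = {"inputs": "Suggest a visualization for monthly sales by region"}
response = requests.post(endpoint, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())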

6. Run Streamlit frontend

Finally, open a terminal in the V-RECS folder and run:

streamlit run app.py

This launches the frontend, where you can test the model.
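
The actual interface lives in the repository's app.py. Purely to illustrate the flow (prompt in, endpoint call, response out), a minimal Streamlit frontend might look like the sketch below; it reuses the HF_ENDPOINT variable from step 5 and is not the project's real code:

import os

import requests
import streamlit as st
from dotenv import load_dotenv

load_dotenv()  # pick up HF_ENDPOINT from the .env file

st.title("V-RECS demo (illustrative sketch)")
query = st.text_input("Describe the data or chart you need")

if st.button("Recommend") and query:
    # Forward the user's prompt to the model deployed on the inference endpoint.
    response = requests.post(
        os.environ["HF_ENDPOINT"],
        json={"inputs": query},
        timeout=60,
    )
    response.raise_for_status()
    st.json(response.json())  # raw model output; the real app parses and formats this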

Project Structure