This repository contains all the supplemental materials and a demo application related to the research paper "V-RECS, a Low-Cost LLM4VIS Recommender with Explanations, Captioning and Suggestions" (arXiv:2406.15259). It includes code, data, and resources necessary for replicating the experiments and testing the model locally. The demo application provides a hands-on way to interact with the model, allowing users to test its capabilities in generating controlled text outputs. The repository serves as a comprehensive resource for researchers and developers interested in exploring the implementation and potential applications of the proposed approach.
You can test and access the model in its Hugging Face model repository.
First, clone the repository to your local machine:

```bash
git clone https://github.com/lucapodo/V-RECS.git
cd V-RECS
```
Create a virtual environment to manage your dependencies. This helps to keep the project isolated from other projects on your machine.

For Windows:

```bash
python -m venv venv
```

For macOS/Linux:

```bash
python3 -m venv venv
```
Activate the virtual environment using the following commands:

For Windows:

```bash
venv\Scripts\activate
```

For macOS/Linux:

```bash
source venv/bin/activate
```
Install the required packages using the `requirements.txt` file:

```bash
pip install -r requirements.txt
```
Create a `.env` file in the project's root directory to specify the following environment variable:

```
HF_ENDPOINT=your_huggingface_endpoint_here
```

Deploy the model on a Hugging Face 🤗 Inference Endpoint and copy its URL into the `HF_ENDPOINT` variable in the `.env` file.
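Once the endpoint is deployed, you can sanity-check it before launching the app. The sketch below is illustrative and not part of the repository: it assumes a standard text-generation Inference Endpoint that accepts an `{"inputs": ...}` JSON payload, and a hypothetical `HF_TOKEN` variable (not mentioned in this README) holding a Hugging Face access token if your endpoint is protected.

```python
# sanity_check.py -- hypothetical helper, not part of the repository.
# Assumes a text-generation Inference Endpoint behind HF_ENDPOINT and an
# optional HF_TOKEN access token, both loaded from the .env file.
import os

import requests
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads HF_ENDPOINT (and HF_TOKEN, if present) from .env

ENDPOINT = os.environ["HF_ENDPOINT"]
TOKEN = os.getenv("HF_TOKEN", "")  # assumption: the endpoint may require auth


def query(prompt: str):
    """Send a single prompt to the deployed endpoint and return the JSON reply."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"} if TOKEN else {},
        json={"inputs": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(query("Recommend a visualization for monthly sales by region."))
```

If this prints a JSON response rather than an HTTP error, the endpoint is reachable and the `.env` configuration is correct.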
Finally, open a new terminal in the V-RECS folder and run:

```bash
streamlit run app.py
```

This will launch the frontend so you can test the model.
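For orientation only, here is a minimal sketch of how a Streamlit frontend like this one might wire a user prompt to the deployed endpoint. It reuses the hypothetical `query` helper from the sketch above; the repository's own `app.py` is the authoritative implementation.

```python
# minimal_app.py -- illustrative sketch only; the repository's app.py is the real frontend.
import streamlit as st

from sanity_check import query  # hypothetical helper from the sketch above

st.title("V-RECS demo (sketch)")

prompt = st.text_input("Describe the data or analysis you need a chart for:")
if st.button("Generate recommendation") and prompt:
    with st.spinner("Querying the model..."):
        result = query(prompt)
    st.json(result)  # show the raw endpoint response
```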
- `app.py`: Main application file.
- `src/`: Source code, including pages, components, and utilities.
- `assets/`: Static assets like images and styles.
- `resources/`: Additional resources, such as supplemental paper materials and evaluation samples.