Paver simplifies the setup of the Continue extension to integrate IBM's Granite models as your code assistant in Visual Studio Code, using Ollama as the runtime environment.
By leveraging Granite models and open-source components such as Ollama and Continue, you can write, generate, explain, or document code with full control over your data, ensuring it stays private and secure on your machine.
This project features an intuitive UI, designed to simplify the installation and management of Ollama and Granite models. The first time the extension starts, a setup wizard is automatically launched to guide you through the installation process.
You can later open the setup wizard anytime from the command palette by executing the "Paver: Setup Granite as code assistant" command.
Open Visual Studio Code, navigate to the Extensions tab in the left sidebar, search for "Paver," and click "Install."
The Continue.dev extension will be added automatically as a dependency if it is not already installed. If you installed Paver manually, you may also need to install the Continue extension separately.
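If you prefer the command line, extensions can also be installed with the `code` CLI that ships with Visual Studio Code. The marketplace identifier `redhat.vscode-paver` below is an assumption; check the extension's Marketplace page for the exact ID:

```shell
# Install Paver from the terminal (extension ID assumed; verify on the Marketplace page)
code --install-extension redhat.vscode-paver
```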
Once the extension is running, the setup wizard will prompt you to install Ollama.
The following Ollama installation options are available:
Once Ollama is installed, the page will refresh automatically. Depending on the security settings of your platform, you may need to start Ollama manually the first time.
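If the page does not refresh, you can verify from a terminal that Ollama is installed and that its local server is reachable. This sketch assumes Ollama's default port, 11434:

```shell
# Confirm the CLI is on your PATH
ollama --version

# Start the server manually if it is not already running
ollama serve &

# The HTTP API answers on port 11434 by default
curl http://localhost:11434/api/version
```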
Select the Granite model(s) you wish to install and follow the on-screen instructions to complete the setup.
After the models are pulled into Ollama, Continue will be configured automatically to use them, and the Continue chat view will open, allowing you to interact with the models via the UI or tab completion.
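Although Paver pulls the models for you, you can also exercise a Granite model directly from the terminal. The `granite-code:8b` tag below is an assumption; use whichever tag the wizard installed (visible via `ollama list`):

```shell
# Pull a Granite code model manually (tag assumed; match what the setup wizard installed)
ollama pull granite-code:8b

# Chat with the model outside of VS Code
ollama run granite-code:8b "Explain what a mutex is in two sentences."
```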
The Granite models are optimized for enterprise software development workflows, performing well across various coding tasks (e.g., code generation, fixing, and explanation). They are versatile "all-around" code models.
Granite comes in various sizes to fit your workstation's resources. Generally, larger models yield better results but require more disk space, memory, and processing power.
Recommendation: the 2B model should work on most machines. Use the 8B version if you are running on a high-end computer.
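To see how much disk space the models you have installed actually take, you can ask Ollama directly:

```shell
# List locally installed models along with their size on disk
ollama list
```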
For more details, refer to Granite Models.
Many corporations have privacy regulations that prohibit sending internal code or data to third-party services. Running LLMs locally allows you to sidestep these restrictions and ensures no sensitive information is sent to a remote service. Ollama is one of the simplest and most popular open-source solutions for running LLMs locally.
Continue is the leading open-source AI code assistant. It lets you connect any models and contexts to build custom autocomplete and chat experiences inside VS Code and JetBrains IDEs.
For more details, refer to continue.dev.
Please check our Guidelines to contribute to our project.
This project is licensed under Apache 2.0. See LICENSE for more information.
With your approval, the Paver extension collects anonymous
usage data and sends it to Red Hat servers to help improve our
products and services. Read our
privacy statement
to learn more. This extension respects the redhat.telemetry.enabled
setting,
which you can learn more about at
Red Hat Telemetry.
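If you prefer to opt out, you can disable telemetry in your VS Code `settings.json` using the setting named above:

```json
{
  "redhat.telemetry.enabled": false
}
```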