This project lets you host your own GitHub Copilot-like model locally while using the official GitHub Copilot VSCode extension.
Download and install the oobabooga backend
Download a model: open the oobabooga UI, go to the Models tab, and download a code-completion model. I'm using Deci/DeciCoder-1b — paste that name, click Download, then click Load once the download completes.
Which model should I choose? Use smaller models for faster predictions, especially if you have a weaker PC. I tested DeciCoder-1b.
| size | speed | model name |
|---|---|---|
| 125M | superfast | flax-community/gpt-neo-125M-code-clippy-dedup-2048 |
| 1B | fast | Deci/DeciCoder-1b |
| 3B | medium | TheBloke/stablecode-instruct-alpha-3b-GGML |
| 7B | slow | mlabonne/codellama-2-7b |
| 15B | slow | TheBloke/WizardCoder-15B-1.0-GGML |
Go to VSCode, open your settings (`settings.json`), and add the following:

```jsonc
"github.copilot.advanced": {
    "debug.overrideEngine": "codegen",
    "debug.testOverrideProxyUrl": "http://localhost:8000", // address:port of the middleware
    "debug.overrideProxyUrl": "http://localhost:8000"
},
```
(Optional, for authentication) Update `~/.vscode/extensions/github.copilot-*/dist/extension.js` by replacing:

- `https://api.github.com/copilot_internal` with `http://127.0.0.1:8000/copilot_internal`
- `https://copilot-proxy.githubusercontent.com` with `http://127.0.0.1:8000`
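The two replacements above can be scripted. This is an illustrative sketch, not part of the project — it assumes GNU `sed` and a Linux/WSL-style VSCode install path, backs up `extension.js` first, and must be re-applied whenever the extension updates:

```shell
# Illustrative only: patch the Copilot extension bundle to point at the
# local middleware. Assumes GNU sed; adjust the glob for your install.
for EXT_FILE in ~/.vscode/extensions/github.copilot-*/dist/extension.js; do
    [ -f "$EXT_FILE" ] || continue            # skip if the glob matched nothing
    cp "$EXT_FILE" "$EXT_FILE.bak"            # keep a backup
    sed -i 's|https://api.github.com/copilot_internal|http://127.0.0.1:8000/copilot_internal|g' "$EXT_FILE"
    sed -i 's|https://copilot-proxy.githubusercontent.com|http://127.0.0.1:8000|g' "$EXT_FILE"
done
```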
Run the proxy (the port must match the one in your VSCode settings above):

```shell
pip install git+https://github.com/FarisHijazi/localCopilot
localCopilot --port 8000
```
If you have oobabooga running on a separate server, pass its address with the `--backend` argument (`hostname:port`):

```shell
localCopilot --port 8000 --backend http://10.0.0.1:5002
```
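Before pointing VSCode at the middleware, you can sanity-check that it is actually listening. A minimal check (assuming the default `localhost:8000` address; not part of the project itself):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Prints True once the middleware is up and reachable.
print(port_open("127.0.0.1", 8000))
```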
Install the official GitHub Copilot extension.
HAPPY CODING!
To test that the Copilot extension is working, either type some code and wait for a completion, or open the command palette (Ctrl+Shift+P) and search for GitHub Copilot: Open Completions Panel.
This is done using a single script, localCopilot/middleware.py (only 90 lines of code), which is a compatibility layer between the official GitHub Copilot VSCode extension and the oobabooga backend.
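Conceptually, the middleware just reshapes requests: the extension sends an OpenAI-style completion request, and oobabooga's API expects its own JSON shape. A minimal sketch of that translation — the field names here are assumptions based on the two APIs, not the actual code in middleware.py:

```python
# Illustrative sketch only: map between an OpenAI-style completion request
# (what the Copilot extension sends) and an oobabooga-style payload.
# Field names are assumptions, not the exact middleware implementation.

def copilot_to_oobabooga(request: dict) -> dict:
    """Reshape an OpenAI-style completion request for the oobabooga API."""
    return {
        "prompt": request.get("prompt", ""),
        "max_new_tokens": request.get("max_tokens", 64),
        "temperature": request.get("temperature", 0.2),
        "stopping_strings": request.get("stop", []),
    }

def oobabooga_to_copilot(text: str) -> dict:
    """Wrap generated text in the OpenAI-style response the extension expects."""
    return {"choices": [{"text": text}]}
```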
Credit: I learned the traffic-redirecting trick from the FauxPilot project.
There are many other projects building an open-source alternative to Copilot, but they all require significant maintenance. Instead, I chose to build on an existing large project that is well maintained: oobabooga, since it supports almost all open-source LLMs and is widely used.
I know the middleware approach might not be optimal, but it's a minimal hack that's easy to run, and it keeps this repository easy to maintain.
Once oobabooga supports serving multiple requests in a single call, the middleware should no longer be needed.
Here are some helpful open source projects I found while doing my research:
| Project URL | Description | Actively maintained (as of Aug 2023) |
|---|---|---|
| https://github.com/CodedotAl/gpt-code-clippy | Frontend + models | ❌ |
| https://github.com/Venthe/vscode-fauxpilot | A FauxPilot frontend | ✅ |
| https://github.com/hieunc229/copilot-clone | Frontend which uses Google/StackOverflow search as a backend | ✅ |
| https://github.com/fauxpilot/fauxpilot | FauxPilot backend | ✅ |
| https://github.com/ravenscroftj/turbopilot | A backend that runs models | ✅ |