-
Hi,
Thanks for sharing this work.
1) What's the best way to perform a quick analysis on a custom domain without fine-tuning (zero-shot or few-shot)?
How should I prepare my input dataset and prompts…
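For the few-shot case, one minimal sketch of how a prompt can be assembled from labeled examples, where the task, labels, and example texts are all illustrative assumptions, not part of any particular dataset:

```python
# Build a few-shot prompt as plain text: a task instruction, a handful of
# labeled examples, and the query left for the model to complete.
# The domain, labels, and example reports below are hypothetical.
examples = [
    ("The pump vibrates at startup.", "mechanical"),
    ("Firmware reports error E42.", "software"),
]
query = "Bearing temperature exceeds threshold."

prompt = "Classify each maintenance report as 'mechanical' or 'software'.\n\n"
for text, label in examples:
    prompt += f"Report: {text}\nLabel: {label}\n\n"
# End with an unanswered instance so the model's continuation is the label.
prompt += f"Report: {query}\nLabel:"

print(prompt)
```

The same string can then be passed to whatever generation API you use; the key design point is that the prompt ends exactly where the model's answer should begin.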
-
I did some extensive investigation, testing, and benchmarking, and determined that the following is needed to speed up inference for the BigCode models (and most text-generation-inference models):
1. **Use …
-
======================
=== EXAMPLE 6 ===
Implement a program to find the common elements in two arrays without using any extra data structures.
You can use Python's built-in set datatyp…
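One possible sketch of the strict no-extra-structures variant, assuming the inputs may be sorted in place and that only the output list is built (sets, as hinted above, would technically be an auxiliary structure):

```python
def common_elements(a, b):
    # Sort both arrays in place, then walk them with two pointers.
    # No auxiliary set or dict is used; only the result list is allocated.
    a.sort()
    b.sort()
    i = j = 0
    common = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            # Skip duplicate matches so each common value appears once.
            if not common or common[-1] != a[i]:
                common.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return common

print(common_elements([4, 2, 7, 1], [7, 3, 2, 9]))  # → [2, 7]
```

Sorting dominates the cost at O(n log n + m log m); the merge-style scan itself is linear in the combined length.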
-
1. docker pull tabbyml/tabby:latest _(version 0.13.1)_
2. git clone https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base
3. download gguf from https://github.com/wsxiaoys/registry-tabby/…
-
I installed all the dependencies by following the instructions from the repo.
Following that, I am running the following code:
```
import torch
from transformers import AutoModelForCausalLM, A…
-
I am facing some issues when using DeepSpeed for fine-tuning the StarCoder model. I am following exactly the steps described in this article: [Creating a Coding Assistant with StarCoder](https://huggingfac…
-
Please tell me how to generate the following files:
# WEIGHTS_TRAIN=/fsx/loubna/code/bigcode-data-mix/data/train_data_paths.txt.tmp
# WEIGHTS_VALID=/fsx/loubna/code/bigcode-data-mix/data/valid_data_…
-
When will the Hugging Face GPT BigCode model be supported?
-
In my tests, the sqlcoder2 model (based on StarCoder) runs faster here than on vLLM, but its output differs substantially from the original model's. Is this entirely because beam search is not supported?
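For reference, a minimal sketch of the difference between greedy decoding (what a backend without beam search effectively produces) and beam search in the Hugging Face `generate` API. A tiny public test checkpoint is used here only to keep the example lightweight; substitute the sqlcoder2 checkpoint you are actually benchmarking:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny random-weights model, used purely so the sketch runs quickly;
# swap in the real sqlcoder2/StarCoder checkpoint for a meaningful comparison.
model_id = "sshleifer/tiny-gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("SELECT name FROM", return_tensors="pt")
# Greedy decoding: picks the single most likely token at each step.
greedy = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Beam search (num_beams > 1): explores several candidate continuations,
# which can yield noticeably different completions than greedy decoding.
beams = model.generate(**inputs, max_new_tokens=8, num_beams=4, do_sample=False)
print(tokenizer.decode(greedy[0]))
print(tokenizer.decode(beams[0]))
```

Comparing the two decodings on the real checkpoint would help isolate how much of the output divergence is attributable to the missing beam search versus other backend differences.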
-
Thank you for the best code model to date!
Would it be possible to share the pre-training data generation code? Namely:
Data Creation
Step 1: Collect code data from GitHub and apply the same filte…