-
Hey guys,
Really pulling my hair out with this. Everything is set up and running, but I'm getting really bad flickering on the display and other fun weird stuff.
I'm on the latest 1.4.00 firmware, at…
-
Hi, thanks for the great work! Were the models fine-tuned before the HumanEval evaluation, and can the models and parameters be provided?
-
Hello,
Is there a way to configure it in an on-premise environment?
Looking at the contents, I think an external API (e.g. ChatGPT) is needed.
Is there a way to test in an environment where I can't com…
-
![image](https://github.com/bigcode-project/bigcode-evaluation-harness/assets/56470984/17081dbd-d811-4740-867c-852424f319ed)
-
Add support for this [FIM task](https://huggingface.co/datasets/bigcode/santacoder-fim-task) discussed in this [issue](https://github.com/bigcode-project/bigcode-evaluation-harness/issues/33) on Human…
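For context, fill-in-the-middle prompting gives the model the prefix and suffix of a file and asks it to generate the missing middle span. Below is a minimal sketch of that idea, assuming the `bigcode/santacoder` checkpoint and the `<fim-prefix>`/`<fim-suffix>`/`<fim-middle>` special tokens from its model card; it is illustrative only, not the harness's actual task implementation.

```python
# Minimal FIM sketch (assumed checkpoint and token spellings, based on the
# bigcode/santacoder model card; not the evaluation-harness task itself).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

prefix = "def print_hello_world():\n    "
suffix = "\n    print('Done')\n"

# PSM-style prompt: give the model the prefix and suffix, ask it for the middle.
prompt = f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)

# Everything generated after the prompt is the infilled middle span.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```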
-
Love your project! 🚀
Where can I find the final model outputs used to calculate `pass@1` with EvalPlus?
I could only reproduce the results **without** EvalPlus by analyzing the files you made a…
-
From my point of view, to test my local model I need to first run inference and then run the tests.
However, I noticed that the inference process is quite slow. For incoder-1B, it took me about 7 hours to…
-
Hi, I have noticed that question 156 in the Pandas lib has 2847 tokens, which leads to a blank output. I think it is due to an if-else part in the incode_inference() function in run_inference.py, which makes …
-
Hello 😃
Great job!
I am reaching out to inquire about the possibility of using the pre-trained incoder model provided by your team for code infilling tasks.
However, I have been unable to locate a…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### OS
Windows
### GPU
cuda
### VRAM
12 GB
### What version did you experience this issue on?
…