-
Microsoft's mini-sized LLM.
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
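For anyone skimming this listing, a minimal loading sketch with Hugging Face transformers (my assumption about tooling; at launch the checkpoint also required `trust_remote_code=True`):
```python
# Minimal sketch: load Phi-3-mini-4k-instruct and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto",
    trust_remote_code=True,  # needed on early transformers versions
)

messages = [{"role": "user", "content": "What does a 4k context window mean?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```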
-
I have installed h2ogpt on Ubuntu 22.04 as per the documented procedure, but when I run the following command I get an error about a missing **config.json** file. Please let me know how to overcome this error.
The comm…
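Since the failing command is cut off above, this is only a guess, but a missing config.json usually means the local model directory is incomplete; re-pulling the full snapshot with huggingface_hub typically restores it. A minimal sketch (the repo id is an example, not taken from the issue):
```python
# Hypothetical fix: re-download the complete model snapshot so config.json
# is present locally. Replace repo_id with the model the command used.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="h2oai/h2ogpt-4096-llama2-7b-chat")
print("Files (including config.json) are in:", local_dir)
```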
-
Hi,
Thanks for your great work on evaluating the long-context ability of LLMs! I also enjoyed your poster presentation at COLM 2024.
Could you provide the raw scores of phi3-mini-128k? It seems like this…
-
### Bug description
For some reason, the tensor parallel implementation generates nonsensical outputs:
```
⚡ python-api-tensor-parallel ~/litgpt litgpt generate_tp checkpoints/microsoft/phi-2
…
```
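Since the outputs are garbled only under tensor parallelism, a single-device reference generation for the same checkpoint is a useful baseline. A minimal sketch with plain transformers (my tooling assumption, independent of litgpt's own generate path; the prompt is just an example):
```python
# Sanity baseline: generate from phi-2 on a single device, to compare
# against the tensor-parallel output above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")

prompt = "What food do llamas eat?"  # example prompt, not from the issue
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```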
rasbt updated 2 months ago
-
Following the local finetuning README
Ran `python gradio_chat.py --baseonly`
Got:
```
(phi-3-env) hayden@XPS15:/mnt/d/phi-3-env/inference$ python gradio_chat.py --baseonly
Number of GPUs availa…
```
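The log is cut off right at the GPU count, so it may be worth confirming that CUDA is visible inside WSL at all before digging into gradio_chat.py. A quick check, assuming the environment uses PyTorch as the Phi-3 samples do:
```python
import torch

# Quick WSL/CUDA visibility check before debugging gradio_chat.py itself.
print("CUDA available:", torch.cuda.is_available())
print("Number of GPUs:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0      :", torch.cuda.get_device_name(0))
```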
-
Hi, I am trying to run the `Llama-3.1 8b + Unsloth 2x faster finetuning.ipynb` notebook you provided in the README. However, when I run the second cell in Google Colab I get this error:
``` bash
------…
```
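The error itself is truncated, so as a hedged first step it helps to capture the Colab runtime's torch and CUDA versions, since mismatches there are a common cause of install failures in that cell:
```python
import torch

# Version info that typically determines which unsloth build to install.
print("torch:", torch.__version__)
print("CUDA :", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU  :", torch.cuda.get_device_name(0))
```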
-
Hi unsloth team. I'm wondering if you have plans for supporting the [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)?
Also, is it viable for an average …
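Independent of official unsloth support, one rough way to judge viability on average hardware is to load the checkpoint in 4-bit with bitsandbytes and inspect the memory footprint. A sketch using plain transformers (my assumption; at release this checkpoint required `trust_remote_code=True`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-3-mini-128k-instruct"
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# ~3.8B parameters in 4-bit is roughly 2-3 GB of weights, so this should
# fit a typical 8 GB consumer GPU (long contexts add KV-cache cost on top).
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto",
    trust_remote_code=True)
print(f"Approx. footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```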
-
https://huggingface.co/Kooten/DaringMaid-20B-V1.1
^ That model has much better results than the original DaringMaid 20b from Kooten in terms of sticking to the context.
I have wanted to use a Q8_0 …
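Until such a quant is published, a Q8_0 GGUF can be produced locally with llama.cpp's converter. A sketch driving it from Python; the paths are hypothetical and `convert_hf_to_gguf.py` must come from a current llama.cpp checkout:
```python
import subprocess

# Hypothetical local paths; adjust to your llama.cpp checkout and to a
# local snapshot of the Kooten/DaringMaid-20B-V1.1 repo.
convert_script = "llama.cpp/convert_hf_to_gguf.py"
model_dir = "DaringMaid-20B-V1.1"

subprocess.run(
    ["python", convert_script, model_dir,
     "--outtype", "q8_0",
     "--outfile", "DaringMaid-20B-V1.1-Q8_0.gguf"],
    check=True,
)
```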
-
Take the best bits of https://github.com/ExCALIBUR-NEPTUNE/nektar-diffusion-ambipolar for the oblique BCs and cylindrical coordinate systems and https://github.com/ExCALIBUR-NEPTUNE/nektar-driftwave f…
-
I have a 32-core AMD CPU and no GPU.
mistral.rs will only use two of the cores, which is too few. Is it possible to allow setting the thread count through an argument? Ollama will use half of the core count by defa…
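For reference, the half-the-cores default described above is trivial to compute, e.g. in Python (a sketch of the proposed behavior, not mistral.rs code):
```python
import os

# Mirror the Ollama-style default mentioned above: use half of the
# logical cores (assumption: os.cpu_count() reports logical cores).
logical_cores = os.cpu_count() or 1
default_threads = max(1, logical_cores // 2)
print("Suggested default thread count:", default_threads)
```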
lij55 updated 1 month ago