-
I can't see the downloaded Phi-3 model in the "Choose a Model" drop-down options even after restarting moxin.
I am on a MacBook.
> $ du -hs second-state/Phi-3-mini-4k-instruct-GGUF/*
2.5G second-state…
-
When I execute the command "gaianet init", it shows:
[+] Checking the config.json file ...
[+] Downloading Phi-3-mini-4k-instruct-Q5_K_M.gguf ...
* Using the cached Phi-3-mini-4k-instruct-…
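The "Using the cached …" line means the installer skips re-downloading a model file that is already present on disk. A generic sketch of that cache-or-download pattern (this is illustrative, not gaianet's actual code; the URL and file name are placeholders):

```python
from pathlib import Path
from urllib.request import urlretrieve

def fetch_model(url: str, cache_dir: Path) -> Path:
    # Cache-or-download sketch: reuse the file if it already exists
    # in the cache directory, otherwise download it once.
    cache_dir.mkdir(parents=True, exist_ok=True)
    target = cache_dir / url.rsplit("/", 1)[-1]
    if target.exists():
        print(f"* Using the cached {target.name}")
    else:
        print(f"[+] Downloading {target.name} ...")
        urlretrieve(url, target)
    return target
```

With this pattern, deleting the cached `.gguf` file is what forces a fresh download on the next init.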
-
This issue explains how to host the LLM model locally.
For all the solutions listed below, `ngrok.com` (or any similar tool) can be used to share the local AI server with other people.
We ha…
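As a minimal, stdlib-only illustration of the "local server" half of this setup (the handler below is a stand-in for a real AI endpoint, and the port is arbitrary): any HTTP server bound to localhost can be shared by pointing ngrok at its port.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    # Stand-in for a local AI server endpoint; replies with a fixed body.
    def do_GET(self):
        body = b"local model reply"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request logging quiet.
        pass

server = HTTPServer(("127.0.0.1", 0), Echo)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"serving on 127.0.0.1:{server.server_port}")
# To share it publicly: run `ngrok http <port>` in another terminal.
```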
-
First, thank you for the great paper.
1. Is the training code scheduled to be uploaded?
2. I saw that Phi-3-mini and InternLM2 are used as backbone models; does the license follow those models as well?
Thanks in advance for your answer.
-
export HF_HOME=./cache/ && litgpt download --repo_id microsoft/Phi-3-mini-4k-instruct
generation_config.json: 100%|███████████████████████████████████████████████████████████████████████████████…
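For context, `HF_HOME` relocates the Hugging Face cache root, which is why the weights above land under `./cache/`; when it is unset, tools default to `~/.cache/huggingface`. A small sketch mimicking that resolution (illustrative only, not the library's actual code):

```python
import os
from pathlib import Path

def hf_cache_root(env=None) -> Path:
    # Sketch of how Hugging Face tools pick their cache root:
    # HF_HOME wins if set; otherwise fall back to ~/.cache/huggingface.
    env = os.environ if env is None else env
    return Path(env.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))

print(hf_cache_root({"HF_HOME": "./cache/"}))  # → cache
```

Note that the environment variable must be set before the downloading process starts, which is why the `export … && litgpt download …` one-liner works.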
-
### What happened?
I get a CUDA out-of-memory error when sending a large prompt (about 20k+ tokens) to the Phi-3 Mini 128k model on a laptop with an Nvidia A2000 with 4 GB of VRAM. At first about 3.3GB GPU RAM and 8GB CP…
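One likely cause is the KV cache, which grows linearly with prompt length. A back-of-the-envelope estimate (the architecture numbers below are my assumptions for Phi-3-mini: 32 layers, hidden size 3072, fp16 cache, no KV-head grouping; check the model's `config.json` for exact values):

```python
def kv_cache_bytes(seq_len, n_layers=32, hidden=3072, bytes_per_elem=2):
    # K and V each store one hidden-size vector per token per layer.
    return 2 * n_layers * hidden * bytes_per_elem * seq_len

gib = kv_cache_bytes(20_000) / 2**30
print(f"{gib:.1f} GiB")  # ~7.3 GiB for a 20k-token prompt, well past 4 GB
```

Under these assumptions, a 20k-token prompt alone needs more cache memory than the card has, before counting the weights, so offloading fewer layers or quantizing the KV cache would be the usual workarounds.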
-
How can we fine-tune a multimodal LLM like Phi-3 using the unsloth package?
```
config.json: 100% 684/684 [00:00
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name …
```
-
Since Phi-3 mini did so well on the leaderboard, it would be interesting to see where the new small and medium models land.
With Phi-3 vision, it also seems like we're starting to have a pretty hea…
-
I've been fine-tuning unsloth/Phi-3-mini-4k-instruct-bnb-4bit on a T4, which doesn't support FlashAttention, so I don't have it installed.
During evaluation, I've been running into the following …
-
### System Info
- `transformers` version: 4.41.2
- Platform: macOS-14.5-x86_64-i386-64bit
- Python version: 3.11.6
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate ver…