-
Currently, when trying to run with a local model that hasn't been downloaded yet, the app crashes with an error like the following:
```
⇒ npx humanifyjs local --disableGpu foo.js
(node:96922) [DEP0040] Dep…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I am playing with the example: `query_pipeline_memory.ipynb` [notebook](https://docs.lla…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
When I switch my Settings.embed_model I have this error:
PydanticUserError: Field 'na…
-
PyTorch is dead. Long live JAX.
https://neel04.github.io/my-website/blog/pytorch_rant/
LLM Compressor
https://github.com/vllm-project/llm-compressor
https://neuralmagic.com/blog/llm-compressor-i…
-
I'm not sure whether this is possible, but adding it to ollama could really improve the 4-bit quant option:
99%+ relative performance to FP16 across all lm-eval benchmarks, and simila…
-
### 'Strawberry' is really coming! (2024-09-02)
Overview
OpenAI has recently been accelerating the launch of its new AI product, 'Strawberry'. 'Strawberry' is an innovative model that takes existing AI technology a step further, and it is reported to show especially strong performance on complex problem solving.
Features and performance
'Strawberry' is…
-
The challenge-generation functionality should make it clear which AI model / API is being used, ideally with a nicely formatted model card that could also be used as a onebox (similar to Data Package…
loleg updated 2 weeks ago
-
I tried fine-tuning **Llama 2**, **Llama 3**, and even **Llama 3.1**, but my loss keeps fluctuating (decreasing, then increasing), and I can't figure out why.
I have my dataset in alpaca format like this:
```
[
{
…
-
Hi,
is `data_quant.json` optional?
I just tested 3.2 3B Instruct without quants, and it does not seem usable. I will test w8a8 later today, but is there any guide on how to deal with hall…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…