-
### Bug Description
If you use 'gpt-4o-mini' with certain questions answered by a VectorStorageIndex, you get an error about converting the output to JSON when it is routed through a RouterQueryEngine. It don…
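Not a fix for RouterQueryEngine itself, but a minimal sketch of the likely failure mode: the router's selector expects strict JSON from the model, and some models wrap the JSON in markdown code fences, which breaks `json.loads`. A defensive parser (hypothetical helper, not part of the llama-index API) might look like:

```python
import json
import re


def parse_selector_json(raw: str):
    """Parse model output as JSON, tolerating markdown code fences.

    Hypothetical helper for illustration only, not a llama-index API.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes wrap JSON in ```json ... ``` fences; extract the body.
        m = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
        if m:
            return json.loads(m.group(1))
        raise


# Fenced output that would fail a strict json.loads call:
print(parse_selector_json('```json\n{"choice": 1, "reason": "vector"}\n```'))
```

If the error reproduces consistently, a selector with a more tolerant output parser (or a different selector class) may be the practical workaround.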
-
Environment:
Hardware: Power 10 system (PPC64LE)
OS: Red Hat Enterprise Linux release 9.3 (Plow)
kernel: 5.14.0-362.18.1.el9_3.ppc64le
GH repo: https://github.com/foundation-model-stack/found…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N…
-
```
Attaching to llama-gpt-llama-gpt-api-cuda-ggml-1, llama-gpt-llama-gpt-ui-1
llama-gpt-llama-gpt-ui-1 | [INFO wait] --------------------------------------------------------
llama-gpt…
-
### **Description of Bug**
Provide a concise description of your bug and your project link (if applicable).
The `util` module should be treated as a Node.js internal package
```shell
Error: nodeUtil…
-
(https://twitter.com/realSharonZhou/status/1693744954143904102)
(https://huggingface.co/learn/nlp-course/chapter5/4)
According to the tweet, training is possible without using LoRA
- Check whether the training code contains a part that causes problems with…
-
Using torch.distributed and fairscale, LLaMA can be parallelized across multiple devices or machines, and this already works quite well. However, each GPU device is expected to have a large amount of VRAM, since weigh…
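A rough back-of-envelope sketch of why per-GPU VRAM matters: even with weights sharded evenly across devices, each GPU still holds its slice of the model. The numbers below assume fp16 weights (2 bytes per parameter) and a perfectly even shard; real tensor-parallel layouts replicate some tensors (embeddings, norms), so treat this as a lower bound.

```python
def per_gpu_weight_gib(n_params: float, n_gpus: int, bytes_per_param: int = 2) -> float:
    """GiB of weight memory each GPU holds, assuming an even shard.

    Assumes fp16 storage (2 bytes/param) by default; activations,
    optimizer state, and KV cache are not included.
    """
    return n_params * bytes_per_param / n_gpus / 2**30


# LLaMA-7B weights: whole model on one device vs. sharded over 8 GPUs.
print(round(per_gpu_weight_gib(7e9, 1), 1))
print(round(per_gpu_weight_gib(7e9, 8), 1))
```

This is why offloading or quantization is usually needed before a 7B+ model fits on consumer GPUs, even in a multi-device setup.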
-
### Your current environment
The output of `python collect_env.py`
```text
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cu11…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid, so there are no tagged versions as …