-
Add the option to load models in bfloat16 and float16. This is especially important for large models like GPT-J and GPT-NeoX.
Ideally, load from HuggingFace in this low precision, do weight processing on the CPU,…
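For context, here is a minimal sketch of loading in low precision with the Hugging Face `transformers` API; the checkpoint id is only illustrative, and the CPU-side weight processing step is left out:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint directly in bfloat16 instead of materializing float32
# weights first; "EleutherAI/gpt-j-6b" is just an illustrative model id.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",
    torch_dtype=torch.bfloat16,   # or torch.float16
    low_cpu_mem_usage=True,       # keep peak host RAM down while loading shards
)
```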
-
### Problem Statement
Sentry errors, especially for Python, contain all the information needed to diagnose an issue and create a fix, so Sentry GPT should be able to open a pull request that addresses it.
#…
-
```
python3 build.py --model_version v2_7b \
--model_dir ./model_files/Baichuan2-7B-Chat \
--dtype float16 \
--use_gemm_plugin float16 \
--use_gpt_attention_plugin float16 \
…
-
Hi Torch Team,
I am currently experimenting with native torch float8 distributed training using the delayed-scaling recipe on GPT 1.5B with DDP, at batch=12 and seq=1024, on an HGX 8xH100 (700W H100 SXM …
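For concreteness, here is a minimal sketch of the kind of setup I mean, assuming the `torchao.float8` API; the toy model, dummy batch, and missing DDP wrapper are placeholders rather than the actual GPT 1.5B run:

```python
import torch
import torch.nn as nn
from torchao.float8 import (
    CastConfig,
    Float8LinearConfig,
    ScalingType,
    convert_to_float8_training,
    sync_float8_amax_and_scale_history,
)

# Placeholder module standing in for GPT 1.5B; DDP wrapping is omitted for brevity.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()

# Delayed scaling: scales come from amax history instead of being recomputed per tensor.
config = Float8LinearConfig(
    cast_config_input=CastConfig(scaling_type=ScalingType.DELAYED),
    cast_config_weight=CastConfig(scaling_type=ScalingType.DELAYED),
    cast_config_grad_output=CastConfig(scaling_type=ScalingType.DELAYED),
)
convert_to_float8_training(model, config=config)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(12 * 1024, 1024, device="cuda")  # batch=12 x seq=1024 tokens, flattened

for _ in range(3):
    loss = model(x).pow(2).mean()
    loss.backward()
    # Delayed scaling requires this every step to refresh amax/scale buffers
    # (they are all-reduced across ranks when training under DDP).
    sync_float8_amax_and_scale_history(model)
    opt.step()
    opt.zero_grad()
```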
-
When launching `computerassistant`, I get this error:
```sh
Traceback (most recent call last):
  File "/home/molives/.local/bin/computerassistant", line 8, in <module>
    sys.exit(start())
  File "/…
-
-
Hi,
In a .gpt script, I sent the following message to my gpt-4o model:
"use kubectl to get the pods in the all namespaces if there is a pod name contains the word 'poly' get its container image name a…
-
The toast message says:
'Something went wrong'
'The model `gpt-4-vision-preview` has been deprecated, learn more here: https://platform.openai.com/d'
The text is cut off in the message.
Thanks in…
-
### 🥰 Feature Description
Upstream v2.15.0 adds plugin support, implemented in a way that is completely different from this project's. Will this project continue to sync with upstream?
### 🧐 Proposed Solution
Upstream v2.15.0 adds plugin support, implemented in a way that is completely different from this project's. Will this project continue to sync with upstream?
### 📝 Additional Information
_No response_
-
# Language Model Overview
## OpenAI
| | gpt-4o | gpt-4o-mini …