-
According to OpenAI's latest announcement, the context window for gpt-3.5-turbo has been increased from 4,000 tokens to a maximum of 16,000 tokens with the gpt-3.5-turbo-16k model. In our current files, we need…
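A change like this is easiest to maintain if the per-model limits live in one place. A minimal sketch, assuming a simple lookup table (the dictionary, helper name, and reserve value below are illustrative, not part of any OpenAI SDK):

```python
# Maximum context-window sizes, in tokens, per model.
# These names and values are illustrative assumptions.
MODEL_TOKEN_LIMITS = {
    "gpt-3.5-turbo": 4_000,
    "gpt-3.5-turbo-16k": 16_000,
}

def max_prompt_tokens(model: str, reserved_for_reply: int = 1_000) -> int:
    """Tokens left for the prompt after reserving room for the reply."""
    limit = MODEL_TOKEN_LIMITS.get(model)
    if limit is None:
        raise ValueError(f"Unknown model: {model}")
    return limit - reserved_for_reply

print(max_prompt_tokens("gpt-3.5-turbo-16k"))  # 15000
```

Keeping the limits in a single table means switching to the 16k model is a one-line change rather than a hunt through the codebase.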
-
I'd like to be able to switch between gpt-3.5 (default) and gpt-4.
However, since gpt-4 is highly throttled (25 messages per 3 hours), it should either safeguard against staying in gpt-4 mode for t…
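One way to implement that safeguard is a sliding-window counter that falls back to the default model once the quota is spent. A minimal sketch; the 25-messages-per-3-hours figures come from the text above, while the class and method names are assumptions:

```python
import time
from collections import deque
from typing import Optional

class ModelSwitcher:
    """Fall back from gpt-4 to gpt-3.5 once a rolling quota is exhausted."""

    def __init__(self, limit: int = 25, window_s: float = 3 * 3600):
        self.limit = limit
        self.window_s = window_s
        self.sent = deque()  # timestamps of recent gpt-4 messages

    def pick_model(self, prefer_gpt4: bool, now: Optional[float] = None) -> str:
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] > self.window_s:
            self.sent.popleft()
        if prefer_gpt4 and len(self.sent) < self.limit:
            self.sent.append(now)
            return "gpt-4"
        return "gpt-3.5-turbo"
```

The deque keeps only the timestamps inside the current window, so the guard never blocks longer than the throttle itself does.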
-
Hi All,
I used to have a 1517+ (8 bays) and unknowingly purchased the card, realising later that it won't work because the PCIe slot is incompatible, so I ended up getting an RS1221+ knowing tha…
-
From our existing ChatDev typing.py:
```python
class ModelType(Enum):
    GPT_3_5_TURBO = "gpt-3.5-turbo-16k-0613"
    GPT_3_5_TURBO_NEW = "gpt-3.5-turbo-16k"
    GPT_4 = "gpt-4"
    GPT_4_32k =…
```
-
Background
vLLM currently supports various model features through configuration parameters, but lacks support for passing additional model-specific parameters through extra_body, which is particularl…
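On the client side, the OpenAI Python SDK already forwards unknown fields via its `extra_body` argument; the gap described here is server-side. A minimal sketch of how a server might fold such fields into its sampling parameters, where every name below is illustrative rather than actual vLLM internals:

```python
# Illustrative only: merge extra_body fields into sampling parameters,
# letting explicitly set known keys take precedence.
KNOWN_KEYS = {"temperature", "top_p", "max_tokens"}

def build_sampling_params(request: dict) -> dict:
    params = {k: v for k, v in request.items() if k in KNOWN_KEYS}
    extra = request.get("extra_body") or {}
    for key, value in extra.items():
        # Only fill model-specific keys that aren't already set explicitly.
        params.setdefault(key, value)
    return params
```

With this shape, a model-specific knob such as `top_k` rides along in `extra_body` without the core request schema having to know about it.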
-
I am getting
```
Traceback (most recent call last):
  File "predict.py", line 219, in <module>
    predictor.setup(model_base=None, model_name="nextgpt-v1.5-7b", model_path="./checkpoints/nextgpt-v1.5-7…
```
-
### Describe the bug
Asked it to change the OpenAI model to "gpt-4o". It refused, telling me that the model doesn't exist, even after I explained that its knowledge is out of date.
![image](https://github.…
-
Can you please export this Jupyter notebook, Llama-3-PyTorch.ipynb, to pure Python as Llama-3-PyTorch_model.py and Llama-3-PyTorch_tokenizer.py?
Because I want to try to adapt this to work w…
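For reference, `jupyter nbconvert --to script Llama-3-PyTorch.ipynb` performs this export in one step. The same extraction can also be done with nothing but the standard library, since a .ipynb file is plain JSON; a sketch (splitting the result into the two requested files would still be manual):

```python
import json
from pathlib import Path

def notebook_to_script(ipynb_path: str) -> str:
    """Concatenate a notebook's code cells into one Python source string."""
    nb = json.loads(Path(ipynb_path).read_text(encoding="utf-8"))
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            # "source" is a list of lines (or a single string) in nbformat.
            chunks.append("".join(cell.get("source", [])))
    return "\n\n".join(chunks)
```

Markdown cells are dropped; only executable code survives, which is what the two `.py` targets need.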
-
Hi,
I cannot seem to inject the service or access it in any way unless I explicitly alias it as public, which I do not want.
Unless I am doing it wrong... please advise.
Example:
```
```
-
I'm in the process of migrating an old system disk declaration to the new format, as explained in https://github.com/nix-community/disko/blob/master/docs/table-to-gpt.md.
I found a feature …