-
### Description
Currently, OpenAI provides a range of [alternative AI models](https://platform.openai.com/docs/models/gpt-4) (e.g. `text-davinci-003`, `gpt-4`, and `gpt-3.5-turbo`). Given the consi…
-
Hi,
I have just configured this plugin with `api_key_cmd`, like so:
`api_key_cmd = "op read op://somePath --no-newline",`
When I set my cursor on line 9 in the following buffer and issue ChatG…
-
`ai! prompt here ?model=modelName&temp=1&fp=1&pp=1&tp=1`
fp: frequency penalty
pp: presence penalty
tp: top-p
See the available parameters in the playground:
https://platform.openai.com/pla…
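The short keys above presumably map onto the OpenAI API's sampling parameters. As a minimal sketch of that mapping (the helper name and parsing logic here are hypothetical, not part of any plugin), the query string could be split out like this in Python:

```python
from urllib.parse import parse_qs

# Assumed mapping from the short query keys in the `ai!` prompt syntax
# to the corresponding OpenAI API parameter names.
PARAM_MAP = {
    "model": "model",
    "temp": "temperature",
    "fp": "frequency_penalty",
    "pp": "presence_penalty",
    "tp": "top_p",
}

def parse_ai_prompt(prompt):
    """Split `text ?key=val&key=val` into the prompt text and a dict of
    API keyword arguments (hypothetical helper for illustration only)."""
    text, _, query = prompt.partition("?")
    kwargs = {}
    for key, values in parse_qs(query).items():
        if key in PARAM_MAP:
            # `model` stays a string; the sampling knobs become floats.
            kwargs[PARAM_MAP[key]] = values[0] if key == "model" else float(values[0])
    return text.strip(), kwargs
```

Calling `parse_ai_prompt("prompt here ?model=gpt-4&temp=1&fp=1")` would then yield the prompt text plus `{"model": "gpt-4", "temperature": 1.0, "frequency_penalty": 1.0}`, ready to pass to whatever completion call the plugin makes.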
-
**Is your feature request related to a problem? Please describe.**
Can't use `gpt-3.5-turbo` for the Rich output structure example.
**Describe the solution you'd like**
An updated version of the Rich…
-
I tried to use Vicuna-13b-16k with the vLLM worker (a feature of the FastChat library). In that case, it repeats a single word in the output.
To reproduce the error:
" python3 -m fastchat.serve.vllm_worker --model-name…
-
Question: write a bubble sort in Java
2023-03-25 15:23:31.057 ERROR 1 --- [pool-1-thread-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task
java.lang.NullPointerException: null
at …
-
Hi!
Just wanted to say thank you, awesome extension!
I have a couple of ideas on how to improve the DX.
### Save snippets
When using `text-davinci-003` model, it's really useful to use the following pat…
-
3.3 Training Loop
-
python models_server.py --config configs/config.default.yaml # required when `inference_mode` is `local` or `hybrid`
python awesome_chat.py --config configs/config.default.yaml --mode server # for te…
-
We ran experiments on PrOntoQA and FOLIO, and the accuracy results are only about 51%–53%. We ran logic_program.py and self_refinement.py, then logic_inference.py and evaluation.py. Is anything wrong with my ste…