-
**To Reproduce**
Give a full working code snippet that can be pasted into a notebook cell or Python file. Make sure to include the LLM load step so we know which model you are using.
```python
gpt3…
```
jkfnc updated 11 months ago
-
# LLM Prompt Testing for choosing icon from set of icons
In this issue we test different prompts for choosing an icon for a searched product category.
Testing is done with GPT-3.5.
-
Building a Slackbot with GPT-3 + GAS
https://qiita.com/malleroid/items/36def200eadfce51c5f2
-
When the rtc_to_rtmp option is enabled, audio RTP packets received by the WebRTC publisher are transcoded directly from OPUS to AAC.
But packets may arrive out of order or after retransmission, so we need an audio …
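The reordering step described above is typically handled by a jitter buffer that holds packets keyed by RTP sequence number and releases them only once they are contiguous. A minimal sketch (illustrative only; a real implementation must also handle 16-bit sequence-number wraparound and late-packet timeouts):

```python
import heapq

class AudioJitterBuffer:
    """Minimal sketch of a jitter buffer: packets are kept in a min-heap
    keyed by RTP sequence number and released in order, so out-of-order
    or retransmitted packets are reordered before the OPUS-to-AAC
    transcode step. Class and method names are illustrative."""

    def __init__(self) -> None:
        self._heap: list[tuple[int, bytes]] = []  # (seq, payload) pairs
        self._next_seq: int | None = None         # next expected sequence number

    def push(self, seq: int, payload: bytes) -> None:
        """Store a received packet; order of arrival does not matter."""
        heapq.heappush(self._heap, (seq, payload))
        if self._next_seq is None:
            self._next_seq = seq  # first packet defines the starting point

    def pop_ready(self) -> list[bytes]:
        """Return payloads that are contiguous; stop at the first gap."""
        out = []
        while self._heap and self._heap[0][0] == self._next_seq:
            seq, payload = heapq.heappop(self._heap)
            out.append(payload)
            self._next_seq = seq + 1
        return out
```

Pushing packets 1, 3, 2 and then draining the buffer yields them in the order 1, 2, 3; if packet 2 had never arrived, packet 3 would be held back until a retransmission filled the gap.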
-
### 💻 Operating System
Windows
### 📦 Environment
Official Preview
### 🌐 Browser
Chrome
### 🐛 Bug Description
https://github.com/lobehub/lobe-chat/assets/14088043/0c9f79f5-e892-43e0-a4fe-49441…
-
Hi, nice work,
To my understanding, you are using `text-davinci-003`, given the following lines:
https://github.com/microsoft/visual-chatgpt/blob/865db606fb0b37e03f3fdb786b3cc29543f0d51b/visual_ch…
-
Fresh install. The iOS client is not responding and keeps spinning. Node version 18.17.1.
The app console eventually shows "`ERROR Connection error: The network connection was lost`."
Server console out…
-
Today I was going to train a gpt3_124m model, when I noticed that the max_seq_len is hardcoded [here](https://github.com/karpathy/llm.c/blob/d396cd18b71367f79cbaab8f8203e64e578f9ee8/train_gpt2.cu#L653…
-
Hello, I have a small question regarding the MuP proxy-model sweeps. Did you perform full learning-rate decay to the 4B or 16B tokens in the proxy models mentioned in Appendix F.4 (gpt3)? Or did you d…