Closed: DigitallyTailored closed this pull request 5 months ago.
> For those who want to be automatically upgraded to new GPT-4 Turbo preview versions, we are also introducing a new `gpt-4-turbo-preview` model name alias, which will always point to our latest GPT-4 Turbo preview model.
That might be good as an additional feature, and I would use it if it were added, but there is a disconnect: we would no longer be telling the user exactly which model version they are using, and we would still need to update the token costs manually whenever they change between versions.
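To illustrate the trade-off: with a pinned model ID the client knows exactly which snapshot (and which pricing entry) applies, while the alias is resolved server-side. A small sketch below; the pricing figures are illustrative assumptions, not authoritative values.

```typescript
// Sketch: pinning an exact model ID vs. using the rolling alias.
// Prices are illustrative USD-per-1K-input-token figures, not authoritative.
const inputCostPer1k: Record<string, number> = {
  'gpt-4-1106-preview': 0.01,
  'gpt-4-0125-preview': 0.01,
};

// Pinned: the client knows exactly which snapshot and price entry apply.
const pinnedBody = JSON.stringify({
  model: 'gpt-4-0125-preview',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Alias: OpenAI resolves it server-side, so the client cannot tell which
// snapshot answered (or what it costs) without inspecting the response.
const aliasBody = JSON.stringify({
  model: 'gpt-4-turbo-preview',
  messages: [{ role: 'user', content: 'Hello' }],
});

console.log(JSON.parse(pinnedBody).model in inputCostPer1k); // true
console.log(JSON.parse(aliasBody).model in inputCostPer1k);  // false: cost lookup needs the resolved ID
```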
My primary concern is that our API keys will be sent through the website and presumably stored. What's to prevent bad actors from using our keys?
@AETHER-ENGINEERS I don't think this is relevant to this PR; you might want to open a new issue to discuss it. FYI, the API key is stored on the client side in localStorage and, according to the source code on GitHub, it is only read when it is checked, set, or used to post requests directly to OpenAI.
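For anyone unfamiliar with that pattern, here is a rough sketch of what client-side key handling like that looks like. The storage key name `'openai-api-key'` and the function names are placeholders I made up, not necessarily what BetterChatGPT uses.

```typescript
// Sketch of the client-side key handling described above: the key lives in
// the browser's localStorage and is only read back when building a request.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Use the browser's localStorage when present; otherwise fall back to an
// in-memory map so the sketch also runs outside a browser.
const mem = new Map<string, string>();
const store: KVStore = (globalThis as any).localStorage ?? {
  getItem: (k: string) => mem.get(k) ?? null,
  setItem: (k: string, v: string) => void mem.set(k, v),
};

function setApiKey(key: string): void {
  store.setItem('openai-api-key', key);
}

function buildAuthHeader(): Record<string, string> {
  const key = store.getItem('openai-api-key');
  if (!key) throw new Error('No API key set');
  // The key goes straight into the Authorization header of a request to
  // api.openai.com; it is never sent to the app's own server.
  return { Authorization: `Bearer ${key}` };
}

setApiKey('sk-example');
console.log(buildAuthHeader().Authorization); // "Bearer sk-example"
```

The practical upshot is that the key never leaves the user's machine except in requests addressed directly to OpenAI.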
@DigitallyTailored Why was the token length reverted back to 4096?
@ztjhz
> @DigitallyTailored Why was the token length reverted back to 4096?
As per @NN1985 comment: https://github.com/ztjhz/BetterChatGPT/pull/521#discussion_r1468530493
The original 4096 appears to be correct.
@ztjhz @DigitallyTailored First, thank you for the update on this. However, with the token length set to 4096, I am unable to input long text, which worked fine with the previous version (gpt-4-1106-preview).
I also tried running BetterChatGPT locally after rewriting the constant in src/constants/chat.ts as follows, and the API responses come back without errors:
```typescript
export const modelMaxToken = {
  'gpt-3.5-turbo': 4096,
  'gpt-3.5-turbo-0301': 4096,
  'gpt-3.5-turbo-0613': 4096,
  'gpt-3.5-turbo-16k': 16384,
  'gpt-3.5-turbo-16k-0613': 16384,
  'gpt-3.5-turbo-1106': 16384,
  'gpt-3.5-turbo-0125': 16384,
  'gpt-4': 8192,
  'gpt-4-0314': 8192,
  'gpt-4-0613': 8192,
  'gpt-4-32k': 32768,
  'gpt-4-32k-0314': 32768,
  'gpt-4-32k-0613': 32768,
  'gpt-4-1106-preview': 128000,
  // 'gpt-4-0125-preview': 4096,
  'gpt-4-0125-preview': 128000,
};
```
Is 4096 really the correct value here? Does gpt-4-0125-preview not accept long inputs the way gpt-4-1106-preview does?
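If I understand OpenAI's published specs correctly, the confusion may come from two different limits: gpt-4-0125-preview (like gpt-4-1106-preview) has a 128,000-token context window for input, but caps generated output at 4,096 tokens, and a single `modelMaxToken` constant conflates the two. A sketch that keeps them apart (the `limits` table and `fitsContext` helper are my own illustration, not code from this repo):

```typescript
// Sketch separating the two limits that a single `modelMaxToken` conflates.
// Figures reflect OpenAI's published specs as I understand them: GPT-4 Turbo
// previews accept up to 128K context tokens but emit at most 4,096 tokens.
interface ModelLimits {
  contextWindow: number; // prompt + completion together
  maxOutput: number;     // cap on generated tokens (the `max_tokens` param)
}

const limits: Record<string, ModelLimits> = {
  'gpt-4-1106-preview': { contextWindow: 128000, maxOutput: 4096 },
  'gpt-4-0125-preview': { contextWindow: 128000, maxOutput: 4096 },
};

// A long prompt is fine as long as prompt + requested output fit the window
// and the requested output stays under the per-response cap.
function fitsContext(model: string, promptTokens: number, requestedOutput: number): boolean {
  const l = limits[model];
  return requestedOutput <= l.maxOutput && promptTokens + requestedOutput <= l.contextWindow;
}

console.log(fitsContext('gpt-4-0125-preview', 100000, 4096)); // true: long inputs are allowed
console.log(fitsContext('gpt-4-0125-preview', 2000, 8000));   // false: output cap is 4096
```

So if the app uses 4096 as the total context size rather than as the output cap, long inputs would be rejected even though the model itself accepts them.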
I've taken the pricing from https://openai.com/pricing, which is the same as for the current gpt-4-1106-preview.