joshpxyne / gpt-migrate

Easily migrate your codebase from one framework or language to another.
https://gpt-migrate.com
MIT License
6.85k stars 485 forks

adding max_tokens to cli #8

Closed — ctr26 closed this 12 months ago

ctr26 commented 1 year ago

Adding max_tokens to the CLI, since using gpt-3.5 crashes when the context length reaches 10k tokens.

joshpxyne commented 1 year ago

@ctr26 This will be solved with #2 - we'll have a mapping of model -> context window (max tokens) and we'll break down files and prompts accordingly.
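The model-to-context-window mapping described here could be sketched roughly as follows. The dictionary values reflect the published OpenAI context windows of the time; the helper names and the characters-per-token heuristic are illustrative assumptions, not gpt-migrate's actual implementation.

```python
# Hypothetical mapping of model -> context window (max tokens), plus a naive
# chunker for breaking down files and prompts that exceed the budget.

MODEL_CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
}

def max_prompt_tokens(model: str, reserved_for_completion: int = 2048) -> int:
    """Tokens left for the prompt after reserving room for the completion."""
    window = MODEL_CONTEXT_WINDOWS.get(model, 4096)
    return window - reserved_for_completion

def chunk_text(text: str, token_limit: int) -> list[str]:
    """Naive chunker using a rough ~4-characters-per-token heuristic."""
    chars_per_chunk = token_limit * 4
    return [text[i:i + chars_per_chunk]
            for i in range(0, len(text), chars_per_chunk)]
```

A real implementation would count tokens with a tokenizer such as tiktoken rather than estimating by character count.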

marina727 commented 1 year ago

Ahh [translated from Russian]

gianpaj commented 1 year ago

I don't have access to gpt-4-32k. How can I use gpt-4? I get this error even with these changes:

openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, you requested 10601 tokens (601 in the messages, 10000 in the completion). Please reduce the length of the messages or completion.
    max_tokens: int = typer.Option(8192),
):
    ai = AI(
        model=model,
        temperature=temperature,
        max_tokens=int(max_tokens),
    )
joshpxyne commented 1 year ago

@gianpaj The output also contributes to the number of tokens. If your model has a max context window of 8k, you're probably better off making max_tokens 4k or so.
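The arithmetic behind this advice, using the numbers from the error message above (601 prompt tokens, a requested completion of 10,000, and gpt-4's 8,192-token window):

```python
# Prompt tokens + completion tokens must fit inside the context window.
CONTEXT_WINDOW = 8192        # gpt-4
prompt_tokens = 601          # "601 in the messages" from the error
requested_completion = 10000  # "10000 in the completion"

# Why the request fails:
assert prompt_tokens + requested_completion > CONTEXT_WINDOW

# A safe completion budget leaves room for the prompt:
safe_max_tokens = CONTEXT_WINDOW - prompt_tokens  # 7591
```

Since prompt size varies per request, a fixed max_tokens of roughly half the window (4k here) is a simpler rule of thumb, which is what the suggestion above amounts to.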

danixv9 commented 1 year ago

vvdd

Ran-Mewo commented 1 year ago

Is it also possible to add something that can change the OpenAI base URL? That way this could work with the Microsoft Azure OpenAI endpoint or with proxies.

joshpxyne commented 1 year ago

@Ran-Mewo Yes definitely, I'll try to get to this later - feel free to also submit a PR for this if you'd like
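One minimal way to support the base-URL override requested above is to resolve it from the environment and pass it through to the client. This is a sketch under stated assumptions: the environment-variable name matches the `OPENAI_API_BASE` convention the pre-1.0 openai Python SDK recognized (where it maps to `openai.api_base`), and the helper name is hypothetical.

```python
import os

def resolve_base_url(default: str = "https://api.openai.com/v1") -> str:
    """Return the API base URL, allowing an environment override so the
    tool can point at Azure OpenAI or a proxy instead of api.openai.com."""
    return os.environ.get("OPENAI_API_BASE", default)
```

With the older openai package, the resolved value would be assigned to `openai.api_base` before making requests; Azure additionally required setting `api_type` and `api_version` on that SDK.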