opensource-elearning closed this issue 8 months ago
Thanks for raising this concern. It will be updated ASAP. Meanwhile, use the -p or --provider flag followed by the provider name. For instance:
pytgpt interactive -p Aura
Note: The providers are auto-detected as either g4f-based or tgpt-based and then handled accordingly.
To list the working g4f-based providers:
pytgpt gpt4free list providers --working
For the tgpt-based ones, they'll be shown in the <generate/interactive> help info along with the --provider details.
pytgpt <generate/interactive> --help
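The same flag also works for one-shot prompts with the generate subcommand. A minimal sketch, reusing the Aura provider from the example above (substitute any provider your installation reports as working):

```shell
# One-shot generation with an explicitly chosen provider.
# "Aura" is taken from the earlier example; replace it with any
# provider listed as working by `pytgpt gpt4free list providers --working`.
pytgpt generate -p Aura "Explain what a context manager is in Python"
```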
Thanks, @Simatwa for your quick response.
From the provider name alone, I am not able to determine which LLM model is being used.
Can you please give me a command with which I can set the LLM model name instead of the provider? Once the LLM name is provided, it could use all working providers for that LLM.
Sorry for slightly diverging from the issue, but can we have an AutoGPT/CrewAI/babyagi/autogen kind of implementation planned for the future? Or does any way already exist for me to use pytgpt with AutoGPT?
It is very much possible to specify the model name, thanks to the --model flag, which employs a try-error mechanism to check its validity at response generation. However, different providers support different models, so it is advisable to check the validity of the model in relation to that particular provider.
Perhaps you might find the following command useful.
pytgpt gpt4free list models
For the case of AutoGPT, I'm yet to dig deep into it.
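Putting the two flags together, a hedged sketch of the workflow described above (the provider and model names below are illustrative placeholders; pick ones that the list commands actually report on your machine):

```shell
# First inspect which models the g4f providers expose.
pytgpt gpt4free list models

# Then pass one of them via --model. Per the try-error mechanism
# described above, pytgpt will surface an error at generation time
# if the provider/model combination is invalid.
# "Aura" and "gpt-3.5-turbo" are placeholders, not a verified pairing.
pytgpt interactive --provider Aura --model gpt-3.5-turbo
```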
Hi @Simatwa ,
Thank you so much for developing a great tool.
Can you please add some more help on how to change the default model from the CLI?
I am mainly interested in gpt4free usage.