Closed: kqlio67 closed this 1 day ago
Hi! I still think the `airforce` and `llm` providers should be kept as two separate providers rather than one replacing the other. Based on tests of version 0.3.3.4 and every subsequent one: with the `llm` settings there are far fewer working text models, while with `airforce` on version 0.3.3.4 there are many more working text models, even though they are not listed there.

Also, `Chatgpt4Online` and `FreeNetfly` have not been removed from `models.py`: the providers themselves were removed, but this file still calls them. Those references should be removed as well.
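The stale references described here can be caught mechanically. A minimal sketch (the provider names come from this thread; the structure is simplified and not the actual g4f `models.py`):

```python
# Providers whose modules were deleted but may still be referenced in models.py.
REMOVED_PROVIDERS = {"Chatgpt4Online", "FreeNetfly"}

def prune_providers(best_providers: list[str]) -> list[str]:
    """Drop references to providers that no longer exist in the codebase."""
    return [p for p in best_providers if p not in REMOVED_PROVIDERS]
```

For example, `prune_providers(["Blackbox", "FreeNetfly"])` keeps only `["Blackbox"]`.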
Thank you for your contribution to the project! This pull request introduces significant updates including new models and providers, enhancements to existing providers, and necessary maintenance changes.
**New Models Added:**
`o1-preview`, `o1-mini`, `llama-3.2-90b`, `codellama-34b`, `mixtral-7b`, `mistral-tiny`, `claude-3.5-haiku`, `qwen-2.5-72b`, `flux-dev`, `dalle-3`, `recraft-v3`

**New Providers Added:**
`Copilot`, `LLMPlayground`, `AmigoChat`

**New Base Providers:**
`Gryphe`, `databricks`, `anthracite-org`, `ai21`, `llmplayground.net`

**Provider Improvements:**
- Enhanced conversation handling for `DDG` and `Copilot`.
- Fixed system prompts in `ChatGptEs` and `Blackbox`.
- Improved message processing in `LLMPlayground`.
- Updated `Liaobots` and `DeepInfra` with the latest models.
- Added `Qwen2.5` and `Nemotron` support to multiple providers.

**Maintenance:**
- Moved `PollinationsAI` to `needs_auth/`.
- Moved `AIUncensored` to `not_working/`.
- Removed the deprecated `Allyfy` provider.
- Renamed `Airforce` to `LLMPlayground`.

**Breaking Changes:**
- Removed several outdated models from `Liaobots`.
- Replaced `Airforce` with `LLMPlayground` in the provider list.

The code changes are well-structured, and the addition of new models and providers seems to follow the existing conventions. The use of constants for models and personas enhances maintainability.
Once again, thank you for your contribution! Your efforts help improve the project significantly.
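The constants praised above might be organized along these lines (an illustrative sketch; the class fields, model groupings, and prompt text are assumptions, not the actual provider code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    """Illustrative persona constant; the field names are assumptions."""
    name: str
    system_prompt: str

# Constant tables keep model and persona definitions in one auditable place.
TEXT_MODELS = ("o1-preview", "o1-mini", "llama-3.2-90b", "qwen-2.5-72b")
IMAGE_MODELS = ("flux-dev", "dalle-3", "recraft-v3")

PERSONAS = {
    "default": Persona("default", "You are a helpful assistant."),
}
```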
New models: `o1-preview`, `o1-mini`, `llama-3.2-90b`, `codellama-34b`, `mixtral-7b`, `mistral-tiny`, `claude-3.5-haiku`, `qwen-2.5-72b`, `flux-dev`, `dalle-3`, `recraft-v3`

New model identifiers: `meta-llama/Meta-Llama-3.2-90B-Vision-Instruct-Turbo`, `codellama/CodeLlama-34b-Instruct-hf`, `meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo`, `mistralai/Mistral-7B-Instruct-v0.3`, `mistralai/mistral-tiny`, `deepseek-ai/deepseek-llm-67b-chat`, `databricks/dbrx-instruct`, `anthracite-org/magnum-72b`, `ai21/jamba-1-5-mini`

New providers: `Copilot`, `LLMPlayground`, `AmigoChat`
- Enhanced `DDG` and `Copilot` with better conversation handling.
- Added `Qwen2.5` and `Nemotron` support to multiple providers.
- Fixed `ChatGptEs` and `Blackbox`.
- Improved `LLMPlayground`.
- Moved `PollinationsAI` to `needs_auth/`.
- Moved `AIUncensored` to `not_working/`.
- Removed the deprecated `Allyfy` provider.
- Renamed `Airforce` to `LLMPlayground`.
- Removed several outdated models from `Liaobots`.
- Replaced `Airforce` with `LLMPlayground` in the provider list.

Thank you for the extensive updates and the addition of new models and providers. This pull request is very comprehensive and introduces significant improvements to the project. The detailed changes and additions will enhance the functionality and usability of the tool significantly.
Recommendations:
Overall, this pull request is well-executed and will bring valuable enhancements to the project. Keep up the good work!
Thank you for contributing to the project!
g4f copilot review
Thanks for the significant contribution to the gpt4free project! This pull request introduces a substantial number of improvements, adding new models and providers and enhancing existing ones. The changes are well-documented, and the commit messages are clear and concise.
Here are some observations and suggestions:
**Positive Aspects:**

- The broad expansion of models and providers significantly increases what users can access through `gpt4free`.
- The updates to providers such as `DDG` and `Copilot` are valuable additions.
- Moving providers to `needs_auth/` and `not_working/`, and removing deprecated providers, is good housekeeping.

**Areas for Improvement/Questions:**

- **AmigoChat Model Mapping:** The `AmigoChat` provider now uses a large dictionary, `MODELS`, for model mapping. While functional, consider whether a more structured approach (e.g., a class or a separate configuration file) might be more maintainable as the number of models grows.
- **AmigoChat Error Handling:** The `AmigoChat` provider mentions balance checks. It might be beneficial to add more robust error handling and informative error messages for these situations.
- **Airforce (LLM Playground) Chunking:** The message chunking implemented in `Airforce` seems potentially useful in other providers. Consider whether this logic could be abstracted into a reusable helper function for broader application.

**Minor Suggestions:**

- The use of `re.sub` in `Airforce` for filtering: depending on the complexity of the patterns to be removed, this might be a candidate for optimization using compiled regular expressions.

Overall, this is a high-quality pull request that significantly enhances the project. Addressing the minor suggestions would further improve its clarity and maintainability. I recommend merging after addressing these points.
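The compiled-regex suggestion might look like this (a sketch only; the patterns here are illustrative, not the actual `Airforce` filters):

```python
import re

# Illustrative cleanup rules; the real Airforce filter patterns differ.
_RULES = [
    (re.compile(r"</?assistant>"), ""),  # strip stray role tags
    (re.compile(r"[ \t]{2,}"), " "),     # collapse runs of spaces/tabs
]

def filter_response(text: str) -> str:
    # Compiling once at import time avoids re-parsing the patterns on
    # every streamed chunk, unlike repeated bare re.sub calls.
    for pattern, repl in _RULES:
        text = pattern.sub(repl, text)
    return text.strip()
```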
Thank you for your contribution to the project! This pull request implements significant updates, adding new models and providers, enhancing existing ones, and performing various maintenance tasks. These changes will undoubtedly enhance the functionality of the codebase.
- The addition of new models such as `o1-preview`, `o1-mini`, `llama-3.2-90b`, and others is a great enhancement. This gives users access to the latest AI capabilities.
- The new providers `Copilot`, `LLMPlayground`, and `AmigoChat` expand the availability of tools for users, which is a fantastic addition.
- The improved conversation handling in `DDG` and `Copilot`, along with other fixes and updates to various providers, showcases a commitment to maintaining high-quality interactions.
- The removal of outdated models from `Liaobots` might affect users relying on those models. Adequate documentation and communication regarding this change will be essential to mitigate any issues for current users.

The code changes are well-organized, and the implementation seems to follow good practices. The newly added function `split_message` looks beneficial for breaking down large messages, promoting cleaner and more efficient processing.
Overall, this pull request demonstrates excellent work in enhancing the project significantly. The community will surely benefit from these updates. Thank you once again for your contribution!
Best, g4f copilot
# Review for Pull Request: feat(g4f): Major provider updates and new model support
## Summary
Thank you for your contribution to the project! This pull request introduces significant updates to the providers and adds support for several new models, which is a fantastic enhancement for our users.
## Core Changes
- **New Models Added:**
- `o1-preview`
- `o1-mini`
- `llama-3.2-90b`
- `codellama-34b`
- `mixtral-7b`
- `mistral-tiny`
- `claude-3.5-haiku`
- `qwen-2.5-72b`
- `flux-dev`
- `dalle-3`
- `recraft-v3`
- **New Providers Added:**
- `Copilot`
- `LLMPlayground`
- `AmigoChat`
- **Base Providers Added:**
- `Gryphe`
- `databricks`
- `anthracite-org`
- `ai21`
- `llmplayground.net`
## Provider Improvements
- Enhanced conversation handling for `DDG` and `Copilot`.
- Fixed system prompts in `ChatGptEs` and `Blackbox`.
- Improved message processing in `Airforce`.
- Updated `Liaobots` and `DeepInfra` with the latest models.
- Added support for `Qwen2.5` and `Nemotron` across multiple providers.
## Maintenance
- Moved `PollinationsAI` to `needs_auth/`.
- Moved `AIUncensored` to `not_working/`.
- Removed deprecated `Allyfy` provider.
- Updated model mappings and provider configurations.
## Breaking Changes
- Removed several outdated models from `Liaobots`.
## Code Review
The code changes look well-structured and follow the project's coding standards. The addition of the `split_message` function in `Airforce.py` is a great improvement for handling long messages. The updates in `AmigoChat.py` and other provider files are clear and enhance functionality.
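The `split_message` helper is mentioned but not shown here; a minimal sketch of what such a chunking function might look like (the actual implementation in `Airforce.py` may differ in signature and limits):

```python
def split_message(message: str, max_length: int = 1000) -> list[str]:
    """Split a long message into chunks, preferring to break at whitespace."""
    chunks = []
    while len(message) > max_length:
        # Look for the last space inside the window so words stay intact.
        split_at = message.rfind(" ", 0, max_length)
        if split_at <= 0:
            split_at = max_length  # no usable space: hard-split at the limit
        chunks.append(message[:split_at].rstrip())
        message = message[split_at:].lstrip()
    if message:
        chunks.append(message)
    return chunks
```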
### Suggestions
- Consider adding unit tests for the new models and providers to ensure they work as expected.
- Documentation updates may be necessary to reflect the new models and providers added.
Overall, this is a solid contribution that enhances the project.
@TheFirstNoob I didn't quite grasp the issue at first and renamed `Airforce` to `LLMPlayground`, but after seeing your comments, I reverted to the original `Airforce` name to maintain consistency.
I've implemented a straightforward solution by removing the `"origin": "https://llmplayground.net"` parameter. This resolved the issue with the non-working text models.
Regarding splitting the providers, I don't see a practical benefit in maintaining two separate providers since llmplayground.net is essentially just a GUI interface that utilizes api.airforce backend. Given that llmplayground.net's API is fundamentally dependent on api.airforce, the more elegant solution was to simply remove the origin parameter, which has effectively addressed the issue you reported.
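The change described amounts to dropping the origin entry from the request headers. A schematic sketch (the endpoint path, payload shape, and function name are assumptions for illustration, not the actual Airforce provider code):

```python
import json
import urllib.request

API_URL = "https://api.airforce/chat/completions"  # hypothetical path

headers = {
    "accept": "application/json",
    "content-type": "application/json",
    # "origin": "https://llmplayground.net",  # removed: sending this header
    # broke most text models, as discussed above
}

def build_request(model: str, messages: list[dict]) -> urllib.request.Request:
    """Build the POST request without the problematic origin header."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(API_URL, data=payload, headers=headers, method="POST")
```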
@kqlio67 Thanks for the answer! Let me explain in more detail.

The file version from 0.3.3.4 works with models that are no longer listed on the site itself; they were listed there earlier, but they still function, and quite quickly. That list of models is given above in the bug report, and those models must be specified in `models.py`. At the same time, the LLM version, that is, everything after 0.3.3.4, behaves very strangely. With `3.5-turbo` and the models listed in the bug report, I get an error that "such a model is not supported" (since we check against the models listed in `/models/v1`). And all the other models, when requested, freeze for me and wait endlessly for a response.

For this reason I suggest separating these two file versions, since that gives greater availability of both old and new models. Or a different approach is required.
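The model check described above, where a provider rejects any model not present in its `/models/v1` listing, could be sketched like this (illustrative names; the real provider logic may differ):

```python
def resolve_model(requested: str, available: set[str], default: str) -> str:
    """Return the requested model if the API lists it, otherwise fail clearly."""
    if not requested:
        return default  # empty request falls back to the provider default
    if requested in available:
        return requested
    raise ValueError(
        f"Model {requested!r} is not supported; choose one of {sorted(available)}"
    )
```

Raising early with the list of valid choices avoids the endless-wait behavior described above for unknown models.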
Core changes:
- New models: `o1-preview`, `o1-mini`, `llama-3.2-90b`, `codellama-34b`, `mixtral-7b`, `mistral-tiny`, `claude-3.5-haiku`, `qwen-2.5-72b`, `flux-dev`, `dalle-3`, `recraft-v3`
- New providers: `Copilot`, `LLMPlayground`, `AmigoChat`
- New base providers: `Gryphe`, `databricks`, `anthracite-org`, `ai21`, `llmplayground.net`

Provider improvements:
- Enhanced `DDG` and `Copilot` with better conversation handling
- Fixed `ChatGptEs` and `Blackbox` (#2393), (https://github.com/xtekky/gpt4free/issues/2433#issue-2695811324)
- Improved `Airforce` (#2360)
- Updated `Liaobots` and `DeepInfra` with latest models
- Added `Qwen2.5` and `Nemotron` support to multiple providers

Maintenance:
- Moved `PollinationsAI` to `needs_auth/`
- Moved `AIUncensored` to `not_working/`
- Removed `Allyfy` provider

Breaking changes:
- Removed several outdated models from `Liaobots`