langflow-ai / langflow

Langflow is a low-code app builder for RAG and multi-agent AI applications. It’s Python-based and agnostic to any model, API, or database.
http://www.langflow.org
MIT License

fix: CrewAI-based flows with no extra openai #4683

Closed · erichare closed this 6 days ago

erichare commented 6 days ago

This pull request creates `LLM()` and `Tool()` objects that are CrewAI-compatible, built from the components in the CrewAI-based flow (including the OpenAI API key, etc.).

@ogabrielluiz this fixes the CrewAI issues in my experience. The root problem is that we were (are) attempting to use tools and agent LLMs that are not in the format that CrewAI expects. This PR attempts to build the CrewAI versions from the specifications of the input tool and LLM. It's not perfect, as you'll see, but I think it might be a better short-term solution than setting the env var like you pointed out. Let me know what you think. CC @NadirJ; note I did upgrade CrewAI to the latest release with this, but it is also compatible with the prior release.

codspeed-hq[bot] commented 6 days ago

CodSpeed Performance Report

Merging #4683 will degrade performance by 26.61%

Comparing fix-crewai-update (9c901c6) with main (3188517)

Summary

⚡ 2 improvements ❌ 1 regression ✅ 12 untouched benchmarks

:warning: Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | main | fix-crewai-update | Change |
| --- | --- | --- | --- |
| test_successful_run_with_input_type_text | 217.6 ms | 296.5 ms | -26.61% |
| test_successful_run_with_output_type_any | 271.2 ms | 168 ms | +61.44% |
| test_successful_run_with_output_type_debug | 295.5 ms | 216.9 ms | +36.21% |
erichare commented 6 days ago

> I like this.
>
> What do you say we put this logic in the `BaseCrewComponent`?
>
> The LLM can be passed to the agent or to the Crew. Also, do you know if they support all the LLMs we support?

This is the main problem right now, I think. I didn't see an easy way to find the "api_key" attribute on an arbitrary embeddings model (the LangChain version). For example, in OpenAI this is `openai_api_key`, but for other providers it's surely different. I was hoping there was a generic `api_key` attribute or some lookup to determine it; if you have any ideas there, that would be great. I also pass arbitrary kwargs to the `LLM()` function, but that has a specific `api_key` parameter that needs to be filled in, so I'm sure there are models that won't work as is.

As for putting it in `BaseCrewComponent`, I think that's a great idea! I can update that.

erichare commented 6 days ago

@ogabrielluiz not sure if this would be a suitable solution in general, but we could do something like:

    def _find_api_key(self, model):
        """Attempts to find the API key attribute for a LangChain LLM model instance.

        Args:
            model: LangChain LLM model instance.

        Returns:
            The API key if found, otherwise None.
        """
        # Define the possible API key attribute names
        key_patterns = ["api_key", "key", "token"]

        # Iterate over the model attributes
        for attr in dir(model):

            # Check if the attribute name contains any of the key patterns
            if any(pattern in attr.lower() for pattern in key_patterns):
                value = getattr(model, attr, None)

                # Check if the value is a non-empty string
                if isinstance(value, str) and value:
                    return value

        return None
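As a quick sanity check, the heuristic above can be exercised standalone against a stand-in object. `find_api_key` below is a module-level copy of the method, and `FakeModel` is a made-up test double, not a LangChain class:

```python
def find_api_key(model):
    """Module-level copy of the heuristic above, for a standalone check."""
    key_patterns = ["api_key", "key", "token"]
    for attr in dir(model):
        # Any attribute whose name contains one of the patterns is a candidate.
        if any(pattern in attr.lower() for pattern in key_patterns):
            value = getattr(model, attr, None)
            # Only accept non-empty string values as keys.
            if isinstance(value, str) and value:
                return value
    return None


class FakeModel:
    """Stand-in for a LangChain model instance; not a real class."""
    def __init__(self):
        self.temperature = 0.0
        self.openai_api_key = "sk-example"


print(find_api_key(FakeModel()))  # → sk-example
```

Note the false-positive risk discussed below: any non-empty string attribute whose name happens to contain "key" or "token" would also match.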
ogabrielluiz commented 6 days ago

> @ogabrielluiz not sure if this would be a suitable solution in general, but we could do something like:
>
>     def _find_api_key(self, model):
>         """Attempts to find the API key attribute for a LangChain LLM model instance.
>
>         Args:
>             model: LangChain LLM model instance.
>
>         Returns:
>             The API key if found, otherwise None.
>         """
>         # Define the possible API key attribute names
>         key_patterns = ["api_key", "key", "token"]
>
>         # Iterate over the model attributes
>         for attr in dir(model):
>
>             # Check if the attribute name contains any of the key patterns
>             if any(pattern in attr.lower() for pattern in key_patterns):
>                 value = getattr(model, attr, None)
>
>                 # Check if the value is a non-empty string
>                 if isinstance(value, str) and value:
>                     return value
>
>         return None

I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

erichare commented 6 days ago

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

I could be wrong about this, but I believe one of the reasons for upgrading crewai was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct, it may not be an easy option. I'll double-check that, though.

ogabrielluiz commented 6 days ago

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

> I could be wrong about this, but I believe one of the reasons for upgrading crewai was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct, it may not be an easy option. I'll double-check that, though.

I see... Well, I guess for now we focus on the short-term solution. Maybe if the user passes a model that we haven't added support for yet, raise an error.
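The "raise an error for unsupported models" idea could look something like the sketch below. The provider allowlist and class-name matching are illustrative assumptions, not an agreed design:

```python
# Sketch of the "raise an error for models we haven't added support for"
# suggestion above. The set of supported LangChain class names is an
# illustrative assumption, not an exhaustive list.
SUPPORTED_MODEL_CLASSES = {"ChatOpenAI", "ChatAnthropic", "ChatGroq"}


def ensure_supported(model):
    """Raise a clear error if this model type hasn't been wired up for CrewAI."""
    cls_name = type(model).__name__
    if cls_name not in SUPPORTED_MODEL_CLASSES:
        msg = f"Model type {cls_name!r} is not yet supported in CrewAI components."
        raise ValueError(msg)
    return model
```

Failing loudly at build time would at least surface the gap to the user instead of producing a silently misconfigured CrewAI agent.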

erichare commented 6 days ago

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

> I could be wrong about this, but I believe one of the reasons for upgrading crewai was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct, it may not be an easy option. I'll double-check that, though.

> I see... Well, I guess for now we focus on the short-term solution. Maybe if the user passes a model that we haven't added support for yet, raise an error.

I just updated the PR to factor the code a little better. I search for a `key` or `token` partial match in the attribute list. This works with the ones I've tested, and I believe it works with all of the models we support that require a key, BUT obviously it's not perfect by any stretch and may cause a false positive. With that said, I think it's better than the non-working solution from before (which really only works with OpenAI anyway).

If only there were some sort of `from_langchain()` method for an agent LLM, like the one they have for tools (which I'm using in this PR). But alas, I don't see one: https://github.com/crewAIInc/crewAI/blob/main/src/crewai/llm.py

ogabrielluiz commented 6 days ago

They use litellm, which could be OK to integrate later. @phact has experience with it.
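For reference, litellm identifies models with "provider/model" strings, so a later integration might map a LangChain class name to that format. The mapping table below is an illustrative assumption, not an exhaustive or agreed-upon list:

```python
# Sketch of mapping a LangChain class name plus model name to a
# litellm-style "provider/model" identifier. The mapping table is an
# illustrative assumption, not an exhaustive list of providers.
PROVIDER_PREFIXES = {
    "ChatOpenAI": "openai",
    "ChatAnthropic": "anthropic",
    "ChatGroq": "groq",
}


def to_litellm_model(lc_class_name, model_name):
    """Build a litellm "provider/model" string, or fail for unknown providers."""
    prefix = PROVIDER_PREFIXES.get(lc_class_name)
    if prefix is None:
        raise ValueError(f"No litellm mapping for {lc_class_name!r}")
    return f"{prefix}/{model_name}"


print(to_litellm_model("ChatAnthropic", "claude-3-haiku"))  # → anthropic/claude-3-haiku
```

A lookup like this would sidestep the per-provider API-key attribute problem discussed above, since litellm resolves provider credentials from the model string and environment.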