yoheinakajima / babyagi

MIT License · 19.87k stars · 2.6k forks

Introducing LangChain to base babyagi #150

Closed MalikMAlna closed 1 year ago

MalikMAlna commented 1 year ago

Been seeing a lot of YouTubers adding LangChain agents to give babyagi more functionality for its task completion. This issue is similar to #96, but I wanted to go a bit further by adding LangChain throughout the whole architecture, along with more than just search agents, like optional APIs for more modular functionality with other toolsets.

Would that be alright?

Just wanted to gauge interest from the community and @yoheinakajima before moving forward on those more heavy-set changes to the code.

Ryangr0 commented 1 year ago

Honestly, if you want to go this route, I would say let's start getting some abstraction in here. I'd be super interested in a PR regardless of whether it gets merged or not. I've been messing around with it for a couple of hours as well, implementing Weaviate instead of Pinecone. I'd love to see a more general solution for this.

Just a thought: Is python the way to go here?

moover-shaker commented 1 year ago

@Ryangr0 @MalikMAlna - I would be interested in this and can contribute some development. I have good working knowledge of LangChain but am a newbie on babyagi.

Ryangr0 commented 1 year ago

> @Ryangr0 @MalikMAlna - I would be interested in this and can contribute some development. I have good working knowledge of LangChain but am a newbie on babyagi.

I have been working on this the past few days and thinking about it since I saw this repository.

https://github.com/webgrip/PuttyGPT/tree/master/Eve This is how far I've gotten so far. You're free to take a look through the scripts or the readme for an explanation of what's in there.

I have to say I took the liberty to change a lot; I refactored quite a bit, and the project may be in a broken state, even on master. But the ideas are there, and so is most of the implementation.

moover-shaker commented 1 year ago

@Ryangr0 - Briefly looked at your code and it’s a good start. Also like your idea to start with some abstraction and architecture discussions. Requirements discussion would also be very helpful before doing too much more coding. Also check out BabyAgi streamlit using LangChain. Oh, and, BTW, get some sleep 😴

MalikMAlna commented 1 year ago

Hey guys, I appreciate the enthusiasm. For context, I’m familiar with the babyagi code and commit process. However, I don’t have a lot of experience with LangChain.

That being said, I do know that the creator of the repo wants to keep babyagi relatively lightweight and low on huge breaking changes. So your example, @Ryangr0, while interesting, is a lot compared to what I think makes sense for the project.

I think the easiest way for us to coordinate on this change is to set up a PR and pair program our way through the changes, starting small with one or two of the task functions in a way that doesn’t break much backwards compatibility. I’ll get a PR set up with some changes either later today or sometime tomorrow. I'll drop it here, and feel free to provide feedback on the implementation of LangChain that I go with based on your experience, @moover-shaker and @Ryangr0.

I think it’ll be really neat to work with you both on this change, and I look forward to doing so!

Ryangr0 commented 1 year ago

I completely agree, this should probably stay a stepping stone like it is for a lot of us. I love this collaborative spirit. Just for context, all the code that pertains to babyagi that I refactored is in here. Maybe you can get some ideas from there.

MalikMAlna commented 1 year ago

Yeah @Ryangr0, I’m probably gonna use that as a reference, likely setting up a relatively simple change similar to something in your implementation. From there, we can discuss it further on the PR to see if we’d need to refactor or if it’s good to be reviewed and merged.

moover-shaker commented 1 year ago

Also, @MalikMAlna and @Ryangr0, take a look at the Babyagi streamlit repo I referenced above. The streamlit part is not so interesting, but he has a full babyagi implemented in LangChain, and fairly nicely implemented. I tried to find a way to contact him but don’t have any (@dory111111 doesn’t seem to work).

MalikMAlna commented 1 year ago

I looked it over @moover-shaker, and yeah, long-term a pared-down version of that without streamlit would be a good implementation. If you can’t @ dory, I’d try submitting a PR and/or Issue on their repo and see if they respond. That usually gets a dev’s attention faster than an email.

MalikMAlna commented 1 year ago

Okay, I got a PR up. I started with something very simple: adding LangChain to the requirements.txt and separating out the main loop into its own function that can run any LangChain implementation we end up going with as a group.

It's not much, but it's a start and it doesn't break anything. Let me know what you all think as I'm still new to LangChain and not 100% sure where to start with a full implementation.
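For anyone curious what that separation could look like, here is a minimal sketch; the function and parameter names are made up for illustration, and this is not the actual PR code:

```python
# Hypothetical sketch: babyagi's main loop pulled into a function that
# delegates task execution to any callable (e.g. a LangChain chain later).
from collections import deque
from typing import Callable, Deque, Dict, List

def run_task_loop(
    objective: str,
    initial_task: str,
    execute_task: Callable[[str, str], str],
    max_iterations: int = 3,
) -> List[Dict[str, str]]:
    """Run the execute loop, delegating each task to the supplied backend."""
    tasks: Deque[str] = deque([initial_task])
    results: List[Dict[str, str]] = []
    for _ in range(max_iterations):
        if not tasks:
            break
        task = tasks.popleft()
        result = execute_task(objective, task)  # could be any LC implementation
        results.append({"task": task, "result": result})
    return results

# Any backend with the same signature can be dropped in:
def echo_backend(objective: str, task: str) -> str:
    return f"[{objective}] did: {task}"

print(run_task_loop("demo", "write a plan", echo_backend))
```

The point of the indirection is only that `execute_task` becomes a seam where a LangChain-based implementation could later be swapped in without touching the loop itself.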

Ryangr0 commented 1 year ago

If I'm being completely honest, I think my repo had a good thing going on in main.py and the related classes/files. You're free to copy it to here if you want, or just gain some inspiration, whatever. That's the result of me struggling with ChatGPT's help for almost 3 days now. In a bit I'll have it analyze all the code and write a summary of what I'm doing. Lol, I can't believe this is reality now.

MalikMAlna commented 1 year ago

Yeah, I'm mostly leaning toward your implementation @Ryangr0. I think it's the cleanest, and you've actually given permission to use it in the code here.

I would appreciate the summary though. I have an idea as to what it's doing, but documenting each part would really help, so I think I'm gonna wait on that for right now before moving forward.

MalikMAlna commented 1 year ago

That being said, I have also been reading some of your toddleragi files and those seem like they could work as well for a long-term full implementation.

Ryangr0 commented 1 year ago

Be aware though that even right now, in my local environment, it's not fully working. I got it working, then it broke, then I refactored, then it broke again; it's one of those things. But every moment I'm getting closer to stability. I'm still having a hard time with the concepts of chains, agents, and tools. I think a lot of people use a lot of words completely interchangeably, and it makes it hard to know what's going on. Especially with Python's wonky **kwargs stuff lol, I've never worked with Python for longer than a few minutes before. Fun experience, for sure.

moover-shaker commented 1 year ago

One (of several) nice things about your approach @Ryangr0 is a more complete integration of LangChain. The streamlit repo was really just using LangChain as a conduit to OpenAI instead of using the nice LangChain (LC) abstraction over DOZENS of AI providers (Cohere, AI21, etc.).

If I can help a little: **kwargs is just what Python calls keyword-argument “unpacking”. If you’re familiar with C++, it’s somewhat like dereferencing a pointer, or like a spread operator in JavaScript. The analogy is not perfect, though. The RealPython site has a good explanation here.
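To illustrate the point about **kwargs, a tiny example of keyword-argument packing and unpacking (names are just for illustration):

```python
# **kwargs packs any extra keyword arguments into a dict on the way in,
# and ** unpacks a dict back into keyword arguments on the way out.
def make_agent(name, **kwargs):
    # kwargs is a plain dict of whatever extra options the caller passed
    return {"name": name, **kwargs}

options = {"temperature": 0.0, "verbose": True}
agent = make_agent("babyagi", **options)   # dict unpacked into kwargs
print(agent)  # {'name': 'babyagi', 'temperature': 0.0, 'verbose': True}
```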

We haven’t started our abstraction discussion yet (I guess we are now 😀) but I was envisioning that we could use LC’s agent as a “router” or “supervisor” agent to control its tools. We may need to write custom LC agents or tools to implement BabyAGI (this is not hard, I’ve written several).

And, at another level, we may want to even abstract LC away from the current BabyAGI to make an easier transition, and also since some users might find LC too heavyweight (I see that AutoGPT started with LC but stripped it out; it might be nice to know why).

@MalikMAlna - it would be nice to have a good place to have these discussions and store resultant documents on GitHub. Any suggestions or thoughts?

MalikMAlna commented 1 year ago

Not at the moment @moover-shaker, but I was curious about a couple things you mentioned.

> We haven’t started our abstraction discussion yet (I guess we are now 😀) but I was envisioning that we could use LC’s agent as a “router” or “supervisor” agent to control its tools. We may need to write custom LC agents or tools to implement BabyAGI (this is not hard, I’ve written several).

Could you provide an example? Again, still pretty new myself.

> And, at another level, we may want to even abstract LC away from the current BabyAGI to make an easier transition, and also since some users might find LC too heavyweight (I see that AutoGPT started with LC but stripped it out; it might be nice to know why).

I would be in favor of taking this approach. I think I can speak for the community in saying that the architecture should be able to stand on its own, independent of any given framework such as LC, while opting in or out of LC should be fairly effortless for everyone either way.

moover-shaker commented 1 year ago

- Custom tools
- Custom agents
- Use agent just as a router
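As a framework-free sketch of the "agent as router" idea behind those links (all names here are hypothetical; LangChain agents use an LLM to pick the tool rather than the keyword matching shown):

```python
# Minimal "router" agent: picks a registered tool whose keywords match the
# task, then runs it. This only illustrates the control flow of a
# supervisor agent dispatching to tools.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Tool:
    name: str
    keywords: List[str]
    func: Callable[[str], str]

@dataclass
class RouterAgent:
    tools: List[Tool] = field(default_factory=list)

    def run(self, task: str) -> str:
        # Route to the first tool whose keywords appear in the task text
        for tool in self.tools:
            if any(k in task.lower() for k in tool.keywords):
                return tool.func(task)
        return f"no tool matched: {task}"

agent = RouterAgent(tools=[
    Tool("search", ["search", "find"], lambda t: f"searched for {t!r}"),
    Tool("write", ["write", "draft"], lambda t: f"drafted {t!r}"),
])
print(agent.run("search for new AI papers"))
```

In the real LC setup, the routing decision would be made by the LLM from each tool's description rather than by substring matching.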

Ryangr0 commented 1 year ago

> One (of several) nice things about your approach @Ryangr0 is a more complete integration of LangChain. The streamlit repo was really just using LangChain as a conduit to OpenAI instead of using the nice LangChain (LC) abstraction over DOZENS of AI providers (Cohere, AI21, etc.).
>
> If I can help a little: **kwargs is just what Python calls keyword-argument “unpacking”. If you’re familiar with C++, it’s somewhat like dereferencing a pointer, or like a spread operator in JavaScript. The analogy is not perfect, though. The RealPython site has a good explanation here.
>
> We haven’t started our abstraction discussion yet (I guess we are now 😀) but I was envisioning that we could use LC’s agent as a “router” or “supervisor” agent to control its tools. We may need to write custom LC agents or tools to implement BabyAGI (this is not hard, I’ve written several).
>
> And, at another level, we may want to even abstract LC away from the current BabyAGI to make an easier transition, and also since some users might find LC too heavyweight (I see that AutoGPT started with LC but stripped it out; it might be nice to know why).
>
> @MalikMAlna - it would be nice to have a good place to have these discussions and store resultant documents on GitHub. Any suggestions or thoughts?

It is my opinion that it wouldn't make much sense to completely refactor this library to use a framework. I think I get what you're going for, but the actual "recursive" loop that was demonstrated in this project isn't technically such a difficult concept to implement. The idea was 👌 though. That's why I refactored a little bit of babyagi and named it toddleragi (as well as wanting to learn more about the Weaviate Python client), so that I could then work on PuttyGPT by implementing that loop.

Anyway, the point is, it's up to you. I think this repository would make a great example of some small use cases with a minimal implementation to get people in the door. ...That's not to say it can't look nice :)

I have many ideas. But I need to go to sleep soon. I'm going to write them down in my repo so I can keep track of it in one place and close my eyes.

Am I saying something crazy when I say that I just need to work on this until the script can literally rewrite itself to be better, and I don't even need to commit anything anymore? Make new scripts, make new agents, read papers, read blogs, scrape APIs, automatically keep track of new developments in the field, on Hugging Face, in Weaviate, build long- and short-term memory, and more. I feel like I'm almost there...

MalikMAlna commented 1 year ago

Hey @Ryangr0, I decided to take the code from the toddleragi work you were doing as my personal reference for a full implementation.

I managed to get it working up to the point where you left off on your todo for the query method in context_agent.py. Here's what it looks like in the babyagi code in this branch I set up: https://github.com/MalikMAlna/babyagi/tree/langchain-full-implementation

I left most everything in the extensions folder aside from the toddleragi file, which I left in root. Also, I left out Weaviate for now because I wanted to focus on Pinecone, since I'm more familiar with that vector DB.

Let me know if you have any thoughts.

moover-shaker commented 1 year ago

@Ryangr0 - BEFORE READING THIS, GET SOME SLEEP. When you wake up and have some breakfast and coffee, read on…

So… are you proposing a whole new open-source project “inspired by” BabyAGI but not using any of its code, based fully on LC? That is certainly one alternative. I would like to see what @MalikMAlna and others think about that.

Another alternative is to break out an AiEngine layer with an AiEngineFactory (both abstract classes) and have two implementations of AiEngine:

  1. AiLangChainEngine - based on a pure LC implementation (à la your Eve (great name, BTW))
  2. AiNativePythonEngine - based on the (slightly refactored) current BabyAGI code, all in Python with no heavyweight 3rd-party libraries.

AiEngineFactory (which may actually not be abstract, not sure) would have a createAiEngine(aiEngineName) method that would create one or the other concrete AI engine class. Not sure exactly where the abstraction would be, but one option is to have something like an AiEngine.runLoop() method (though this is TBD).

This has two advantages:

  1. Keeps this in the existing BabyAGI community along with its traction and developers.
  2. Allows switching between a lighter engine with less functionality and a heavier engine with more functionality.

It does have (at least) the disadvantage of an additional abstraction layer, which is not always justified.
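A rough sketch of what that factory split might look like (class names from the proposal above; the method names and bodies are placeholder guesses, nothing here is implemented yet):

```python
# Sketch of the proposed AiEngine / AiEngineFactory split. The engine
# bodies are stubs standing in for a LangChain loop and the current
# native babyagi loop respectively.
from abc import ABC, abstractmethod

class AiEngine(ABC):
    @abstractmethod
    def run_loop(self, objective: str) -> str: ...

class AiLangChainEngine(AiEngine):
    def run_loop(self, objective: str) -> str:
        return f"LC engine running: {objective}"      # would call LangChain

class AiNativePythonEngine(AiEngine):
    def run_loop(self, objective: str) -> str:
        return f"native engine running: {objective}"  # current babyagi loop

class AiEngineFactory:
    _engines = {"langchain": AiLangChainEngine, "native": AiNativePythonEngine}

    @classmethod
    def create_ai_engine(cls, engine_name: str) -> AiEngine:
        return cls._engines[engine_name]()

engine = AiEngineFactory.create_ai_engine("native")
print(engine.run_loop("solve world hunger"))
```

The factory here is a plain registry dict rather than an abstract class, which is one way to resolve the "may actually not be abstract" question.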

As for your sprint goals, they are highly commendable and I really think we can get there soon. But it’s a bit ambitious for the first step. It would be great to see the ABSOLUTE MINIMUM running version of Eve first so we can clone it and start to help you code together.

moover-shaker commented 1 year ago

@MalikMAlna - that’s a good approach.

Ryangr0 commented 1 year ago

> @Ryangr0 - BEFORE READING THIS, GET SOME SLEEP. When you wake up and have some breakfast and coffee, read on…
>
> So… are you proposing a whole new open-source project “inspired by” BabyAGI but not using any of its code, based fully on LC? That is certainly one alternative. I would like to see what @MalikMAlna and others think about that.
>
> Another alternative is to break out an AiEngine layer with an AiEngineFactory (both abstract classes) and have two implementations of AiEngine:
>
> 1. AiLangChainEngine - based on a pure LC implementation (à la your Eve (great name, BTW))
> 2. AiNativePythonEngine - based on the (slightly refactored) current BabyAGI code, all in Python with no heavyweight 3rd-party libraries.
>
> AiEngineFactory (which may actually not be abstract, not sure) would have a createAiEngine(aiEngineName) method that would create one or the other concrete AI engine class. Not sure exactly where the abstraction would be, but one option is to have something like an AiEngine.runLoop() method (though this is TBD).
>
> This has two advantages:
>
> 1. Keeps this in the existing BabyAGI community along with its traction and developers.
> 2. Allows switching between a lighter engine with less functionality and a heavier engine with more functionality.
>
> It does have (at least) the disadvantage of an additional abstraction layer, which is not always justified.
>
> As for your sprint goals, they are highly commendable and I really think we can get there soon. But it’s a bit ambitious for the first step. It would be great to see the ABSOLUTE MINIMUM running version of Eve first so we can clone it and start to help you code together.

I woke up refreshed! Crazy stuff, I was up for over 48 hours and didn't feel tired at all, lol.

Hmmm, good idea. We should abstract this stuff. There will be standards for certain things within the community, so it would be great to know if there are any projects doing this currently. I know of certain ethics standards.

LangChain has so much utility code, but most of it is (in my humble opinion) completely unreadable and even inconsistent. Not the quality a framework wants to be at, but I see the churn, and people are hard at work. Think concurrent requests, or implementations of the abstractions they defined (the introduction of concepts such as Tools, Agents, and Chains). These are great concepts to think around. Before we abstract things, I would like to have more insight into what people end up needing for certain tasks.

We can be sure of a few things:

To give a concrete answer: I would personally (and I am) be making code that for now relies on LangChain. It's an active community where we can be certain that the interests of developers who experiment with AI stuff will converge. That's the best way to start narrowing down the patterns we can see emerging from this new branch of engineering.

radman-rt commented 1 year ago

@Ryangr0 -- Totally agree with everything you said. I think there's real value in trying to keep both the BabyAGI and LC communities involved.

I did a clone of @MalikMAlna's forked langchain-full-implementation branch and see quite a few differences between this and the toddleragi in your PuttyGPT repo. Some are minor (adding langchain to requirements.txt) but some are larger. Seems like you're using Weaviate in your toddleragi whereas the fork is using Pinecone. As an aside, LC already has built-in support for both of these:

* https://python.langchain.com/en/latest/ecosystem/pinecone.html
* https://python.langchain.com/en/latest/ecosystem/weaviate.html

But we can refactor to use those later.

Which of these two vectorstores are in your latest thinking? Weaviate? If so, would it be possible to update @MalikMAlna 's fork with your latest code?

Also, I don't see anything in @MalikMAlna's fork that allows setting the OPENAI_API_KEY in env vars or elsewhere (maybe I'm just missing this?).

@MalikMAlna -- Any chance you could set @Ryangr0 and myself up as contributors to your fork? And also enable Issues in the Github repo settings so we can start putting some of our discussions in there?

radman-rt commented 1 year ago

If I can get the @MalikMAlna fork up and running (at least minimally), I will attempt a quick abstraction of BabyAGI and toddleragi in the next few days.

Ryangr0 commented 1 year ago

Mine is Weaviate; my project has a docker-compose file that runs the following services when I bring it up:

```yaml
version: '3.4'
services:
  weaviate:
    container_name: weaviate
    command:
    - --host
    - 0.0.0.0
    - --port
    - '9001'
    - --scheme
    - http
    image: semitechnologies/weaviate:1.18.3
    ports:
    - 9001:8080
    restart: on-failure:0
    environment:
      TRANSFORMERS_INFERENCE_API: 'http://t2v-transformers:8080'
      QNA_INFERENCE_API: 'http://qna-transformers:8080'
      IMAGE_INFERENCE_API: 'http://i2v-neural:8080'
      NER_INFERENCE_API: 'http://ner-transformers:8080'
      SUM_INFERENCE_API: 'http://sum-transformers:8080'
      SPELLCHECK_INFERENCE_API: 'http://text-spellcheck:8080'
      OPENAI_APIKEY:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-transformers'
      ENABLE_MODULES: 'text2vec-transformers,qna-transformers,ner-transformers,sum-transformers,text-spellcheck,img2vec-neural,ref2vec-centroid,generative-openai'
      CLUSTER_HOSTNAME: 'node1'
    volumes:
      - /var/weaviate:/var/lib/weaviate
    networks:
      - weaviate-network
      - putty-network
  t2v-transformers:
    container_name: t2v-transformers
    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: '0'
  qna-transformers:
    container_name: qna-transformers
    image: semitechnologies/qna-transformers:distilbert-base-uncased-distilled-squad
    environment:
      ENABLE_CUDA: '0'
  ner-transformers:
    container_name: ner-transformers
    image: semitechnologies/ner-transformers:dbmdz-bert-large-cased-finetuned-conll03-english
    environment:
      ENABLE_CUDA: '0'
  sum-transformers:
    container_name: sum-transformers
    image: semitechnologies/sum-transformers:facebook-bart-large-cnn-1.0.0
    environment:
      ENABLE_CUDA: '0'
  text-spellcheck:
    container_name: text-transformers
    image: semitechnologies/text-spellcheck-model:pyspellchecker-en
  i2v-neural:
    container_name: i2v-transformers
    image: semitechnologies/img2vec-pytorch:resnet50
    environment:
      ENABLE_CUDA: '0'

networks:
  default:
    name: weaviate-network
  putty-network:
    external: true
```

This means we're not relying on a (potentially paid) 3rd-party service to store our data, one that could go down or do other things that are outside our control. That's also why I use SearXNG for running search engine queries; on top of that, it provides enhanced privacy. AND you don't have to pay for Google. Amazing.

moover-shaker commented 1 year ago

Trying to postpone installing Docker just yet - it’s a bit of a pain on Windows (actually, Docker itself on Win is OK, but I can’t seem to upgrade from WSL 1 to WSL 2). But I know I’ll need it soon, so I’ll start trying harder.

MalikMAlna commented 1 year ago

> So… are you proposing a whole new open-source project “inspired by” BabyAGI but not using any of its code, based fully on LC? That is certainly one alternative. I would like to see what @MalikMAlna and others think about that.

I don't think we need to go that far to add in LC, but that's just me.

> Another alternative is to break out an AiEngine layer with an AiEngineFactory (both abstract classes) and have two implementations of AiEngine:
>
> 1. AiLangChainEngine - based on a pure LC implementation (à la your Eve (great name, BTW))
> 2. AiNativePythonEngine - based on the (slightly refactored) current BabyAGI code, all in Python with no heavyweight 3rd-party libraries.
>
> AiEngineFactory (which may actually not be abstract, not sure) would have a createAiEngine(aiEngineName) method that would create one or the other concrete AI engine class. Not sure exactly where the abstraction would be, but one option is to have something like an AiEngine.runLoop() method (though this is TBD).
>
> This has two advantages:
>
> 1. Keeps this in the existing BabyAGI community along with its traction and developers.
> 2. Allows switching between a lighter engine with less functionality and a heavier engine with more functionality.
>
> It does have (at least) the disadvantage of an additional abstraction layer, which is not always justified.

It doesn't sound like a bad option either. I'd prefer abstraction over a complete rewrite.

> As for your sprint goals, they are highly commendable and I really think we can get there soon. But it’s a bit ambitious for the first step. It would be great to see the ABSOLUTE MINIMUM running version of Eve first so we can clone it and start to help you code together.

Yeah, exactly. I'd ideally like something small, but achievable/successful before anything major. Hence the approach I took with @Ryangr0's toddleragi implementation.

> @Ryangr0 -- Totally agree with everything you said. I think there's real value in trying to keep both the BabyAGI and LC communities involved.
>
> I did a clone of @MalikMAlna's forked langchain-full-implementation branch and see quite a few differences between this and the toddleragi in your PuttyGPT repo. Some are minor (adding langchain to requirements.txt) but some are larger. Seems like you're using Weaviate in your toddleragi whereas the fork is using Pinecone. As an aside, LC already has built-in support for both of these:
>
> * https://python.langchain.com/en/latest/ecosystem/pinecone.html
> * https://python.langchain.com/en/latest/ecosystem/weaviate.html
>
> But we can refactor to use those later.

Oh, that's neat! I chose Pinecone for simplicity: we're already using it for babyagi, I'm more familiar with it than Weaviate, and I mostly just wanted to get something working locally for what we want to work towards in the long term. That being said, I'd be happy to have a setup that's more vector-DB agnostic. Ideally, one could just choose their preferred vector DB and go from there, but that seemed like a more long-term objective given my lack of experience with LangChain. If you feel like it's a simple refactor, feel free to add it to the code.
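As a sketch of what a vector-DB-agnostic setup could look like (interface and store names are hypothetical; the in-memory store is only for illustration, with Pinecone/Weaviate adapters implementing the same interface in practice):

```python
# babyagi code would talk only to VectorStore; swapping Pinecone for
# Weaviate would then mean swapping the concrete class, nothing else.
import math
from abc import ABC, abstractmethod
from typing import Dict, List, Tuple

class VectorStore(ABC):
    @abstractmethod
    def upsert(self, item_id: str, vector: List[float], metadata: Dict) -> None: ...
    @abstractmethod
    def query(self, vector: List[float], top_k: int) -> List[Dict]: ...

class InMemoryStore(VectorStore):
    def __init__(self) -> None:
        self._items: Dict[str, Tuple[List[float], Dict]] = {}

    def upsert(self, item_id, vector, metadata):
        self._items[item_id] = (vector, metadata)

    def query(self, vector, top_k):
        # Rank stored items by cosine similarity to the query vector
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(
            self._items.values(),
            key=lambda item: cosine(vector, item[0]),
            reverse=True,
        )
        return [meta for _, meta in ranked[:top_k]]

store: VectorStore = InMemoryStore()
store.upsert("t1", [1.0, 0.0], {"task": "research"})
store.upsert("t2", [0.0, 1.0], {"task": "write"})
print(store.query([0.9, 0.1], top_k=1))  # [{'task': 'research'}]
```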

> Also, I don't see anything in @MalikMAlna's fork that allows setting the OPENAI_API_KEY in env vars or elsewhere (maybe I'm just missing this?).

Yeah, I changed the .env.example to .env so I could keep my credentials local. I can add that back with the current .env var options later today, but it should basically be the same as the .env.example file that comes with the code already. I'm pretty sure that I didn't add any new .env vars that weren't already in .env.example.
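For reference, a minimal sketch of the usual pattern for reading the key from the environment (the setdefault line here stands in for python-dotenv's load_dotenv(), which reads .env into os.environ):

```python
import os

# Stand-in for load_dotenv() after copying .env.example to .env;
# the key value below is obviously not a real credential.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")

api_key = os.environ.get("OPENAI_API_KEY", "")
print("key loaded:", bool(api_key))
```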

> @MalikMAlna -- Any chance you could set @Ryangr0 and myself up as contributors to your fork? And also enable Issues in the Github repo settings so we can start putting some of our discussions in there?

Absolutely! Gonna add you @radman-rt, @Ryangr0, and @moover-shaker as contributors to the fork ASAP! Feel free to mess around with it, but I would ask that you set up PRs for any final changes to the branch that we plan to put into the babyagi codebase.

> Trying to postpone installing Docker just yet - it’s a bit of a pain on Windows (actually, Docker itself on Win is OK, but I can’t seem to upgrade from WSL 1 to WSL 2). But I know I’ll need it soon, so I’ll start trying harder.

Actually, you don't need it. I've been using a venv that I set up earlier this whole time. That should still work for dev purposes, but do let us know if you install any additional modules so that appropriate changes can be made to the Docker files.

MalikMAlna commented 1 year ago

Alright, I've sent invites to all three of you. I've also enabled Issues and Discussions (because I think it's important to distinguish between the two; this whole thread is definitely now a discussion ;) ). But yeah, feel free to clean things up and get things working where you can. All I ask is that you submit a PR into the original langchain-full-implementation branch for anything you want going into the final PR.

moover-shaker commented 1 year ago

Can we just do a commit into langchain-full-implementation branch or would you prefer a full PR for each commit?

MalikMAlna commented 1 year ago

Great question @moover-shaker! I would ask that we stick to full PRs for now; I want to get an idea of what's changing and where we want to go with things. After that, small changes can go in directly, while larger, multi-file changes will still require a PR, and from there we can decide where to go next.

jimwhite commented 1 year ago

I've made a copy of LangChain's version of BabyAGI with Tools that has a few tweaks so that it will just run in Google Colab without any changes necessary: https://github.com/jimwhite/babyagi-langchain/blob/main/baby_agi_with_agent.ipynb

MalikMAlna commented 1 year ago

> I've made a copy of LangChain's version of BabyAGI with Tools that has a few tweaks so that it will just run in Google Colab without any changes necessary: https://github.com/jimwhite/babyagi-langchain/blob/main/baby_agi_with_agent.ipynb

I think you’re kind of missing the point of what we’re trying to do, @jimwhite. We’re trying to integrate LC into the implementation of the code itself, as part of this codebase, without using Google Colab.

Also, we don’t actually have permission from the creator of that LangChain code to bring their implementation of babyagi into this codebase. So I think we took that as a given reason for not taking that approach.

jimwhite commented 1 year ago

The LangChain license is MIT, just like babyagi, so permission has already been explicitly granted (although IANAL). And no, I'm not missing the point. I'm just sharing here because I think the lightweight approach of LangChain's code and the ability to run in Colab support rapid iteration for the design work mentioned in some of the comments above. While there is nothing Colab-specific in the Jupyter notebook (apart from a convenient "one click" launch button), the ability to start working with BabyAGI without spending any time fiddling with environment setup should be of interest and value to folks with little CLI experience attracted to this project. Guess I could have elaborated a bit more in my first comment, but I kind of assumed the utility of the code would be self-evident.

MalikMAlna commented 1 year ago

No worries, it happens @jimwhite. In either case, the whole topic might be moot with the inclusion of the recent task orchestration merge into LC. It looks like their implementation of babyagi has been put in from the wiki.

moover-shaker commented 1 year ago

So…with the above BabyAGI and task orchestration integration onto LC (things change so fast in this space🙃) I’m wondering whether we should even continue working on this fork/PR? What do you guys think, esp. @Ryangr0 and @MalikMAlna ?

Ryangr0 commented 1 year ago

I'm going to spend most of my time implementing new ideas and newly merged PRs into Putty for the next few days at least, refactoring along the way, maybe putting in a few PRs here and there. I don't think there's much point in doing this right now. Like I said, the idea is great, but it's not difficult to implement by any measure. It belongs in LangChain in my opinion. This repo could focus more on the core functionality, which is the prompting.

MalikMAlna commented 1 year ago

Yeah, I largely agree with @Ryangr0.

With the implementation added as an official part of LangChain for task orchestration, the project here is ultimately moot, because LangChain now contains the functionality we thought should go in here; it also makes more sense that they finally have it in there. So we can focus more on the core functionality while they use it to further build out LC.

So, that being the case, I’m just gonna close out this issue and the PR. You all can still work on the branch if you so choose, but I’ll probably allocate more of my time directly to babyagi, outside of an LC-specific implementation.

moover-shaker commented 1 year ago

Ok, makes sense. GL to you guys.