zudsniper opened this issue 1 year ago (Open)
Hey! Thank you for being so thorough with the ticket & list of features :) Addressing a few of these for now -
📦 Adding `autogpt` Branch/Version Specification at Runtime
Kurtosis runs Docker images that are published to DockerHub (or that are available locally in your Docker). I have added support for changing the version of AutoGPT we spin up using the `AUTOGPT_IMAGE` flag. Last code snippet in this section.
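As a rough sketch of how that might look (assuming the image is passed through the args JSON like the other options; the repository path matches the examples below, but the image tag shown here is purely illustrative):

```shell
kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt \
  '{"AUTOGPT_IMAGE": "significantgravitas/auto-gpt:v0.3.1"}'
```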
📖 Creating a Wiki / Nested Documentation
Would love for you to start doing this! You should be able to do so now!
🔨 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc
This I am trying to understand a bit more! Say you have a file called `args.json` in your current working directory and say it looks like
```json
{"GPT_4ALL": true, "MODEL_URL": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"}
```

You should be able to pass it to Kurtosis using

```shell
kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt "$(cat args.json)"
```
Does that work for you?
🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins

I can think of some ways of supporting this. I can definitely add support for repositories that aren't in the `main.star` but are still public and available on GitHub.
⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
This we are working on as a team as an addition to core Kurtosis where users should be able to have persistent environments that they can stop and restart. This might take a while!
> 📦 Adding `autogpt` Branch/Version Specification at Runtime
>
> Kurtosis runs Docker images that are published to DockerHub (or that are available locally in your Docker). I have added support for changing the version of AutoGPT we spin up using the `AUTOGPT_IMAGE` flag. Last code snippet in this section.
Sweet, and thank you for pointing me to this.
> 📖 Creating a Wiki / Nested Documentation
>
> Would love for you to start doing this! You should be able to do so now!
Sure, though it might be sporadic due to my obligations at work and at home. I do write markdown really, really hard though... whatever that means.
> 🔨 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc
>
> This I am trying to understand a bit more! Say you have a file called `args.json` in your current working directory and say it looks like
>
> [etc]
Yes... I could definitely just do this... don't mind me, I was tired or something oops!
> 🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
>
> I can think of some ways of supporting this. I can definitely add support for repositories that aren't in the `main.star` but are still public and available on GitHub.
Adding support for public repositories is definitely the right direction to go in, but ideally I think being able to develop proprietary plugins, or just develop offline, would make this project usable as the only development tool necessary for `autogpt`. Whether you guys think that's the right thing to do or not is up to you! haha
> ⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
>
> This we are working on as a team as an addition to core Kurtosis where users should be able to have persistent environments that they can stop and restart. This might take a while!
That's ok! It's cool to know I stumbled upon a feature under active development. It's certainly a difficult problem, especially when addressed at the scale of the entire `kurtosis` project. I hope to see it developed one day -- I'll be waiting C:
Thanks for the thorough response to my thorough suggestions. I will be sharing this project as much as possible with other `autogpt` enthusiasts, especially with the `GPT4ALL` addition -- I would say that is a game changer.
Hey @zudsniper , catching up on this thread! As I understand, the two potential outstanding pieces of work are:
1. Supporting Local/Unpublished/Private Auto-GPT Plugins
2. "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
For no. 2, we're tracking something sort of similar here: kurtosis-tech/kurtosis#705 (though I think we'll also need the persistent data that I mentioned in #84 ).
For no. 1, would you mind opening a new ticket for just that piece of work, and what you had in mind, so we can track it individually? That way we can close this issue and track the individual follow-ups.
Sure, I will make an issue just for it @mieubrisse
Hello `kurtosis-tech` team! I have various thoughts about this project, all of which are simply ways to improve -- at least as I see it -- the usability, execution speed, functional application space, or simply the quality of life. This is to say, I think the current state of this project is already great.
Here is my vision for how this package could be used which would, for me at least, supercharge usage of this project.
💬 Suggestions
These are not provided in any particular order, and it may well be that some should definitely be implemented before others. I leave that up to the more knowledgeable.
- ~~Implement `gpt4all` support~~ thanks!
- 📦 Adding `autogpt` Branch/Version Specification at Runtime
- 🔨 Constructing a "build-config" System for Configuration / Local Caching / Pausable Instances / etc (kinda dumb lol)
🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins
I think it would be a very useful feature to allow users who are actively developing, working with a proprietary codebase, or for any other reason are deploying `autogpt` with currently unpublished forks of `Auto-GPT-Plugin-Template` to attach their local code to an instance of `autogpt-package`.

As for the implementation of this, I am less clear on how it would work, as I don't fully understand `kurtosis` under the hood. However, I do think it is possible. I also always think that... but I digress.

📦 Adding `autogpt` Branch/Version Specification at Runtime

A method with which we could specify the version of `autogpt` we want to use would be very helpful, especially given the drastic changes version to version -- such as the scrapping of the `MEMORY_BACKEND` system in Auto-GPT `v0.4.0`.

Currently, I would like to be able to use `autogpt` version `0.3.1`, which can be located under the branch `stable-0.3.1` in the `Auto-GPT` repository.

📖 Creating a Wiki / Nested Documentation
While you guys have definitely documented things well enough to use your project through just `README.md`, I think it's time you had a full-on wiki. I think the easiest, most user-friendly way to do this would be to utilize GitHub repository Wikis, but there are various possible ways. I would be happy to help create such documentation, if you would like.

⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution
Finding a way to save the state of your `autogpt-package` instance, write it to a file or files, and then return to it at a later date would make utilizing this tool much nicer in terms of quality of life (in my opinion, of course).

This could also potentially be a benefit when it comes to fault tolerance / error recovery, which would be a great thing to support -- both implemented by you guys, but also opening up a way for us to write our own conditions for "failure" and hook in our own subsequent handling systems.
🔨 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc

This one is by far the biggest ask. (save the best for last, or something)

I am personally tired of having to provide all my arguments to the project in the form of a single CLI string. Even setting the escaping of special characters aside, this constraint adds what is, in my opinion, unnecessary overhead as a configuration grows.
As an alternative, I propose a `build` system not unlike that of `npm` or `setuptools` for Node.js and Python respectively, involving a `json`[^2] file, maybe by convention called `agpkg_config.json`[^3], which would facilitate multiple executions of the same instance -- especially if pausing is supported. (somehow)

Here's my suggestion, in terms of file examples.

`agpkg_config.jsonc`[^4]
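For illustration, a minimal sketch of what such a file might contain -- every key name here is hypothetical, just to convey the shape:

```jsonc
{
  // point at the standard autogpt config files, used off-the-shelf
  "env_file": ".env",
  "ai_settings": "ai_settings.yaml",

  // which image/version of autogpt to run (hypothetical key)
  "autogpt_image": "significantgravitas/auto-gpt:v0.3.1",

  // package-level extras, e.g. gpt4all
  "options": {
    "GPT_4ALL": true,
    "MODEL_URL": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"
  }
}
```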
With this concept, a `.env` file in exactly the same format as that used in standard `autogpt` instances could be utilized off-the-shelf by a new user intending to try out `autogpt-package`. Same idea with `ai_settings.yaml`.

Along with that, all the specification for other things -- such as `gpt4all` configuration, or networking, or any other custom stuff you decide to put on top of `autogpt` in this package to increase its usefulness -- could be specified from a central file, and we wouldn't need to parse and merge `.env` files into `json` data and then escape it. It just seems like this would be a more scalable approach in my eyes, but I could be wrong.

I also understand this would NOT be as simple as I have put it out to be -- it would be a somewhat major undertaking for the codebase -- but I wanted to put my suggestion out here. Feel free to use it as you see fit; I will not be offended. C:
📃 Summary

Here is the current `README.md`'s method of initiating `autogpt-package` with a `redis` memory backend.[^1]

Here is how I would have things work, in a perfect world for me -- which is obviously not everyone's perfect world.
Alright, this is insanely long, and I apologize in advance! Hopefully something in here will inspire somebody. Thank you to everyone at `kurtosis-tech` again for making this package. It was the first way that I got Auto-GPT to work at all when I was first playing with it, and it was as seamless as it purported to be.

But then I wanted more... hah.

Cheers,
Jason, or `zod`
[^1]: I realize that the `README.md` document is, while replete with information about the actual use of the repository, a bit scattered. I think you guys should add either just a `docs/` directory with various `.md` files, or a full-on GitHub Pages Wiki -- but I will make a separate issue for that

[^2]: Or `yaml`, or even just `xml` if you don't like us very much. Format doesn't really matter here

[^3]: This should be able to be overwritten by a CLI flag, perhaps `--build-configuration <filename.json>`

[^4]: This is just a simple extension of `json` which supports multi-line, single-line, and inline comments using `js` syntax. There is a parser from `Microsoft` written for Node.js. This is not required by any means; simple `json` could be used, & developers can precompile our `jsonc` or `jsonl` into appropriate `json`-conforming structures on our own time.
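That precompile step is easy enough to script on our own; here is a naive sketch of a comment stripper (not a full JSONC parser -- it handles `//` and `/* */` comments and leaves string contents such as URLs alone, but ignores trailing commas and `jsonl`):

```python
import json

def strip_jsonc(text: str) -> str:
    """Remove // line comments and /* block */ comments from JSONC,
    leaving string contents (which may themselves contain //) untouched."""
    out = []
    i, n = 0, len(text)
    in_string = False
    while i < n:
        ch = text[i]
        if in_string:
            out.append(ch)
            if ch == "\\" and i + 1 < n:   # keep escaped char, e.g. \"
                out.append(text[i + 1])
                i += 1
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
            out.append(ch)
        elif ch == "/" and i + 1 < n and text[i + 1] == "/":
            while i < n and text[i] != "\n":  # skip to end of line
                i += 1
            continue
        elif ch == "/" and i + 1 < n and text[i + 1] == "*":
            i += 2
            while i + 1 < n and not (text[i] == "*" and text[i + 1] == "/"):
                i += 1
            i += 2  # step past the closing */
            continue
        else:
            out.append(ch)
        i += 1
    return "".join(out)

doc = '{\n  // model source\n  "MODEL_URL": "https://gpt4all.io/models/x.bin" /* note */\n}'
print(json.loads(strip_jsonc(doc)))
```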