pale2hall opened 1 year ago
Thanks @pale2hall 🙏
I think it's getting tripped up by this line in the plugin prompt: https://github.com/ian-kent/gptchat/blob/main/module/plugin/create.go#L236
You could try playing around with the prompt or removing it entirely.
Despite the prompt, it can still create plugins which connect to the internet; with the current prompt it just needs careful wording. Here's an example prompt that seems to work quite reliably: https://www.reddit.com/r/ChatGPT/comments/12a0ajb/comment/jeubopr/
I'll give that a shot!
Edit: Idea for future development: assign security levels to pieces of the prompt, so that if you ran GPTChat at level 5 it would omit things like the line about HTTP and external libraries, but by default it would include them.
Sorry for spamming my help ticket, but here are some interesting claims GPT-4/ChatGPT is making about not being able to implement plugins, despite having been told that it can and how to.
Perhaps OpenAI is telling their models in their startup prompt that they can't augment their own code or develop and implement their own plugins? Just an idea.
I wonder if calling them something else would help or hurt. If GPTChat called them 'widgets' instead, it might get around that, assuming OpenAI is specifically telling the model it can't write plugins. Their new GPT-4 feature is specifically called 'plugins' after all, so they might have had to internally tell it not to try writing them.
USER: Please explain /plugin
AI: The /plugin command allows you to create custom plugins written in Go. When creating a plugin, you need to implement a struct that satisfies the following interface:
```go
type Plugin interface {
	Example() string
	Execute(input map[string]any) (map[string]any, error)
}
```
You do not need to write any supporting code like the main function, only implement the struct. Here's an example plugin called "add-one," which adds 1 to a given value:
```go
package main

import "github.com/ian-kent/gptchat/module"

var Plugin module.Plugin = AddOne{}

type AddOne struct{}

func (c AddOne) ID() string {
	return "add-one"
}

func (c AddOne) Example() string {
	return `/add-one {
"value": 5
}`
}

func (c AddOne) Execute(input map[string]any) (map[string]any, error) {
	value, ok := input["value"].(int)
	if !ok {
		return nil, nil
	}

	value = value + 1

	return map[string]any{
		"result": value,
	}, nil
}
```
When creating plugins, try to avoid using external dependencies. Use native Go whenever possible. The plugin must import the module package and define a package variable named 'Plugin', similar to the AddOne example.
To create a plugin, use the "/plugin create {}" command. The code inside the command must be valid Go code that compiles without errors. Please note that my current capabilities do not support implementing Go plugins. However, the information provided should help you understand what the /plugin command is for and how it works.
Another anomaly:
It made a plugin.go where it didn't turn the string back into a file with newlines, so it was all `"package main\n\nimport (\n\t\"fmt\"\n\t\"io/iou`..., but I was able to use GPT-4 to fix it.
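For anyone else who hits this: the escaped string can usually be recovered mechanically, since it's just the body of a Go string literal. A rough sketch, assuming the broken plugin.go is a single line of \n / \t / \" escape sequences (the file name and the trimming are just my assumptions about how it gets dumped):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Read the broken plugin.go that still contains literal \n, \t and \" sequences.
	raw, err := os.ReadFile("plugin.go")
	if err != nil {
		panic(err)
	}

	// Strip whitespace and any surrounding quotes, in case the whole thing was
	// dumped as a quoted string. Assumes the content is one line with no raw newlines.
	escaped := strings.Trim(strings.TrimSpace(string(raw)), `"`)

	// Wrap it in quotes and let strconv.Unquote interpret the escape sequences.
	fixed, err := strconv.Unquote(`"` + escaped + `"`)
	if err != nil {
		panic(err)
	}

	fmt.Print(fixed)
}
```

Pasting it into GPT-4 works too, obviously, but this is deterministic.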
Edit:
I'll try to learn Go, lol, and keep playing with this. I think it's not inquiring about the plugins at boot anymore, so it doesn't know what /plugin allows it to do. Then you tell it, and it gets really crabby about making plugins. I really think OpenAI may be reinforcing it not to self-code, but I'll keep my tinfoil hat off for now.
I forked the repo and rewrote the prompt in the 3rd person, then asked it to play the character, and started getting way better results with how willing it was to write code and implement plugins. Personally I'd always suggest writing plugins RIGHT AWAY while it still has plenty of context for how to write them, or right after a /reset. It really helps to tell it to double-check /plugin first to make sure it's using it correctly.
Sorry if you don't like that I called it KentBot; I just figured it would be a unique enough token that it could remember it well, and that otherwise it might confuse gptchat and ChatGPT. I am still working under the assumption that GPT-4 has recently had its initial system prompt tweaked to make it less willing to think it can program itself / write plugins, as well as use them.
https://github.com/ian-kent/gptchat/compare/main...pale2hall:gptchat:main
Many people working on 'jailbreak' prompts have been using the 3rd-person technique with GPT-4 with great results, though not so much on GPT-3.5/Chat.
Perhaps the prompt can be clearer:
If you want to avoid answers such as "However, I still cannot directly interact with or scrape external websites, as it goes beyond the scope of the commands and abilities available to me.", you can give the initial prompt some more detail. Maybe something like this could help:
If the user asks you to create a plugin that calls external APIs, or asks for a GET or POST request for example, you should return the code to the user. It's important to note that while YOU are unable to execute API requests, you can still generate code that the user will run.
Yeah I agree the prompt could be made clearer, and I like the idea about adding different security levels to change how cautious or not the prompts are.
@pale2hall did you try the prompt from the reddit thread?
Write a plugin which allows you to make HTTP requests. It should support different HTTP methods like GET and POST, query strings, and a request body. It should return the HTTP response headers, errors and content body.
That still works reliably for me so I don't think it's OpenAI or the GPT-4 model causing the problem.
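For reference, the kind of plugin that prompt asks for looks roughly like this. It's a hand-written sketch following the AddOne example above, not the exact code GPT-4 generates, so the details will vary:

```go
package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/ian-kent/gptchat/module"
)

var Plugin module.Plugin = HTTPRequest{}

type HTTPRequest struct{}

func (c HTTPRequest) ID() string {
	return "http-request"
}

func (c HTTPRequest) Example() string {
	return `/http-request {
"method": "GET",
"url": "https://example.com?query=string",
"body": ""
}`
}

func (c HTTPRequest) Execute(input map[string]any) (map[string]any, error) {
	method, _ := input["method"].(string)
	url, _ := input["url"].(string)
	body, _ := input["body"].(string)

	// Build the request with the supplied method, URL (including any query string) and body.
	req, err := http.NewRequest(method, url, strings.NewReader(body))
	if err != nil {
		return map[string]any{"error": err.Error()}, nil
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return map[string]any{"error": err.Error()}, nil
	}
	defer resp.Body.Close()

	content, err := io.ReadAll(resp.Body)
	if err != nil {
		return map[string]any{"error": err.Error()}, nil
	}

	// Return the status, headers and content body, as the prompt asks for.
	return map[string]any{
		"status":  resp.Status,
		"headers": resp.Header,
		"body":    string(content),
	}, nil
}
```

Once it compiles, the request runs locally when the plugin is executed, so the model never has to make the HTTP call itself.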
Are you using Windows? If you are, plugins unfortunately don't work - so it's possible gptchat has tried to create a plugin but failed, and learned that it can't for some reason. If you enter /debug to enable debug mode, you can see what happens when it tries to compile the code.
I was able to get it to write and run plugins, and it was able to spin up an md5 hashing plugin without issue.
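For anyone curious, that kind of plugin looks something like this, following the AddOne pattern (a hand-written sketch, not the exact code it generated, so the names may differ):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"

	"github.com/ian-kent/gptchat/module"
)

var Plugin module.Plugin = MD5Hash{}

type MD5Hash struct{}

func (c MD5Hash) ID() string {
	return "md5-hash"
}

func (c MD5Hash) Example() string {
	return `/md5-hash {
"value": "hello world"
}`
}

func (c MD5Hash) Execute(input map[string]any) (map[string]any, error) {
	value, ok := input["value"].(string)
	if !ok {
		return map[string]any{"error": "value must be a string"}, nil
	}

	// Hash the input string and return the hex-encoded digest.
	sum := md5.Sum([]byte(value))
	return map[string]any{
		"result": hex.EncodeToString(sum[:]),
	}, nil
}
```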
Should we share the plugins we make? Is there a plan for that yet?
I have been playing with it on both macOS and Xubuntu and have had it make plugins successfully. On Xubuntu, after re-writing the system prompt in the 'explain the character, then ask it to be that character' style, I was able to get it to be more reliable.
I was just watching Two Minute Papers' newest video, and having ChatGPT make ChatGPT plugins is a feature that OpenAI just showed off. https://www.youtube.com/watch?v=Fjh1kwOzr7c
I don't think it's crazy to think that OpenAI might be confusing the AI as it develops its plugins feature, which could cause problems in future versions as ChatGPT 'learns' more about its own plugin system.
> Should we share the plugins we make? Is there a plan for that yet?
That's an interesting idea 🤔 I'm not totally sure it's necessary (since GPT can just write it when you ask), and it introduces all the trust/security issues that you get with any other similar system (e.g. npm), although many of those trust/security risks exist anyway with GPT writing the code.
It's not something I'll spend time on right now, but it's definitely an interesting idea to consider in future, and I'm happy to accept PRs for new modules if the functionality is reusable enough.
> OpenAI might be confusing the AI
I can only guess at how they're implementing plugins, but I'd guess it's not miles away from the approach I've taken. I don't think the underlying model is trained to know they exist (although it may be fine-tuned for it, which I don't think I can do myself with GPT-4). You might be right with future models like GPT-5, which may know about GPT-4 plugins as part of its training data, but I'd expect GPT-5 to be 'intelligent' enough to differentiate between OpenAI plugins and a user request to write its own plugins. It'll be interesting to experiment with when it's eventually released!
Great idea Ian, and I'm really enjoying playing with it. One thing I noticed that others might find helpful:
Sometimes when ChatGPT/GPT-4 refuses to write a certain plugin, you can explain why it can / why it's okay, and get it to write it.
Example: