I faced the same issue. Here's my solution:
if len(response.Choices) > 0 {
	slog.Debug("stream", "content", response.Choices[0].Delta.Content)
}
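(For context: streaming chunks can apparently arrive with an empty Choices slice, which is what makes the unguarded response.Choices[0] indexing panic; skipping those chunks avoids it. Note that len of a nil slice is 0 in Go, so a separate nil check isn't needed.)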
@michaellzc thanks for reporting this issue!
Unfortunately, I've been unable to reproduce the problem with the steps you've listed on my machine. I suspect this is due to either a difference in our environments (e.g., OS/arch, GPTScript-related environment variables, OpenAI API settings, etc.) or a transient issue with the OpenAI API that I'm just not hitting at the moment.
To narrow things down, could you post the output from running the following command on your system:
uname -mo
as well as whether you've set any of the environment variables listed in the output of gptscript -h?
Putting reproduction aside for a moment; @darkthread I think adding some conditionals to prevent this panic is a great idea! If you open a PR with your fix I'd be happy to review it.
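For anyone hitting this before a fix lands, here's a minimal, self-contained sketch of the failure mode and the guard (the structs below are illustrative stand-ins, not the actual gptscript or OpenAI client types):

package main

import "fmt"

// Illustrative stand-ins for the streaming response types; these are
// NOT the real gptscript/openai structs.
type Delta struct{ Content string }
type Choice struct{ Delta Delta }
type Chunk struct{ Choices []Choice }

func main() {
	stream := []Chunk{
		{Choices: []Choice{{Delta: Delta{Content: "Hello World!"}}}},
		{}, // a chunk with an empty Choices slice; Choices[0] would panic here
	}
	for _, chunk := range stream {
		if len(chunk.Choices) == 0 {
			continue // the guard: skip chunks that carry no choices
		}
		fmt.Print(chunk.Choices[0].Delta.Content)
	}
	fmt.Println()
}

Running this prints Hello World! and exits cleanly; removing the len check reintroduces the same "index out of range [0] with length 0" panic on the second chunk.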
@michaellzc Just wanted to note that anything passed to the gptscript command after the PROGRAM_FILE argument is interpreted as script input. This means that gptscript options must be passed before the PROGRAM_FILE argument.
$ gptscript examples/echo.gpt --cache false --input 'Hello World'
16:59:33 started [main] [input=--cache false --input Hello World]
...
Notice that the logged input=--cache false --input Hello World includes all of the options you were trying to pass to gptscript as part of the script's input.
Additionally, the --input argument is for optionally specifying an input file, e.g. --input=<path-to-input-file>.
Finally, options that take arguments (like --cache and --input) require a = between the option name and the argument value (which is most likely not the desired behavior, but it's the current state nonetheless).
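For example (illustrative; the exact error for the first form may vary):
$ gptscript --cache false examples/echo.gpt 'Hello World!'   # 'false' is likely read as a positional argument, not the option value
$ gptscript --cache=false examples/echo.gpt 'Hello World!'   # value attached with =, parsed as intended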
Putting it all together, here's the "corrected" command invocation:
$ gptscript --cache=false examples/echo.gpt 'Hello World!'
All that being said, gptscript's UX is still rough, and any suggestions are definitely welcome.
@njhale
To narrow things down, could you post the output from running the following command on your system:
$ uname -mo
Darwin arm64
More on the OS environment:
OS: macOS 14.3.1
CPU: Apple M1 Max
as well as whether you've set any of the environment variables listed in the output of gptscript -h
Yes. Only OPENAI_API_KEY is configured.
OpenAI API settings
I did not consider this in the beginning. I was using an off-the-shelf OpenAI API key and the default OpenAI API endpoint (i.e., no proxy and none of the Azure stuff). Additionally, I took a glance at the admin panel on https://platform.openai.com and didn't find anything that could make OpenAI produce special responses for me.
Putting it all together, here's the "corrected" command invocation:
Ahh, I made a lot of assumptions about how the CLI args should work 😅. But it seems that with the "correct" CLI flags, I can still reproduce the problem consistently. I believe this is not a transient issue, at least not for me.
$ gptscript -v
gptscript version v0.1.4-701b9c8e-dirty
$ gptscript --cache=false examples/echo.gpt 'Hello World!'
22:35:02 started [main] [input=Hello World!]
22:35:02 sent [main]
content [1] content | Waiting for model response... content [1] content | Hello World!panic: runtime error: index out of range [0] with length 0
goroutine 1 [running]:
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).call(_, {_, _}, {{0x105270100, 0x13}, {0x140002ea1e0, 0x3, 0x4}, 0x0, 0x322bcc77, ...}, ...)
github.com/gptscript-ai/gptscript/pkg/openai/client.go:425 +0x7cc
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).Call(0x140000c6040, {0x1053127a0, 0x14000100410}, {{0x105270100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x1400007e2a0, ...}, ...}, ...)
github.com/gptscript-ai/gptscript/pkg/openai/client.go:295 +0x4ec
github.com/gptscript-ai/gptscript/pkg/llm.(*Registry).Call(0x104a6c024?, {0x1053127a0, 0x14000100410}, {{0x105270100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x1400007e2a0, ...}, ...}, ...)
github.com/gptscript-ai/gptscript/pkg/llm/registry.go:53 +0x94
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).complete(0x1400029e930, {0x1053127a0, 0x14000100410}, 0x140002d23f0)
github.com/gptscript-ai/gptscript/pkg/engine/engine.go:218 +0x1f8
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).Start(_, {{0x1055d4088, 0x1}, {0x1053127a0, 0x14000100410}, 0x0, 0x1400029e7b0, {{{0x0, 0x0}, {0x140000ba0cd, ...}, ...}, ...}}, ...)
github.com/gptscript-ai/gptscript/pkg/engine/engine.go:189 +0x98c
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).call(_, {{0x1055d4088, 0x1}, {0x1053127a0, 0x14000100410}, 0x0, 0x1400029e7b0, {{{0x0, 0x0}, {0x140000ba0cd, ...}, ...}, ...}}, ...)
github.com/gptscript-ai/gptscript/pkg/runner/runner.go:104 +0x234
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).Run(0x14000288080, {0x1053127a0, 0x14000100410}, {{0x16b3ab4c0, 0x11}, {0x140000aa138, 0x13}, 0x140000b42d0, 0x0}, {0x14000020288, ...}, ...)
github.com/gptscript-ai/gptscript/pkg/runner/runner.go:60 +0x1f0
github.com/gptscript-ai/gptscript/pkg/cli.(*GPTScript).Run(0x140001b2100, 0x140001b6f08, {0x140000b4240, 0x2, 0x3})
github.com/gptscript-ai/gptscript/pkg/cli/gptscript.go:238 +0x848
github.com/acorn-io/cmd.Command.bind.func4(0x140001b6f08, {0x140000b4240, 0x2, 0x3})
github.com/acorn-io/cmd@v0.0.0-20240203032901-e9e631185ddb/builder.go:477 +0x188
github.com/spf13/cobra.(*Command).execute(0x140001b6f08, {0x14000126050, 0x3, 0x3})
github.com/spf13/cobra@v1.8.0/command.go:983 +0x840
github.com/spf13/cobra.(*Command).ExecuteC(0x140001b6f08)
github.com/spf13/cobra@v1.8.0/command.go:1115 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.8.0/command.go:1039
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.8.0/command.go:1032
github.com/acorn-io/cmd.Main(0x140001b6f08)
github.com/acorn-io/cmd@v0.0.0-20240203032901-e9e631185ddb/builder.go:74 +0x54
main.main()
github.com/gptscript-ai/gptscript/main.go:9 +0x44
@michaellzc Thanks for posting the results!
We should have a fix in soon, but it would still be awesome to figure out why this is happening.
Sorry to ask again, but could you try one last time with the --debug option set, then post the results (sans any sensitive info)?
@njhale
$ gptscript -v
gptscript version v0.1.4-701b9c8e-dirty
$ gptscript --cache=false --debug examples/echo.gpt 'Hello World!'
{
"level": "debug",
"logger": "pkg/loader",
"msg": "opened examples/echo.gpt",
"time": "2024-03-05T17:48:34-08:00"
}
{
"input": "Hello World!",
"level": "debug",
"logger": "pkg/monitor",
"msg": "Run started",
"program": {
"name": "examples/echo.gpt",
"entryToolId": "examples/echo.gpt:1",
"toolSet": {
"examples/echo.gpt:1": {
"description": "Returns back the input of the script",
"modelName": "gpt-4-turbo-preview",
"internalPrompt": null,
"arguments": {
"type": "object",
"properties": {
"input": {
"description": " Any string",
"type": "string"
}
}
},
"instructions": "echo \"${input}\"",
"id": "examples/echo.gpt:1",
"localTools": {
"": "examples/echo.gpt:1"
},
"source": {
"file": "examples/echo.gpt",
"lineNo": 1
}
}
}
},
"runID": "1",
"time": "2024-03-05T17:48:35-08:00"
}
{
"id": "1",
"input": "Hello World!",
"level": "info",
"logger": "pkg/monitor",
"msg": "started [main]",
"parentID": "",
"time": "2024-03-05T17:48:35-08:00",
"toolID": "examples/echo.gpt:1"
}
{
"id": "1",
"level": "info",
"logger": "pkg/monitor",
"msg": "sent [main]",
"parentID": "",
"time": "2024-03-05T17:48:35-08:00",
"toolID": "examples/echo.gpt:1"
}
{
"completionID": "1",
"id": "1",
"level": "debug",
"logger": "pkg/monitor",
"msg": "messages",
"parentID": "",
"request": {
"model": "gpt-4-turbo-preview",
"messages": [
{
"role": "system",
"content": "\nYou are task oriented system.\nYou receive input from a user, process the input from the given instructions, and then output the result.\nYour objective is to provide consistent and correct results.\nYou do not need to explain the steps taken, only provide the result to the given instructions.\nYou are referred to as a tool.\n"
},
{
"role": "system",
"content": "echo \"${input}\""
},
{
"role": "user",
"content": "Hello World!"
}
],
"temperature": 1e-8
},
"time": "2024-03-05T17:48:35-08:00",
"toolID": "examples/echo.gpt:1"
}
content [1] content | Waiting for model response... content [1] content | Hello World!{
"err": null,
"level": "debug",
"logger": "pkg/monitor",
"msg": "Run stopped",
"output": "",
"runID": "1",
"time": "2024-03-05T17:48:35-08:00"
}
@michaellzc thanks!
Comparing your debug log to mine, it looks like there's no response being returned by the model.
For posterity, the end of my debug output has a response from the model, which looks like this:
{
"completionID": "1",
"id": "1",
"level": "debug",
"logger": "pkg/monitor",
"msg": "messages",
"parentID": "",
"request": {
"model": "gpt-4-turbo-preview",
"messages": [
{
"role": "system",
"content": "\nYou are task oriented system.\nYou receive input from a user, process the input from the given instructions, and then output the result.\nYour objective is to provide consistent and correct results.\nYou do not need to explain the steps taken, only provide the result to the given instructions.\nYou are referred to as a tool.\n"
},
{
"role": "system",
"content": "echo \"${input}\""
},
{
"role": "user",
"content": "Hello World!"
}
],
"temperature": 1e-8
},
"time": "2024-03-06T12:46:36-05:00",
"toolID": "examples/echo.gpt:1"
}
# The messages below are missing from your debug output
{
"cached": false,
"completionID": "1",
"id": "1",
"level": "debug",
"logger": "pkg/monitor",
"msg": "messages",
"parentID": "",
"response": {
"role": "assistant",
"content": [
{
"text": "Hello World!"
}
]
},
"time": "2024-03-06T12:46:36-05:00",
"toolID": "examples/echo.gpt:1"
}
{
"id": "1",
"level": "info",
"logger": "pkg/monitor",
"msg": "ended [main]",
"output": "Hello World!",
"parentID": "",
"time": "2024-03-06T12:46:36-05:00",
"toolID": "examples/echo.gpt:1"
}
{
"err": null,
"level": "debug",
"logger": "pkg/monitor",
"msg": "Run stopped",
"output": "Hello World!",
"runID": "1",
"time": "2024-03-06T12:46:36-05:00"
}
@darkthread's fix has merged, so this should be fixed in main and will be in the next release (after 0.1.4).
I tried both the released version and a binary compiled from main. Regardless of which example script I ran, it always panics at the same place: https://github.com/gptscript-ai/gptscript/blob/abdf8d3ba52a94d8bab758e514211abc41e9d5c0/pkg/openai/client.go#L425
How to reproduce
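With only OPENAI_API_KEY set, run any example script with caching disabled (steps reconstructed from the rest of this thread):
$ gptscript --cache=false examples/echo.gpt 'Hello World!'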
Then, if you apply this patch:
Run the fresh binary:
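(Assuming a standard Go build from the repo root; the commands below are illustrative.)
$ go build -o gptscript .
$ ./gptscript --cache=false examples/echo.gpt 'Hello World!'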
Notes
On a cache hit (i.e., when the same input is used), it will not panic even without the patch. I suppose it returns the cached response right away without hitting the LLM. Of course, without the patch, you won't be able to cache the response in the first place.