Closed: yhshu closed this issue 1 year ago
The observations in the prompts are truncated to save context length. The observations in the regular runs are normal.
I see, it's truncation. Thanks.
parser.add_argument(
"--max_obs_length",
type=int,
help="when not zero, will truncate the observation to this length before feeding to the model",
default=1920,
)
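For reference, the truncation this flag controls can be sketched as follows; this is a minimal illustration, not the repo's actual code, and the helper name `truncate_observation` and character-level clipping are assumptions:

```python
def truncate_observation(obs: str, max_obs_length: int) -> str:
    """Clip an observation to at most max_obs_length characters.

    A value of 0 disables truncation, mirroring the "when not zero"
    behavior described in the --max_obs_length help text.
    """
    if max_obs_length and len(obs) > max_obs_length:
        return obs[:max_obs_length]
    return obs

# Example: a long page dump is clipped before being fed to the model.
page = "line\n" * 1000          # 5000 characters
print(len(truncate_observation(page, 1920)))  # 1920
print(len(truncate_observation(page, 0)))     # 5000 (truncation disabled)
```

The actual implementation may truncate by tokens rather than characters, but the effect on the prompt examples is the same: long observations are cut to fit the context window.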
This paper is really nice. I would like to ask: is the observation given as LLM input only 5 lines at a time, as shown in the prompt example? How does the LLM locate the relevant element in a web page? Does it depend entirely on the scroll operation to move the view? Thank you, looking forward to your reply.