openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Log injection alert #1542

Open arpitjain099 opened 3 months ago

arpitjain099 commented 3 months ago

Describe the bug

Potential log injection alert here: https://github.com/openai/evals/blob/234bcde34b5951233681455faeb92baaaef97573/evals/elsuite/multistep_web_tasks/docker/flask-playwright/app.py#L187-L187

Logging user-controlled request data without sanitization lets an attacker embed carriage returns or newlines and forge extra log entries.
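A minimal sketch of a mitigation, assuming the flagged line logs user-supplied request data directly (the `command` variable and logger name below are hypothetical stand-ins, not the actual code in app.py):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("flask-playwright")


def sanitize_for_log(value: str) -> str:
    """Escape CR/LF so user input cannot split into forged log lines."""
    return value.replace("\r", "\\r").replace("\n", "\\n")


# Hypothetical example: 'command' stands in for the user-controlled
# data logged on the flagged line. Without sanitization, the embedded
# newline would appear as a separate, attacker-forged log entry.
command = "legit input\nINFO:flask-playwright:forged entry"
logger.info("received command: %s", sanitize_for_log(command))
```

Using a parameterized logging call (`%s` plus an argument) together with escaping the line separators keeps each request confined to a single log record.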

To Reproduce

https://github.com/openai/evals/blob/234bcde34b5951233681455faeb92baaaef97573/evals/elsuite/multistep_web_tasks/docker/flask-playwright/app.py#L187-L187

Code snippets

https://github.com/openai/evals/blob/234bcde34b5951233681455faeb92baaaef97573/evals/elsuite/multistep_web_tasks/docker/flask-playwright/app.py#L187-L187

OS

macOS

Python version

3.11.4

Library version

1.40.1