confident-ai / deepeval

The LLM Evaluation Framework
https://docs.confident-ai.com/
Apache License 2.0

Faithfulness metric prints extraction_limit for each test case #1130

Closed: rjiangnju closed this issue 8 hours ago

rjiangnju commented 3 days ago

When running evaluate with the faithfulness metric, the template prints extraction_limit for every test case, which is very annoying. The print comes from line 36 of deepeval/metrics/faithfulness/template.py:

print(extraction_limit)

I don't see why this is needed; it should be removed.
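
For context, a minimal sketch of what the fix could look like, assuming the print is a leftover debug statement inside the template class (the class and method names below are illustrative, not the actual deepeval source): either delete the line outright or downgrade it to a debug-level log so it stays silent by default.

```python
# Hypothetical sketch of the fix; names are illustrative, not deepeval's API.
import logging

logger = logging.getLogger(__name__)


class FaithfulnessTemplate:
    @staticmethod
    def generate_claims(actual_output: str, extraction_limit: str) -> str:
        # Before: print(extraction_limit)  -> spams stdout once per test case
        # After: only visible when debug logging is explicitly enabled,
        # or simply remove the line altogether.
        logger.debug("extraction_limit: %s", extraction_limit)
        return (
            "Extract factual claims from the text.\n"
            f"{extraction_limit}\n\n"
            f"Text:\n{actual_output}"
        )
```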

penguine-ip commented 8 hours ago

@rjiangnju it's just a mistake