Closed: evangriffiths closed this 6 months ago
It would be nice to be able to track the tokens used by a fact-check call via the Python API, e.g. something like:
```python
from factcheck import FactCheck

results = FactCheck().check_response("The sky is green")
print(results["token_count"])
```
which would contain prompt and completion token info, like
```python
{'num_raw_tokens': 4, 'num_checkworthy_tokens': 5, 'total_prompt_tokens': 1748, 'total_completion_tokens': 231}
```
A hacky example implementation (OpenAI client only) is here: https://github.com/Libr-AI/OpenFactVerification/pull/11
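For reference, here is a minimal sketch of what such tracking could look like for the OpenAI client, assuming the `openai>=1.0` SDK; the `TokenCountingClient` wrapper and its key names are illustrative assumptions, not the PR's actual implementation:

```python
# Illustrative sketch only: a thin wrapper that tallies prompt/completion
# tokens reported in OpenAI chat completion responses. The class name and
# the usage-dict keys are hypothetical, not part of OpenFactVerification.
from openai import OpenAI


class TokenCountingClient:
    """Wraps an OpenAI client and accumulates token usage across calls."""

    def __init__(self):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.usage = {"total_prompt_tokens": 0, "total_completion_tokens": 0}

    def chat(self, messages, model="gpt-3.5-turbo"):
        response = self.client.chat.completions.create(model=model, messages=messages)
        # The `usage` field on the chat completion response reports token counts.
        self.usage["total_prompt_tokens"] += response.usage.prompt_tokens
        self.usage["total_completion_tokens"] += response.usage.completion_tokens
        return response.choices[0].message.content


client = TokenCountingClient()
client.chat([{"role": "user", "content": "Is the sky green?"}])
print(client.usage)
# e.g. {'total_prompt_tokens': 14, 'total_completion_tokens': 23}
```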
Thank you, and you are welcome to submit a PR; this will be a good feature for transparency.
This feature has been added in v0.0.3.