Closed JFChi closed 1 year ago
Hi @JFChi,

I added the code as `query_gpt3.py` to the project. It expects your OpenAI API key in the environment variable `OPENAI_API_KEY` and a directory `output` in the TabLLM project (you must create it). Also, if you haven't installed promptsource yet, install it into your environment via `pip install promptsource`.

To run the script for the text serialization of the `jungle` dataset, execute the following command. Every 50 requests, the current results are stored in the `output` folder.
query_gpt3.py --input /root/TabLLM/datasets_serialized/jungle --task jungle --model gpt3
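The periodic checkpointing described above (saving partial results every 50 requests) can be sketched roughly as follows. This is only an illustration of the pattern, not the actual `query_gpt3.py` code; `run_queries` and the file names are hypothetical:

```python
import json
import os


def run_queries(examples, query_fn, output_dir="output", save_every=50):
    """Query an API for each example, checkpointing partial results.

    Hypothetical helper illustrating the save-every-50-requests
    behavior; not the actual query_gpt3.py implementation.
    """
    os.makedirs(output_dir, exist_ok=True)
    results = []
    for i, example in enumerate(examples, start=1):
        results.append(query_fn(example))
        # Persist intermediate results so a crash loses at most
        # `save_every` requests' worth of (paid) API calls.
        if i % save_every == 0:
            with open(os.path.join(output_dir, "partial_results.json"), "w") as f:
                json.dump(results, f)
    # Final save of all results.
    with open(os.path.join(output_dir, "results.json"), "w") as f:
        json.dump(results, f)
    return results
```

Checkpointing like this matters here because each request costs money, so re-running from scratch after a failure is expensive.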
Hope that helps!
Thanks, I will check that out.
Regarding the GPT-3 querying script, this deprecation seems to be relevant: https://help.openai.com/en/articles/6283125-what-happened-to-engines
Hi there,

In your `evaluate_external_dataset.py`, the `gpt3_output` file in `add_gpt3_zero_shot_predictions` is missing. Could you please share the script used to query GPT-3 and generate the zero-shot predictions? Thanks!