Open YH-UtMSB opened 1 year ago
Hi @YH-UtMSB,
we relied on the T-Few fine-tuning method and, hence, src/pl_train.py
was also mostly adapted from their codebase. Could you please give their module a try? You can find it here: https://github.com/r-three/t-few/blob/master/src/pl_train.py
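If it helps, that single file can also be fetched directly from the command line. This is just a sketch; it assumes the standard raw.githubusercontent.com mirror of the GitHub path linked above:

```shell
# Download pl_train.py directly from the t-few repository
# (raw.githubusercontent.com serves the raw file behind the github.com link above)
curl -fsSL -o pl_train.py \
  https://raw.githubusercontent.com/r-three/t-few/master/src/pl_train.py
```

Cloning the whole t-few repository is still the safer option, since pl_train.py imports other modules from that codebase.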
If you have any additional problems, please let us know!
Best, Stefan
Hi authors, I'd like to reproduce the results reported in Table 1 of the paper "TabLLM: Few-shot Classification of Tabular Data with Large Language Models", especially the TabLLM rows for each dataset. I understand that this requires pl_train.py from the t-few repository you mentioned, but that script is not in the current repository. Could you show me how to properly run the training of the T0 LLM and run inference on each of the serialized datasets?
Hi @hansfarrell ,
sorry for the late reply and thanks for your interest in our work!
As stated in the readme, we only include our changes to the t-few codebase. Hence, if you clone the t-few repository and add the files from the t-few
folder of our repository, you should be able to run t-few/bin/few-shot-pretrained-100k.sh
to reproduce our results.
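The overlay step described above might look roughly like this (a sketch only; the location of the TabLLM checkout and the overlay direction are assumptions):

```shell
# Clone the upstream t-few codebase
git clone https://github.com/r-three/t-few.git

# Assuming the TabLLM repository has already been cloned alongside it,
# copy TabLLM's modified files on top of the t-few checkout
cp -r TabLLM/t-few/. t-few/

# Run the few-shot experiment script from inside t-few
cd t-few
bash bin/few-shot-pretrained-100k.sh
```

Note that `cp -r TabLLM/t-few/. t-few/` overwrites any upstream files that TabLLM replaced while leaving the rest of the t-few tree intact.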
Let us know if this works for you or if you encounter any issues!
Best, Stefan
Hello @YH-UtMSB and @hansfarrell,
just as a heads-up: based on the feedback in the issues, we have now updated the readme with all steps needed to reproduce a performance entry from the paper. Maybe that is also helpful for you!
Hello TabLLM authors, thanks for providing the source code. I am especially interested in the fine-tuning scheme of your model. However, I could not find any training script in the current repository. The closest thing I found is 'src.pl_train', which is mentioned in the shell script located in t-few/bin and appears to be the module executed there. Could you kindly provide this module?