kathrinse / be_great

A novel approach for synthesizing tabular data using pretrained large language models
MIT License

Not able to generate synthetic data after model fitting #45

Open MedhaviShruti opened 5 months ago

MedhaviShruti commented 5 months ago

I have a tabular dataset with 11 rows and 25 columns. I trained two models, the first with the following command:
model = GReaT(llm='distilgpt2', batch_size=32, epochs=25)

and tried to generate synthetic data for this table after fitting, but generation fails with the error below:

An error has occurred: Breaking the generation loop! To address this issue, consider fine-tuning the GReaT model for an longer period. This can be achieved by increasing the number of epochs. Alternatively, you might consider increasing the max_length parameter within the sample function. For example: model.sample(n_samples=10, max_length=2000)

I also tried a second model, model = GReaT(llm='distilgpt2', batch_size=25, epochs=100), but it fails with the same error.

Please let me know how the commands should be set up so that generation succeeds.

unnir commented 2 months ago

I suggest training it longer, 100+ epochs.

However, 11 rows and 25 columns is a very small dataset. For data this small I would instead recommend prompt engineering with ChatGPT, Mixtral, or Claude.
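
If you do want to keep using GReaT on this table, a rough sketch of the longer-training route could look like the following; the batch size, epoch count, and max_length here are only illustrative starting points to tune, not recommended values:

from be_great import GReaT

# df is assumed to be your 11x25 pandas DataFrame;
# with so few rows, a batch size smaller than the dataset makes more sense
model = GReaT(llm='distilgpt2', batch_size=8, epochs=200)
model.fit(df)

# 25 columns produce long "column is value" strings per row,
# so give the sampler plenty of tokens to finish each row
synthetic_data = model.sample(n_samples=10, max_length=2000)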

bvanbreugel commented 1 month ago

Hi all,

I very much appreciate the clean and easy-to-use repo. In my limited experience with it, however, I've run into OP's issue many times: the generation loop breaks and no data is output. I've tried increasing the number of epochs (e.g. 200) and max_length, but neither helps reliably. This remains true even when datasets are not tiny (e.g. 100 samples). To reproduce, use for example the UCI Spambase dataset (58 features):

from be_great import GReaT
from sklearn.datasets import fetch_openml

# load spam dataset, reduce to 100 samples
data = fetch_openml(data_id=44, as_frame=True).frame[:100]

# train model, takes about 12 minutes on a single GPU
model = GReaT(llm='distilgpt2', batch_size=32, epochs=200, fp16=True)
model.fit(data.to_numpy(), column_names=list(data.columns))

# generate; this raises the error "Breaking the generation loop!"
synthetic_data = model.sample(n_samples=1000, max_length=2000)

print(len(synthetic_data))
assert len(synthetic_data) > 0  # this will fail

Of course, in an example like the above you would expect the model to overfit, but it's frustrating that it doesn't generate anything at all. Is there any guidance on when GReaT can be used reliably?

unnir commented 1 month ago

Thank you for providing your script and sorry for the issues with our model's sampling function.

I agree that the current behavior of the model is not optimal, and we should guide users better. I will try to make an update in the near future.

bvanbreugel commented 1 month ago

Thanks for the quick response! That'd be very helpful 😀

iamamiramine commented 5 days ago

Hello, I am facing the same issue with the NHANES 1999-2014 dataset, which consists of 6,833 samples (rows) and 29 features (columns). I trained the model for 300 epochs and tried generating with different max_length values, but generation still fails. Any suggestions on how to fix this issue?
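
In case it helps, a sketch of roughly what my sampling attempts look like is below; the specific max_length values are placeholders rather than the exact ones I used:

# model is an already fitted GReaT instance; max_length values are placeholders
for max_length in (500, 1000, 2000):
    try:
        synthetic = model.sample(n_samples=100, max_length=max_length)
        print(max_length, "->", len(synthetic), "rows")
    except Exception as err:
        print(max_length, "-> failed:", err)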