Open YMX2022 opened 3 months ago
Hi, I suggest using GPUs with more memory for running the data-generation code. Additionally, consider generating the bounding boxes/masks first using the bounding-box/mask-generation section of the code and saving those results locally; once saved, you can run the instance-caption generation code on them. This two-stage approach reduces memory usage because you never need all of the models loaded simultaneously.
Hope it helps.
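A minimal sketch of the two-stage idea described above. The functions `generate_boxes_and_masks` and `generate_caption` are hypothetical stand-ins for the repo's actual detection/segmentation and captioning models, which I haven't reproduced here; the point is just that stage 1 writes its results to disk so stage 2 can run in a separate process with only the captioning model loaded:

```python
import json
from pathlib import Path

# Hypothetical stand-ins for the repo's models; substitute the real
# detection/segmentation and BLIP2 calls from the data-generation scripts.
def generate_boxes_and_masks(image_path):
    # Stage 1: only the box/mask models need to be in GPU memory here.
    return [{"bbox": [0, 0, 10, 10], "mask_rle": "..."}]

def generate_caption(image_path, instance):
    # Stage 2: only the captioning model needs to be in GPU memory here.
    return f"caption for bbox {instance['bbox']}"

def stage1(image_paths, out_file="instances.json"):
    # Run the box/mask models for every image, then save and exit,
    # freeing their GPU memory before stage 2 starts.
    results = {p: generate_boxes_and_masks(p) for p in image_paths}
    Path(out_file).write_text(json.dumps(results))
    return out_file

def stage2(out_file="instances.json"):
    # Reload the saved instances and attach a caption to each one.
    results = json.loads(Path(out_file).read_text())
    return {
        p: [dict(inst, caption=generate_caption(p, inst)) for inst in insts]
        for p, insts in results.items()
    }

saved = stage1(["img_0.jpg", "img_1.jpg"])
captions = stage2(saved)
```

Running `stage1` and `stage2` as two separate script invocations (rather than one process, as in this condensed sketch) is what actually caps the peak GPU memory.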
Not sure if you have solved the problem, but I can run the data-generation process on two RTX A5000 GPUs by splitting the models: load the BLIP2 model onto one GPU and the rest of the models onto the other.
Doing so uses ~24 GB on the BLIP2 GPU and ~11 GB on the GPU holding the other models. Hope this helps!
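A sketch of that split in PyTorch, assuming the models expose the usual `nn.Module` interface. Tiny `nn.Linear` layers stand in for the real BLIP2 and detection/segmentation models (which are repo-specific), and the sketch falls back to CPU when two GPUs aren't available:

```python
import torch
import torch.nn as nn

# Choose two devices when available; fall back to CPU so the sketch still runs.
dev_blip2 = torch.device("cuda:0" if torch.cuda.device_count() >= 1 else "cpu")
dev_rest = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

# Stand-ins for the real models; replace with the repo's actual model objects.
blip2 = nn.Linear(8, 4).to(dev_blip2)    # captioning model alone on one GPU
detector = nn.Linear(8, 4).to(dev_rest)  # remaining models share the other GPU

def run(image_feats):
    # Each model's inputs must be moved to that model's own device.
    boxes = detector(image_feats.to(dev_rest))
    caption_logits = blip2(image_feats.to(dev_blip2))
    # Move results back to CPU so downstream code is device-agnostic.
    return boxes.cpu(), caption_logits.cpu()

boxes, caption_logits = run(torch.randn(2, 8))
```

The only changes needed in the generation script are the `.to(...)` calls at model-load time and matching `.to(...)` calls on each model's inputs.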
Hi! Wonderful work. I am currently training the model on my own data using the dataset-generation code you provided. However, the following error is returned:
.
I am running on 2 RTX 4090 GPUs with 24 GB each; is this enough memory for training-data generation?
If not, how can I revise the script to reduce the GPU memory needed for data generation?
Thank you.