zhiqi-li opened 1 year ago
I also want to ask if the training log of train_seg.py can be provided as a reference.
We use distributed training on four V100 GPUs, with a batch size of 2 on each.
For the COCO-Stuff dataset, one image corresponds to several captions. What I want to ask is whether the amount of data in one epoch equals the number of captions or the number of images. If it is the number of captions, one epoch has more than 600k samples, and it seems it would take much longer than 2 days for 10 epochs.
Same issue here. I trained on 8 V100s with a total batch size of 8x2, and one epoch took about 10 hours.
In the COCO dataset, each image has 5 captions. In the current open-source code, all 5 captions of an image appear within one epoch. The training procedure we actually used randomly selects one caption for each image. The adapter converges quickly during training.
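A minimal sketch of that per-image random-caption sampling (the dataset class and field names below are illustrative, not the repo's actual code):

```python
# Illustrative sketch: one dataset entry per image, with a caption drawn at random
# each time the item is fetched, so an epoch covers the number of images
# rather than the number of captions.
import random
from torch.utils.data import Dataset

class CocoSegCaptionDataset(Dataset):
    def __init__(self, samples):
        # samples: list of dicts such as {"image": ..., "seg": ..., "captions": [5 strings]}
        self.samples = samples

    def __len__(self):
        return len(self.samples)                     # number of images, not captions

    def __getitem__(self, idx):
        item = self.samples[idx]
        caption = random.choice(item["captions"])    # pick one of the 5 captions
        return item["image"], item["seg"], caption
```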
@zhiqi-li @blackmagicianZ Hi, have you successfully trained the model to achieve results close to those in the paper?
@MC-E Hi, I am training your code on semantic segmentation maps. Do you remember how many epochs it takes for the adapter to converge in this setting?
@MC-E Hi, I would like to train your code on the CelebA dataset. May I ask whether I should use the train.py script or write my own train_human_face.py? Thanks a lot for your help!
@MC-E @zhiqi-li Hi, I can't get training to run on multiple GPUs; each run only uses one GPU. How do I start multi-GPU training?
@MERONAL To kick off multi-GPU training properly, make sure the RANK and WORLD_SIZE parameters are set before training starts (these are torch distributed-training parameters). Also be careful with the default GPU_IDS specified in the code: they are configured for 4 GPUs (0, 1, 2, 3), so you will need to adjust them for your own setup. Additionally, remember to launch with the torchrun command and the --nproc_per_node flag so the processes are orchestrated correctly.
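Roughly, the launcher populates those environment variables and the training script reads them, with a launch command along the lines of `torchrun --nproc_per_node=2 train_seg.py --ckpt models/sd-v1-4.ckpt --bsize 2`. A minimal sketch of that setup, assuming torchrun is used (the function below is illustrative, not the repo's exact code):

```python
# Illustrative distributed setup: torchrun sets RANK / WORLD_SIZE / LOCAL_RANK
# in the environment; each process binds to its own GPU and joins the process group.
import os
import torch
import torch.distributed as dist

def init_distributed() -> int:
    rank = int(os.environ["RANK"])              # global rank, set by torchrun
    world_size = int(os.environ["WORLD_SIZE"])  # total number of processes
    local_rank = int(os.environ["LOCAL_RANK"])  # index of this process on its node
    torch.cuda.set_device(local_rank)           # one GPU per process
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return local_rank
```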
I am curious, how much memory did each of your GPUs have? Is there any way to work with this repo (the SD branch, excluding the XL part) on an 8-16 GB V100 GPU? I am rather frustrated with Google; there are no A100s available right now, and all my work is stuck on the machine! @bychen7
How many training steps does it take for the controllable ability to emerge? I found the loss is already very small at the beginning of training.
@wanghao14 Hi, could you please add me on WeChat so I can ask some questions about training? My email is dmm2020@sjtu.edu.cn
@zhiqi-li Hi, could you please add me on WeChat so I can ask some questions about training? My email is dmm2020@sjtu.edu.cn.
Post your question here instead of requesting personal contact information.
Hi Wanghao, https://github.com/TencentARC/T2I-Adapter/blob/16bba674b472121d5a86e3ed6b935f91d516bc74/train_sketch.py#L231 How do you obtain the mask images of train2017_color? Are you using stuff_train2017_pixelmaps? Looking forward to your reply.
@dmmSJTU Yes, I used the segmentation maps provided by COCO and converted the IDs (pixel classes) to RGB values using this code. I have also trained an image-inpainting model conditioned on segmentation, based on the idea of T2I-Adapter, and it works well.
Hope this could help you.
You can also refer to:
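In case it helps, the ID-to-RGB step can look roughly like this (a sketch of my own; the palette and paths below are placeholders, not the script linked above):

```python
# Illustrative ID -> RGB conversion: map a single-channel pixel-class map
# to a color segmentation image using a fixed per-class palette.
import numpy as np
from PIL import Image

# Placeholder palette: one RGB triple per class ID (index 0..255).
PALETTE = np.random.default_rng(0).integers(0, 256, size=(256, 3), dtype=np.uint8)

def ids_to_rgb(id_map_path: str, out_path: str) -> None:
    ids = np.array(Image.open(id_map_path))   # HxW array of class IDs
    rgb = PALETTE[ids]                        # HxWx3 via index lookup
    Image.fromarray(rgb).save(out_path)

# e.g. ids_to_rgb("stuff_train2017_pixelmaps/xxx.png", "train2017_color/xxx.png")
```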
Thank you. When I used “CUDA_VISIBLE_DEVICES=0,1 python3 -m torch.distributed.launch --nproc_per_node=2 --use-env train_seg.py --ckpt models/sd-v1-4.ckpt --bsize 2”, it produced:
Could you help me solve it?
@wanghao14 @zhiqi-li @MERONAL Hi, when I train train_seg.py on a single GPU, it produces:
Besides, when I use "CUDA_VISIBLE_DEVICES=0,1 python3 -m torch.distributed.launch --nproc_per_node=2 --use-env train_seg.py --ckpt models/sd-v1-4.ckpt --bsize 2" to run on multiple GPUs, it produces:
Have you encountered the same problem? Looking forward to your reply!
Hi Wanghao, can I add you as a friend to ask a few questions? Asking here is inconvenient, and the responses are slow.
@dmmSJTU There appears to be an issue with the initialization of distributed training. Please verify the number of GPUs available in your environment and check whether the code includes a command that restricts training to a particular graphics card, for example os.environ['CUDA_VISIBLE_DEVICES'] = '0'. This issue might not be related to this code.
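A quick way to verify this (just an illustration, not code from the repo):

```python
# Sanity check before launching distributed training: confirm how many GPUs
# this process can actually see (it should be at least the --nproc_per_node you request).
import torch

print(torch.cuda.is_available())   # False means no usable CUDA device at all
print(torch.cuda.device_count())   # limited by CUDA_VISIBLE_DEVICES if it is set
```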
I am sorry, but the question you've asked isn't pertinent to this project. It seems to be related to your personal environment configuration, and I'm not interested in it.
Hi, the paper reports that the model is trained with a batch size of 8 on a 32 GB V100, but I get out of memory with the default settings in train_seg.py. When I set the batch size to 4, the memory usage is about 27 GB. I am slightly confused by this, because the learning rate may need to be adjusted when I use a different batch size from yours.
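If it matters, I would assume the usual linear-scaling heuristic when changing the batch size (the numbers below are placeholders, not the repo's actual defaults):

```python
# Linear learning-rate scaling: a common rule of thumb, not confirmed for this repo.
base_batch_size = 8      # placeholder: the batch size reported in the paper
base_lr = 1e-5           # placeholder: whatever the default config uses
my_batch_size = 4        # what fits in memory on my GPU
my_lr = base_lr * my_batch_size / base_batch_size
print(my_lr)             # -> 5e-06
```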