GoutamKelam opened this issue 2 years ago
What dataset are you training it on?
I'm getting noisy output too when running the provided example (see below). Or does some pre-training need to be done?
import torch
from video_diffusion_pytorch import Unet3D, GaussianDiffusion

model = Unet3D(
    dim = 64,
    use_bert_text_cond = True,  # this must be set to True to auto-use the bert model dimensions
    dim_mults = (1, 2, 4, 8),
)

diffusion = GaussianDiffusion(
    model,
    image_size = 32,    # height and width of frames
    num_frames = 5,     # number of video frames
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
)

videos = torch.randn(3, 3, 5, 32, 32)  # video (batch, channels, frames, height, width)

text = [
    'a whale breaching from afar',
    'young girl blowing out candles on her birthday cake',
    'fireworks with blue and green sparkles'
]

loss = diffusion(videos, cond = text)
loss.backward()
# after a lot of training

sampled_videos = diffusion.sample(cond = text, cond_scale = 2)
sampled_videos.shape # (3, 3, 5, 32, 32)
@DaddyWesker Obviously, with videos = torch.randn(3, 3, 5, 32, 32) you are literally just providing random noise as the training input.
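To actually train on real data, something like this sketch would replace the randn call (clip.mp4 is a placeholder path, and the [0, 1] pixel range is my assumption about what the model expects):

import torch.nn.functional as F
from torchvision.io import read_video

frames, _, _ = read_video('clip.mp4', pts_unit = 'sec')    # (num_frames, height, width, channels), uint8
frames = frames[:5]                                        # keep num_frames = 5 frames
frames = frames.permute(0, 3, 1, 2).float() / 255.0        # -> (frames, channels, height, width), in [0, 1]
frames = F.interpolate(frames, size = (32, 32), mode = 'bilinear', align_corners = False)
videos = frames.permute(1, 0, 2, 3).unsqueeze(0)           # -> (batch, channels, frames, height, width)
loss = diffusion(videos, cond = ['a placeholder caption']) # same call as above, on real frames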
@oxjohanndiep
Hm, I'm just running the provided code. What kind of videos should I provide then? I can't see any info about that in the README.
You can try using Moving MNIST; I also tried the MSR-VTT dataset to test training with text annotations as well.
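For reference, a rough sketch of turning the standard Moving MNIST file into the .gif files the Trainer's data folder expects (mnist_test_seq.npy is the usual release file, with shape (20, 10000, 64, 64); the output folder is just an example):

import numpy as np
from PIL import Image

seqs = np.load('mnist_test_seq.npy')    # (frames, num_sequences, 64, 64), uint8
seqs = seqs.transpose(1, 0, 2, 3)       # -> (num_sequences, frames, 64, 64)

for i, seq in enumerate(seqs):
    frames = [Image.fromarray(f) for f in seq]
    frames[0].save(f'./data/{i}.gif', save_all = True, append_images = frames[1:], duration = 120, loop = 0)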
Should the videos have some correlation with the text? For example, if you are saying that Moving MNIST could be used, should the text look like "moving digit five" or something like that?
Yes, but I have not found any annotations out there for Moving MNIST, hence I only trained without them.
If you have found anything in this area, let me know.
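One untested workaround is to render your own moving-digit clips from labeled MNIST, so each clip comes with a caption. A sketch, with the bouncing motion and caption format made up for illustration:

import numpy as np
from torchvision.datasets import MNIST

WORDS = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
mnist = MNIST('./mnist', train = True, download = True)

def make_clip(img, label, num_frames = 20, size = 64, step = 2):
    # paste a 28x28 digit onto a black canvas and bounce it around
    canvas = np.zeros((num_frames, size, size), dtype = np.uint8)
    digit = np.array(img)
    x = y = 0
    dx = dy = step
    for t in range(num_frames):
        canvas[t, y:y + 28, x:x + 28] = digit
        if not 0 <= x + dx <= size - 28: dx = -dx
        if not 0 <= y + dy <= size - 28: dy = -dy
        x += dx
        y += dy
    return canvas, f'moving digit {WORDS[label]}'

clip, caption = make_clip(*mnist[0])    # clip: (20, 64, 64) uint8, caption: 'moving digit five'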
Okay. I will.
@oxjohanndiep How long did you train this diffusion model on Moving MNIST, and did you get any reasonable results?
I trained it for maybe 100 epochs, which took me a good 10 hours with CUDA enabled. No, I did not get any good results, but maybe we can have a video chat to discuss this if you want.
@DaddyWesker
Hm, I haven't seen some of those parameters in the training code in the README or in the Trainer class. I guess you wrote your own trainer?
Yes I did. Do you get different results with the Trainer class?
I'm currently trying to train this model using the Trainer. When I get some results, I'll let you know.
Awesome
The model is currently training. Here are some results: the first one is at epoch 36000, the second at epoch 70000. Not sure whether those results are good or not.
How long did you train it for in terms of time?
That looks amazing!
Several days on a 1080 Ti GPU, from Monday till today.
That's very interesting, I have never trained it for that long, around 6 hours at most! Will give it a go!
Btw, it does look like you have more than 5 frames per video. Did you increase the number of frames accepted by the model as well?
20 frames, as I remember, same as the Moving MNIST samples. Though I can only use batch_size = 1 =)
Here are the parameters I've changed:
diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    num_frames = 20,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
).cuda()
And the batch_size in the Trainer, of course.
Alright, let me increase the frame count as well and give it a go. I'll report the results in a couple of days!
@DaddyWesker How did you actually plot those little GIFs of the results?
@DaddyWesker And have you tried testing it on a more sophisticated dataset, e.g. Kinetics-600 with its text annotations? It would be very interesting to see how the results are conditioned on text.
No, I haven't tested it on a different dataset. I'll see whether I have enough time for this.
About the GIFs: in this repo, video_diffusion_pytorch/video_diffusion_pytorch.py contains the function
def video_tensor_to_gif(tensor, path, duration = 120, loop = 0, optimize = True):
    # requires: import torchvision.transforms as T
    images = map(T.ToPILImage(), tensor.unbind(dim = 1))
    first_img, *rest_imgs = images
    first_img.save(path, save_all = True, append_images = rest_imgs, duration = duration, loop = loop, optimize = optimize)
    return images
I'm using this one. It saves the gif (note the return value is a map of PIL frames rather than an ndarray, and the unpacking has already consumed it, so in practice you just use the saved file).
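If you need to read a gif back into a tensor afterwards, a standalone sketch along the same lines (the repo has its own readers for the Trainer; the grayscale conversion is an assumption for Moving MNIST clips):

import torch
import torchvision.transforms as T
from PIL import Image, ImageSequence

def gif_to_tensor(path):
    img = Image.open(path)
    frames = [T.ToTensor()(frame.convert('L')) for frame in ImageSequence.Iterator(img)]
    return torch.stack(frames, dim = 1)    # (channels, frames, height, width)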
@DaddyWesker Have to admit, your results look far better than mine:
This took me 3 days to train, and I only got 1000 epochs. How were you able to run 70k epochs? And what learning rate did you choose?
train_lr = 1e-4
Well, I don't know what to say about "how was I able to train for 70k epochs". I just ran the training code from the README on Moving MNIST. Nothing special.
import torch
import torchvision.transforms as T  # needed by video_tensor_to_gif below
from video_diffusion_pytorch import Unet3D, GaussianDiffusion, Trainer

def video_tensor_to_gif(tensor, path, duration = 120, loop = 0, optimize = True):
    images = map(T.ToPILImage(), tensor.unbind(dim = 1))
    first_img, *rest_imgs = images
    first_img.save(path, save_all = True, append_images = rest_imgs, duration = duration, loop = loop, optimize = optimize)
    return images

model = Unet3D(
    dim = 64,
    dim_mults = (1, 2, 4, 8),
)

diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    num_frames = 20,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
).cuda()

trainer = Trainer(
    diffusion,
    './data',                        # this folder path needs to contain all your training data, as .gif files, of correct image size and number of frames
    train_batch_size = 1,
    train_lr = 1e-4,
    save_and_sample_every = 1000,
    train_num_steps = 700000,        # total training steps
    gradient_accumulate_every = 2,   # gradient accumulation steps
    ema_decay = 0.995,               # exponential moving average decay
    amp = True                       # turn on mixed precision
)

trainer.train()

sampled_videos = diffusion.sample(batch_size = 4)
u_sampled_videos = sampled_videos.unbind(dim = 0)   # dim = 0 splits the batch into single videos (dim = 1 would split channels)
for i in range(len(u_sampled_videos)):
    video_tensor_to_gif(u_sampled_videos[i], "result_" + str(i) + ".gif")
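A note on the checkpoints: with save_and_sample_every = 1000 the Trainer writes numbered milestones, so resuming and sampling from the EMA weights should look roughly like this (the milestone number is arbitrary, and the ema_model attribute is an assumption about the Trainer's internals):

trainer.load(35)    # loads ./results/model-35.pt (milestone number is hypothetical)
sampled_videos = trainer.ema_model.sample(batch_size = 4)    # sample from the EMA copy of the model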
May I ask whether you use normalization for your dataset?
The _"name text_use_bertcls is not defined" error occurs when trying to use explicit texts as mentioned in the 3rd example. The error occurs as the variable is not directly linked to the class in the function _"plosses". On fixing that, when I ran the code, the output samples generated are random noise. I ran the inference for 1K and 50K steps respectively. Can you please guide if I am missing any step.
Attaching the output generated.
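If it helps, the NameError suggests a one-line fix inside GaussianDiffusion.p_losses: reading the flag off the instance (a guess from the error message, not verified against the current repo):

# inside GaussianDiffusion.p_losses, presumably:
# if text_use_bert_cls:        # raises NameError: name 'text_use_bert_cls' is not defined
if self.text_use_bert_cls:     # read the flag set on the instance in __init__
    ...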