johnwalking closed this issue 2 years ago
Hey, I'm sorry for not responding... I've just noticed this issue. Did you make it work?
Yes, I eventually solved the problem by force-setting the parameters in create_model_and_diffusion to match the pretrained model's parameters. But now I have another question: how can I avoid the CPU memory allocation error? Is it possible to use disk storage for the huge tensor variables? Please help me figure it out.
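In case it helps anyone else, the override I mean looks roughly like this with improved-diffusion's script_util helpers. This is only a sketch: the hyperparameter values and the checkpoint path below are placeholders and have to match whatever the pretrained model was actually trained with.

```python
import torch
from improved_diffusion.script_util import (
    model_and_diffusion_defaults,
    create_model_and_diffusion,
)

# Start from the library defaults and force the options to match the
# pretrained checkpoint (the values below are just examples).
opts = model_and_diffusion_defaults()
opts.update(dict(
    image_size=64,
    num_channels=128,
    num_res_blocks=3,
    diffusion_steps=4000,
    noise_schedule="cosine",
    learn_sigma=True,
))

model, diffusion = create_model_and_diffusion(**opts)
# "model.pt" is a placeholder path to the pretrained checkpoint.
model.load_state_dict(torch.load("model.pt", map_location="cpu"))
model.eval()
```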
Yep, you can definitely store the features on disk and update FeatureDataset to use np.memmap, for example. However, since you have 64x64 images, I would reduce the number of features or samples. For example, 8192-d features for 50 images of 64x64 should consume about 6-7 GB.
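A rough sketch of such a memmap-backed dataset, assuming the features were already dumped to disk as a flat float32 array of shape (num_pixels, feature_dim) and the labels as int64 of shape (num_pixels,); the file names and shapes are placeholders, not the repo's actual format:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class MemmapFeatureDataset(Dataset):
    """Per-pixel (feature, label) pairs backed by on-disk arrays instead of RAM."""

    def __init__(self, feat_path, label_path, num_pixels, feature_dim):
        # mode="r": the files already exist and are only read lazily.
        self.X = np.memmap(feat_path, dtype=np.float32, mode="r",
                           shape=(num_pixels, feature_dim))
        self.y = np.memmap(label_path, dtype=np.int64, mode="r",
                           shape=(num_pixels,))

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        # Copy a single row into RAM only when it is requested.
        x = torch.from_numpy(np.array(self.X[idx]))
        y = torch.tensor(int(self.y[idx]))
        return x, y
```

For 50 images of 64x64 with 8192-d float32 features, the feature file is 50 * 64 * 64 * 8192 * 4 bytes, roughly 6.7 GB, which matches the estimate above; it just lives on disk instead of in RAM.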
Hi, I tried this approach with image size 64, but when I changed the size to 256, even using np.memmap to initialize the label and X data, it said I don't have enough memory for the transpose calculation. How can I fix this problem? Thanks.
Transpose by itself doesn't make a copy. However, a series of permutes, reshapes, and so on can produce a deep copy and allocate extra memory. For example, it makes a deep copy of the tensor here. So just preprocess the data in batches.
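Something along these lines processes the activations in small chunks and writes them straight into the memmap, so the permute/reshape never materializes the full (num_pixels, feature_dim) array in RAM. The shapes and the output path are assumptions, not the repo's exact pipeline:

```python
import numpy as np
import torch

def write_features_in_batches(activations, out_path, batch_size=4):
    """Flatten (N, C, H, W) activations to (N*H*W, C) on disk, batch by batch.

    `activations` is assumed to be a CPU tensor of extracted features;
    `out_path` is a placeholder for the on-disk feature file.
    """
    n, c, h, w = activations.shape
    out = np.memmap(out_path, dtype=np.float32, mode="w+",
                    shape=(n * h * w, c))
    for start in range(0, n, batch_size):
        chunk = activations[start:start + batch_size]        # (b, C, H, W)
        # Only this small chunk is permuted/reshaped (and hence copied).
        flat = chunk.permute(0, 2, 3, 1).reshape(-1, c)
        out[start * h * w:(start + len(chunk)) * h * w] = flat.detach().cpu().numpy()
    out.flush()
    return out
```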
How do you apply this to 64x64 datasets? I have a big problem now, please help me, thank you very much.
Hey, I'm confused. I just trained a model with image size 64 and 4000 steps with improved-diffusion, but when I try to use it for segmentation, I get an error showing that many tensor sizes don't match. Can you help me figure out what I should adjust in experiments/ddpm.json?