Closed. powerspowers closed this issue 3 years ago.
TileGAN stores the generator network in a pkl file; the content is a tfutil.Network object. You can examine the contents of the object using:
import pickle  # tfutil must be importable for unpickling to succeed

with open('network.pkl', 'rb') as file:  # path to your network snapshot
    network = pickle.load(file)
print(network.__getstate__())
The output will look something like:
{'version': 2,
'name': 'Gs',
'static_kwargs': {'resolution': 512, 'num_channels': 3, 'label_size': 0},
'build_module_src': { . . . },
'build_func_name': 'G_paper',
'variables': (odict_keys(['lod', '4x4/Dense/weight', '4x4/Dense/bias', '4x4/Conv/weight', '4x4/Conv/bias', 'ToRGB_lod7/weight', 'ToRGB_lod7/bias', '8x8/Conv0_up/weight', '8x8/Conv0_up/bias', '8x8/Conv1/weight', '8x8/Conv1/bias', 'ToRGB_lod6/weight', 'ToRGB_lod6/bias', '16x16/Conv0_up/weight', '16x16/Conv0_up/bias', '16x16/Conv1/weight', '16x16/Conv1/bias', 'ToRGB_lod5/weight', 'ToRGB_lod5/bias', '32x32/Conv0_up/weight', '32x32/Conv0_up/bias', '32x32/Conv1/weight', '32x32/Conv1/bias', 'ToRGB_lod4/weight', 'ToRGB_lod4/bias', '64x64/Conv0_up/weight', '64x64/Conv0_up/bias', '64x64/Conv1/weight', '64x64/Conv1/bias', 'ToRGB_lod3/weight', 'ToRGB_lod3/bias', '128x128/Conv0_up/weight', '128x128/Conv0_up/bias', '128x128/Conv1/weight', '128x128/Conv1/bias', 'ToRGB_lod2/weight', 'ToRGB_lod2/bias', '256x256/Conv0_up/weight', '256x256/Conv0_up/bias', '256x256/Conv1/weight', '256x256/Conv1/bias', 'ToRGB_lod1/weight', 'ToRGB_lod1/bias', '512x512/Conv0_up/weight', '512x512/Conv0_up/bias', '512x512/Conv1/weight', '512x512/Conv1/bias', 'ToRGB_lod0/weight', 'ToRGB_lod0/bias']), { . . . } )}
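The weights themselves live in the 'variables' entry, keyed by the names shown above. A minimal, dependency-free sketch of walking that layout (the state below is a mock with placeholder lists; in a real pkl the values are NumPy weight arrays):

```python
from collections import OrderedDict

# Mock of the 'variables' entry of a tfutil.Network state dict;
# real values are NumPy arrays, mocked here as nested lists.
variables = OrderedDict([
    ('4x4/Dense/weight', [[0.0] * 8] * 512),
    ('4x4/Dense/bias',   [0.0] * 8),
])

def variable_names(variables):
    """Return the ordered variable names; this is the layout that
    must match exactly for weights to be transplanted."""
    return list(variables.keys())

print(variable_names(variables))
```

Comparing this name list against another model's layout is the first sanity check before any conversion attempt.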
If you can provide the weights in exactly the same way (which I unfortunately doubt), you can probably convert your network into a pkl that can be loaded directly. Alternatively, you can definitely adapt TileGAN to work with BMSGGAN. :)
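In principle that conversion amounts to loading the existing pkl, replacing the arrays in its 'variables' entry with your own (same names, same shapes), and pickling the result again. A sketch of the idea, using a plain dict in place of the real tfutil.Network state (all names and values here are illustrative):

```python
import pickle
from collections import OrderedDict

# Stand-in for the state of a pickled tfutil.Network; a real file
# would be loaded with pickle.load and carry NumPy arrays.
state = {
    'name': 'Gs',
    'variables': OrderedDict([('4x4/Dense/weight', [0.0, 0.0])]),
}

# Replacement weights converted from another framework; they must use
# exactly the same variable names and shapes as the original network.
converted = {'4x4/Dense/weight': [1.5, -0.5]}

for name, array in converted.items():
    if name not in state['variables']:
        raise KeyError(f'unexpected variable: {name}')
    state['variables'][name] = array

# Round-trip through pickle, as TileGAN would when loading the file.
restored = pickle.loads(pickle.dumps(state))
print(restored['variables']['4x4/Dense/weight'])
```

The hard part is not the pickling but producing `converted` with the exact layer names and shapes ProGAN expects.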
Finally back to this project after a ton of research and wandering. It looks like I can take my PyTorch-trained models, convert them to ONNX, and then to the TensorFlow pkl format. So is the output of ProGAN a standard TensorFlow model file?
so it looks like I can take my pytorch trained models and convert them to ONNX and then to Tensorflow pkl format
Being able to convert between formats sounds promising; however, I'm sceptical that your models have exactly the same network shape, and thus the required weights, to be used in ProGAN. You need an identical network structure to be able to reuse pre-trained weights.
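A quick way to check that precondition is to compare variable names and shapes between the two models before attempting any transplant. A dependency-free sketch (both shape maps below are illustrative; in practice one side comes from the pkl's 'variables' and the other from your PyTorch state_dict):

```python
def compare_layouts(src, dst):
    """Report variables that are missing or shape-mismatched between
    two {name: shape} maps; an empty report means transplanting
    weights by name is at least structurally possible."""
    problems = []
    for name, shape in dst.items():
        if name not in src:
            problems.append(f'missing in source: {name}')
        elif src[name] != shape:
            problems.append(f'shape mismatch {name}: {src[name]} vs {shape}')
    for name in src:
        if name not in dst:
            problems.append(f'extra in source: {name}')
    return problems

# Illustrative layouts: here the source's dense layer is half the
# width the target expects, so the transplant would fail.
progan = {'4x4/Dense/weight': (512, 8192), '4x4/Dense/bias': (8192,)}
bmsggan = {'4x4/Dense/weight': (512, 4096), '4x4/Dense/bias': (4096,)}

print(compare_layouts(bmsggan, progan))
```

An empty report is necessary but not sufficient: the two frameworks must also agree on weight ordering conventions (e.g. transposed dense kernels) for the copied values to behave identically.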
I have many trained models I created in BMSGGAN (PyTorch) that produce separate D, G, and G-shadow PTH files. If I output these as a pkl file, would TileGAN be able to take them in and produce the working files? It probably shows my naivete about the differences between the saved models of ProGAN and BMSGGAN, and between TensorFlow and PyTorch. That said, it sure would be great if I could massage the 10 or so modern-art models I've created rather than retraining them with ProGAN on TensorFlow.