alex-sage / logo-gen

Accompanying code for the paper "Logo Synthesis and Manipulation with Clustered Generative Adversarial Networks"

How to do a quick test of logo generation with your Pretrained Models? #19

Open · HymEric opened this issue 4 years ago

HymEric commented 4 years ago

How can I do a quick test of logo generation with your pretrained models? What are the respective command lines for DCGAN and WGAN? Thank you.

alex-sage commented 4 years ago

Hi, for detailed instructions on WGAN inference please refer to this answer: https://github.com/alex-sage/logo-gen/issues/8#issuecomment-462795707 DCGAN has to be run directly with the appropriate flag, which then starts an IPython environment that can be used very similarly. This should be clearer from looking at the main DCGAN script. Hope this helps!
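In short, the inference session from that answer looks roughly like this (a sketch; adjust the `load_config` name to whichever pretrained run you downloaded):

```python
import tensorflow as tf
import vector
from logo_wgan import WGAN

session = tf.Session()
# Load the pretrained run by its config name (here the LLD-logo ResNet model)
wgan = WGAN(session, load_config='LLD-logo-rc_64')
vec = vector.Vector(wgan)
vec.show_random()  # sample a batch of logos and display them
```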

rookiexiao123 commented 4 years ago

Hi, I followed that answer: #8 (comment) for WGAN inference. When I run `python logo_wgan.py` there is no error, but when I run the following snippet (copied from your answer):

```python
import tensorflow as tf
import numpy as np
import vector
from logo_wgan import WGAN
import os

session = tf.Session()
wgan = WGAN(session, load_config='LLD-logo-rc_64')
print('go on vector')
vec = vector.Vector(wgan)
vec.show_random()
```

it fails with the errors below:

```
2019-12-17 16:07:42.880433: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-17 16:07:42.880456: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-17 16:07:42.880461: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-12-17 16:07:42.880465: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-17 16:07:42.880484: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2019-12-17 16:07:42.995579: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-17 16:07:42.995845: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce RTX 2070
major: 7 minor: 5 memoryClockRate (GHz) 1.62
pciBusID 0000:01:00.0
Total memory: 7.79GiB
Free memory: 7.35GiB
2019-12-17 16:07:42.995860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2019-12-17 16:07:42.995865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2019-12-17 16:07:42.995871: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0)
2019-12-17 16:07:43.176882: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0)
Settings dict:
ACGAN: 0
ACGAN_SCALE: 1.0
ACGAN_SCALE_G: 0.1
ARCHITECTURE: resnet-64
BATCH_SIZE: 64
CONDITIONAL: 1
DATA: data/LLD-logo.hdf5
DATA_LOADER: lld-logo
DECAY: 1
DIM_D: 64
DIM_G: 64
GEN_BS_MULTIPLE: 2
INCEPTION_FREQUENCY: 0
ITERS: 100000
KEEP_CHECKPOINTS: 5
LABELS: labels/resnet/rc_64
LAMBDA: 10
LAYER_COND: 1
LR: 0.0002
MODE: wgan-gp
NORMALIZATION_D: 0
NORMALIZATION_G: 1
N_CRITIC: 5
N_GENERATOR: 3
N_GPUS: 1
N_LABELS: 64
OUTPUT_DIM: 12288
OUTPUT_RES: 64
RUN_NAME: LLD-logo-rc_64
SUMMARY_FREQUENCY: 1
bn_init: False
train: False
go on vector
Error in `python': free(): invalid next size (fast): 0x00007f2310010520
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f25f3eab7e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f25f3eb437a]
......
[memory map trimmed: glibc, libpthread, and /dev/nvidiactl mappings]
Aborted (core dumped)
```

I tried this with TensorFlow 1.3.0 and Python 3.5. I found that your code doesn't work with TensorFlow 0.12.1, because tf 0.12.1 doesn't have `layers`; the other packages are the same. I'd appreciate any suggestions you may have.
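As a sanity check on the environment, here is how I confirm which TensorFlow the script actually sees (just a sketch; `tf.layers` only exists from roughly TF 1.0 onwards):

```python
import tensorflow as tf

print(tf.__version__)         # should print a 1.x release, e.g. 1.3.0
print(hasattr(tf, 'layers'))  # False on 0.12.x, which is why the code fails there
```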

rookiexiao123 commented 4 years ago

I found that the problem is in this snippet:

```python
session = tf.Session()
wgan = WGAN(session, load_config='LLD-logo-rc_64')
print('go on vector')
vec = vector.Vector(wgan)
vec.show_random()
```

If I comment out the last call:

```python
session = tf.Session()
wgan = WGAN(session, load_config='LLD-logo-rc_64')
print('go on vector')
vec = vector.Vector(wgan)
#vec.show_random()
```

it runs OK. I will look into what `show_random()` actually does.
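If the crash really is in the display path, one thing worth trying (purely a guess; I have not checked whether `show_random` draws with matplotlib) is forcing a non-interactive backend before anything imports pyplot, so no GUI code runs at all:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen instead of opening a window
```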

rookiexiao123 commented 4 years ago

I found where the problem is: the model can't be restored successfully. The relevant code is `restore_model`:

```python
def restore_model(self):
    # initialize saver
    if self.saver is None:
        self.saver = tf.train.Saver()
    # try to restore checkpoint
    ckpt = tf.train.get_checkpoint_state(self.save_dir)
    if ckpt:
        # compare the saved config against the current one
        with open(os.path.join(self.run_dir, 'config.json'), 'r') as f:
            old_dict = json.load(f)
        new_dict = self.cfg.__dict__
        equal = True
        for key, value in old_dict.items():  # iteritems in Python 2
            if (key != 'train') and (key[:3] != 'bn') and new_dict[key] != value:
                print('New: %s: %s' % (key, new_dict[key]))
                print('Old: %s: %s' % (key, value))
                equal = False
        if not equal:
            raise Exception('Config for existing checkpoint is not the same, aborting!')
        self.saver.restore(self.session, ckpt.model_checkpoint_path)
```
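To check whether the checkpoint itself is the problem, here is a quick inspection sketch (the `save_dir` path is a placeholder; point it at whatever `self.save_dir` resolves to for this run):

```python
import tensorflow as tf

# Placeholder path: the checkpoint directory of the LLD-logo-rc_64 run
save_dir = 'runs/LLD-logo-rc_64/checkpoints'

ckpt = tf.train.get_checkpoint_state(save_dir)
if ckpt is None:
    print('No checkpoint state found in %s' % save_dir)
else:
    print('Would restore from: %s' % ckpt.model_checkpoint_path)
    # List the variables actually stored in the checkpoint file:
    reader = tf.train.NewCheckpointReader(ckpt.model_checkpoint_path)
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(name, shape)
```

If this prints a sensible variable list, the checkpoint files are fine and the mismatch is on the graph side; if it fails here, the checkpoint itself (or the path) is the issue.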