hamzapehlivan / StyleRes


dimensions of direction/boundary file #3

Open shartoo opened 11 months ago

shartoo commented 11 months ago

Thank you for your awesome work; it achieves the best image reconstruction quality so far. Your work is better than pSp, e4e, and PTI (and hundreds of times faster). I have a question about the shape of the direction/boundary array. Your paper uses the W+ latent space, which should have shape 18x512, but the example direction/boundary arrays in this repo are 1x512, which should be the Z/W latent space? What if I want to use a direction/boundary array with shape 18x512 (from StyleGAN2) to edit the inverted latent code?
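For concreteness, the shape question can be sketched with NumPy (array values here are dummies; in practice the directions would be loaded from the repo's `.pt` files): a 1x512 direction broadcasts across all 18 W+ layers, while an 18x512 direction applies a separate offset per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

w_plus = rng.standard_normal((18, 512))      # inverted latent code in W+
strength = 2.0

# Case 1: a 1x512 direction (Z/W space) broadcasts over all 18 layers,
# so every layer of the W+ code is shifted by the same vector.
direction_w = rng.standard_normal((1, 512))
edited_broadcast = w_plus + strength * direction_w

# Case 2: an 18x512 direction edits each W+ layer with its own offset.
direction_wp = rng.standard_normal((18, 512))
edited_layerwise = w_plus + strength * direction_wp
```

Both results stay in W+ with shape 18x512; only the per-layer granularity of the edit differs.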

hamzapehlivan commented 11 months ago

Thank you for your interest in our paper.

First, you can put your extracted directions into this folder: editings/interfacegan_directions/

Let's say you put a new direction named gender.pt there. As with the other edits, change the options/editing_options/template.py file by adding another dictionary entry like: dict(method='interfacegan', edit='gender', strength=2)
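As a rough sketch, the edit list in options/editing_options/template.py might then look like the following (the 'smile' entry is an assumed pre-existing example; only the 'gender' line comes from this thread):

```python
# Hypothetical shape of options/editing_options/template.py after the change.
# The new 'gender' entry points at editings/interfacegan_directions/gender.pt.
edits = [
    dict(method='interfacegan', edit='smile', strength=2),   # assumed existing entry
    dict(method='interfacegan', edit='gender', strength=2),  # new direction
]
```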

I think this will work. Best,

shartoo commented 10 months ago

Boundary files trained by InterFaceGAN, and config files from GANSpace, have shown poor editing capacity because of the entangled nature of directions in the Z/W/W+ spaces. Is there a better way to apply good direction editing to the current model, like StyleTransformer, which achieves reasonable results on both Label-based Editing and Reference-based Editing?

hamzapehlivan commented 9 months ago

Hi. Yes, it is possible to obtain a better editing direction; however, it is not included in this repo. If you want to code it yourself, here is an outline of the required steps for Reference-based Editing:

Although this particular method is not tested, StyleRes is designed to work with unknown or undiscovered edits. I tried it with the well-known DragGAN method, and it worked quite well. Therefore, I believe it should work with the Reference-based editing method described in StyleTransformer.

Hope this helps,

shartoo commented 9 months ago

Thank you for your patience! I'm trying an approach called latent transformer for better direction editing; it is trained directly on pairs of inverted latent codes and attribute labels. I'll use StyleRes as the generator and retrain the latent transformer network, and I'll post further results. StyleTransformer is designed for low-resolution $256\times 256$ image reconstruction and editing. I did some testing and found the results nowhere near as good as the paper described. Its editing method is tightly bound to its framework: the editing method itself seemed reasonable, but the framework didn't work very well, so I abandoned it.
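The idea above can be illustrated with a toy forward pass of a latent-transformer-style editor (all names, shapes of the hidden layer, and weights are illustrative stand-ins, not the actual paper's architecture): an MLP takes a flattened W+ code plus a target attribute value and predicts a residual that is added back to the latent.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 18 * 512                                   # flattened W+ dimensionality
W1 = rng.standard_normal((64, d + 1)) * 0.01   # untrained stand-in weights
W2 = rng.standard_normal((d, 64)) * 0.01

def edit_latent(w_plus, attr):
    """Predict a residual edit for w_plus conditioned on a target attribute."""
    x = np.concatenate([w_plus.ravel(), [attr]])
    h = np.maximum(W1 @ x, 0.0)                # ReLU hidden layer
    return w_plus + (W2 @ h).reshape(18, 512)  # residual added to the latent

w = rng.standard_normal((18, 512))             # inverted latent code
w_edit = edit_latent(w, attr=1.0)
```

Training such a network on (inverted latent, attribute label) pairs, as described above, would replace the random weights with learned ones.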

shartoo commented 8 months ago

Some editing results from the latent transformer are below. They seem good, but not as clean as the original images.

(attached images: Bags_Under_Eyes_iter668_ori, Bags_Under_Eyes_iter668_edit, Bangs_iter668_edit, Eyeglasses_iter668_edit, Gray_Hair_iter668_edit, Young_iter336_edit)