Closed repu1sion closed 6 years ago
Try this Model_LowMem.py: https://pastebin.com/aiGGxyRs
And the command to run it:
python3 faceswap.py train -A data/target_faces -B data/source_faces -m model -t LowMem -p -ag -bs 16
Runs successfully on a 2GB GTX 950.
Thanks! It works. Maybe this model should be accepted as LowMem then?
@qzmenko lol, so it actually worked for you? :) I gave up, couldn't get it running :( I also have 2 GB of video memory
I will create PR for it today
Yes, it worked. And the results are great :)
Regarding the results: after a day of training they look pretty bad. The faces are blurry and pale even in the preview. Roughly how long does a model like this need to train before you can do an acceptable faceswap for a video?
LowMem differs from the standard model in just two lines: ENCODER_DIM = 512 instead of 1024, and the layer x = self.conv(1024)(x) is commented out.
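To see why those two changes matter so much, here is a rough, hypothetical back-of-the-envelope calculation (the 4x4x1024 bottleneck shape is an assumption about the original model, not taken from this thread): the dense layers on either side of the encoder bottleneck dominate the parameter count, and they scale linearly with ENCODER_DIM.

```python
# Hypothetical illustration: parameter count of the flatten -> Dense(ENCODER_DIM)
# -> Dense(flat_size) bottleneck, assuming the conv stack ends at 4x4x1024.
def dense_params(flat_size, encoder_dim):
    """Weights + biases for the two dense layers around the bottleneck."""
    down = flat_size * encoder_dim + encoder_dim  # flatten -> Dense(encoder_dim)
    up = encoder_dim * flat_size + flat_size      # Dense(encoder_dim) -> flat_size
    return down + up

flat = 4 * 4 * 1024  # assumed bottleneck tensor size
print(dense_params(flat, 1024))  # standard model
print(dense_params(flat, 512))   # LowMem: roughly half the parameters
```

Halving ENCODER_DIM roughly halves these dense-layer parameters (and the optimizer state that goes with them), which is where most of the VRAM savings come from.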
But that's still not enough to run it under Ubuntu 16.04, CUDA 8, and 1.7 GB of free video RAM: it fails with OOM at any batch size, even bs=1 and bs=2.
What about adding some configurable parameters here, like reducing the number of filters or ENCODER_DIM, or something else? It would also be great to have documentation describing the few main parameters and their influence on quality. FakeApp, for example, lets you select the number of layers, nodes, etc.
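A minimal sketch of what that could look like, assuming nothing about the real faceswap CLI (the --encoder-dim and --filter-scale flags below are invented names for illustration):

```python
# Hypothetical sketch: exposing model size as CLI flags so low-VRAM users
# can shrink the network without editing the model file.
import argparse

parser = argparse.ArgumentParser(description="train (sketch)")
parser.add_argument("--encoder-dim", type=int, default=1024,
                    help="size of the encoder bottleneck; lower = less VRAM")
parser.add_argument("--filter-scale", type=float, default=1.0,
                    help="multiplier applied to every conv layer's filter count")

# Simulate e.g.: python faceswap.py train --encoder-dim 512 --filter-scale 0.5
args = parser.parse_args(["--encoder-dim", "512", "--filter-scale", "0.5"])

ENCODER_DIM = args.encoder_dim

def filters(n):
    """Scale a layer's nominal filter count by the configured multiplier."""
    return max(1, int(n * args.filter_scale))

print(ENCODER_DIM, filters(128))
```

The model-building code would then call filters(128) instead of hard-coding 128, so one flag shrinks every conv layer at once.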
P.S. I managed to run it with ENCODER_DIM = 64 and bs=16, but the results are not very good (after 15 hours of training).