Open b-adkins opened 6 years ago
It should run fine on your input, assuming it is an image type supported by pillow. Preprocessing is done by the script. Image size is a bit tricky, as it influences the output, I usually run between 500-1500 pixel lengths, but it really depends on how detailed the image is.
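To illustrate the sizing advice above, here is a minimal sketch using Pillow. `fit_long_side` and `prepare_sketch` are hypothetical helpers (not part of this repo) that scale a scan so its longer side lands in that 500-1500 px range and convert it to grayscale:

```python
def fit_long_side(w, h, target=1000):
    """Return (width, height) scaled so the longer side equals `target`,
    preserving the aspect ratio."""
    scale = target / max(w, h)
    return max(1, round(w * scale)), max(1, round(h * scale))

def prepare_sketch(path, target=1000):
    """Load a scan, convert to grayscale, and resize into the pixel range
    that reportedly works well (hypothetical helper, not part of the repo)."""
    from PIL import Image  # Pillow does the actual decoding/resampling
    img = Image.open(path).convert("L")
    return img.resize(fit_long_side(*img.size, target=target), Image.LANCZOS)
```

Since the output depends on input size, it may be worth trying a few different `target` values on the same drawing and comparing results.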
Thanks for the model, it looks really awesome!
But I have a bit to add to this: I'm running `pytorch` version 0.3.1 and tried both the `gan` and `mse` models, but `mse` does not output anything. And this is the output I get from `gan` (it looks nothing like the original image):
Any recommendations?
P.S. If you need any extra information to help diagnose it, just ask. Happy to chat :)
Alright, I've managed to get output with the `mse` model.
A couple of extra thoughts.
This file was originally a .jpg; I wonder if that has anything to do with it?
@jakubLangr Could I see the input image? I'm assuming the network is firing on the paper texture and the contrast is very low, which could explain those results.
Yes, that's probably the case. So how did you obtain the training dataset? It does not look scanned, but I could never get the lighting so perfect.
For example this image produced similar results:
Or this one
The models were not trained with data taken from pictures, which explains the low performance on the images you supplied. Retraining with data more similar to the images you want to use it with would work better (training code is available now). Our new approach should be able to handle that much better; however, I still have to prepare the code and models to make them public.
Hi, sure, no problem. Thanks for your response, and let me know when the models are public!
Could you post the code that you use? I get two errors when trying to run.
```
$ python simplify.py
Traceback (most recent call last):
  File "simplify.py", line 4, in <module>
```
Hi! I was interested in this library as a user, not a developer. I hate inking my comics and wanted an AI inker. I could run the included example data with no issue, but the application failed on my own drawings.
What does it take to run my own scanned pencil drawing through your neural net? Is there preprocessing required? Are there specific details that need to be correct in an image file? What range of resolutions does it accept? E.g., human heads from 30 px to 700 px in height.