Usually, more than 1000 people are using waifu2x.udp.jp at the same time, so a single request cannot be allowed to take a long time.
Currently, images are cached for 30 minutes, which already amounts to more than 5 GB of cache, so caches cannot be kept for long periods. Also, for privacy policy reasons, I prefer to keep image caches for as short a time as possible.
Is it possible to do this for the local version?
Where are the processed images stored in the local version?
> Where are the processed images stored in the local version?
You can specify it with the `--cache-dir` option; the default is `tmp/waifu2x_cache`.
Also, you can specify the cache expiration time with the following option (30 minutes by default):

```
--cache-ttl CACHE_TTL
                      cache TTL (min) (default: 30)
```
However, the cache files are stored in diskcache format, so they cannot be viewed with common image viewers.
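If you do want to look inside the cache, the `diskcache` Python package can open the cache directory directly. Below is a minimal sketch, assuming each cache value holds the encoded image bytes; the actual key/value layout waifu2x uses is not documented here, so treat this as a starting point rather than a working recipe:

```python
# Hypothetical sketch: dump cached blobs from a diskcache store to files.
# Assumes values are raw encoded image bytes; adjust if waifu2x stores
# something else (e.g. tuples or pickled objects).
from pathlib import Path
from diskcache import Cache

CACHE_DIR = "tmp/waifu2x_cache"  # the --cache-dir default mentioned above
OUT_DIR = Path("cache_dump")
OUT_DIR.mkdir(exist_ok=True)

with Cache(CACHE_DIR) as cache:
    for i, key in enumerate(cache.iterkeys()):
        value = cache.get(key)
        if isinstance(value, bytes):  # only dump raw byte blobs
            (OUT_DIR / f"image_{i}.png").write_bytes(value)
```

Note that entries expire after the `--cache-ttl` window, so anything older than 30 minutes (by default) may already be gone.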
> Is it possible to do this for the local version?
All you need is web crawler (scraper) software; you should use a separate tool for that purpose. The workflow is:

1. Download the images with the crawler software
2. Process the downloaded images with the waifu2x CLI
How can this be done? I found this list of crawlers: https://github.com/BruceDone/awesome-crawler
It depends on the structure of the target website, so you will need to find suitable software yourself. For example, for Twitter you can use gallery-dl (see its list of supported sites).
If it is not a well-known website, you may need to write your own rules for a generic web crawler. In any case, I am not knowledgeable about crawler software, so find a good tool for yourself.
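As a concrete illustration of the two-step workflow above, here is a minimal sketch that shells out to gallery-dl and then to the local waifu2x CLI. The flags shown (`-d` for gallery-dl's destination, `-m`/`-n`/`-i`/`-o` for waifu2x) are assumptions based on each tool's typical usage; verify them against `gallery-dl --help` and `python -m waifu2x.cli --help`:

```python
# Hypothetical pipeline: download images with gallery-dl, then upscale
# them in batch with the local waifu2x CLI.
import subprocess

URL = "https://example.com/gallery"  # placeholder target site
DOWNLOAD_DIR = "downloads"
OUTPUT_DIR = "upscaled"

# Step 1: crawl/download the images.
subprocess.run(["gallery-dl", "-d", DOWNLOAD_DIR, URL], check=True)

# Step 2: batch-process the downloaded directory with waifu2x.
subprocess.run(
    ["python", "-m", "waifu2x.cli",
     "-m", "noise_scale", "-n", "2",  # denoise level 2 + 2x upscale
     "-i", DOWNLOAD_DIR, "-o", OUTPUT_DIR],
    check=True,
)
```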
Will a waifu2x crawler version be created?
I found https://github.com/1j01/rezzy-zoom-and-enhance, but it doesn't really help; the Art/Scans model is better suited for manga.
> Will a waifu2x crawler version be created?
I will not develop it.