[Open] yumetodo opened this issue 7 years ago
Initially I intended to write a general HTML scraper and downloader. Only when I later discovered that MPFR, GMP, etc. are also hosted on the same server as GCC did I move them to the FTP download function, leaving a single component for HTML (GitHub) downloads.
If you are willing to, you can implement a NEW function with a signature similar to the existing HTML download function and replace the calls to it. However, DO NOT REMOVE the original function, as it MAY BE useful in the future (in case things move to another webpage or to SourceForge).
Extracting the URL with a regex hack is dirty. Why don't you use the GitHub API?
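For illustration, a minimal sketch of what that could look like, assuming Python and a hypothetical repository name (the project's actual language and the existing download function's signature are not shown in this thread); it asks the GitHub Releases API for asset URLs instead of extracting them from HTML with a regex:

```python
# Sketch only: hypothetical owner/repo; the real project may use a
# different language and a different function signature.
import json
import urllib.request


def get_latest_release_asset_urls(owner: str, repo: str) -> list[str]:
    """Query the GitHub Releases API instead of scraping HTML with a regex."""
    api_url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
    req = urllib.request.Request(
        api_url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        release = json.load(resp)
    # Each release asset exposes a stable, documented download URL.
    return [asset["browser_download_url"] for asset in release.get("assets", [])]


if __name__ == "__main__":
    # Hypothetical example repository; substitute the real one.
    for url in get_latest_release_asset_urls("octocat", "Hello-World"):
        print(url)
```

Note that unauthenticated requests to the GitHub API are rate-limited (60 requests per hour per IP), which is usually fine for a build/download script but worth keeping in mind.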