soulspark666 closed this issue 3 years ago
Sounds a little bloaty to me. Do we just separate URLs by newline characters, or do we apply some regex to extract URLs from arbitrary text? What if we get HTML? What if we get non-ASCII data? This can get complex quickly.
And it is really quite simple to script with common CLI tools.
I'm not going to add this, but feel free to make a PR with proper tests for all edge cases and we'll see.
I asked a friend to get something working for the sample I provided, and we quickly whipped something up. Unfortunately, it may not cover all the cases you pointed out, since I don't have test data for them, but feel free to comment on it.
import requests

url = 'https://ngosang.github.io/trackerslist/trackers_best.txt'
r = requests.get(url, allow_redirects=True)
text = r.content.decode("utf-8")
# Trackers in this list are separated by blank lines.
trackers = text.split('\n\n')
trackers.pop()  # drop the empty trailing entry left by the final newline
for tracker in trackers:
    print('-t ' + tracker + " \\")
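The output is one -t <tracker> \ line per tracker, so it can be pasted directly into a multi-line command invocation.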
Looks like a quickly whipped-up script that does what you need for that particular URL. I'm not sure what you want me to say about it. I'd probably use a one-liner shell script with curl. The double newlines are tricky to split on in shell scripts, so I'd filter out all the empty lines first instead.
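Something like this (untested sketch, assuming curl and awk are available and the list keeps one tracker per non-empty line):

curl -fsSL https://ngosang.github.io/trackerslist/trackers_best.txt | awk 'NF { print "-t " $0 " \\" }'

awk's NF test skips the blank separator lines, so there's no need to split on double newlines at all.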
I know it may be possible to script this, but it would be nice if there were a flag that could fetch the list automatically and also combine it with trackers defined via the -t flag. Like: