tywil04 / slavartdl

Simple tool written in Go(lang) to download music using the SlavArt Divolt server
GNU General Public License v3.0

Pass a .txt as download arguments. #16

Closed Porygon31 closed 1 year ago

Porygon31 commented 1 year ago

Hi,

I think that being able to pass, as the download argument, the path to a text file containing one download link per line would make batch downloads much more practical. I find it impractical to enter or assemble the various links one by one directly in the console.

In summary: add the ability for the download argument to accept the path to a .txt file containing one download host/link per line.

Thank you!

bacon-cheeseburger commented 1 year ago

Something I think would be cool is if it ran as a daemon and monitored a user-defined path for files (.slav?) containing urls. That way you wouldn't have to run it all the time, and you could easily paste link(s) into new files and it would automagically pick them up and download the link(s). This is basically the same thing other download clients do, like usenet clients (.nzb) and torrent clients (.torrent). Would definitely add convenience in my opinion.

Porygon31 commented 1 year ago

@bacon-cheeseburger A bit like a monitored folder? As soon as a new .txt file formatted with 1 link per line is saved in the monitored folder, the file is automatically scanned and the download process starts.

Is that what you're describing as an improvement?

bacon-cheeseburger commented 1 year ago

@Porygon31 Yes, exactly. A monitored folder that's checked for new .slav files, for example. The .slav file(s) are just normal text files with 1 url per line. When a new file is found, the download process is automatically started on those urls. If a new *.slav file is found while a download process is active, the new url(s) are simply appended to the active download batch.

Also, if there's a malformed or bad url, an error is logged and reported to the user, and the url is skipped. It's pretty easy to track the source of the url for error reporting too (e.g. `invalid url imported from <FILENAME OR "command line">`).

tywil04 commented 1 year ago

> Hi,
>
> I think that being able to pass, as the download argument, the path to a text file containing one download link per line would make batch downloads much more practical. I find it impractical to enter or assemble the various links one by one directly in the console.
>
> In summary: add the ability for the download argument to accept the path to a .txt file containing one download host/link per line.
>
> Thank you!

I agree that this should be added. I'm thinking of a --fromFile flag, and urls read from the file will be combined with any urls passed via regular arguments.
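For illustration, here is a minimal sketch of the file-reading side of such a flag; the helper name `readUrlsFromFile` and the example path are assumptions for this sketch, not the project's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// readUrlsFromFile is a hypothetical helper: it reads one url per line
// from the given text file, skipping blank lines and surrounding whitespace.
func readUrlsFromFile(path string) ([]string, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	var urls []string
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		if line := strings.TrimSpace(scanner.Text()); line != "" {
			urls = append(urls, line)
		}
	}
	return urls, scanner.Err()
}

func main() {
	urls, err := readUrlsFromFile("urls.txt") // example path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(urls)
}
```

Urls read this way could then simply be appended to whatever urls were passed as regular arguments.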

tywil04 commented 1 year ago

> Something I think would be cool is if it ran as a daemon and monitored a user-defined path for files (.slav?) containing urls. That way you wouldn't have to run it all the time, and you could easily paste link(s) into new files and it would automagically pick them up and download the link(s). This is basically the same thing other download clients do, like usenet clients (.nzb) and torrent clients (.torrent). Would definitely add convenience in my opinion.

The issue with using a daemon is coming up with a way that makes sense for Windows users. Daemons are a well-understood and commonly used thing on Linux/MacOS; I don't think this is the case on Windows (I could be very wrong, so inform me if so).

I want to maintain feature parity between Windows, MacOS and Linux, so features specific to one operating system won't be added to this tool.

I'm open to suggestions on how this could be implemented well on Windows, though.

bacon-cheeseburger commented 1 year ago

> > Something I think would be cool is if it ran as a daemon and monitored a user-defined path for files (.slav?) containing urls. That way you wouldn't have to run it all the time, and you could easily paste link(s) into new files and it would automagically pick them up and download the link(s). This is basically the same thing other download clients do, like usenet clients (.nzb) and torrent clients (.torrent). Would definitely add convenience in my opinion.
>
> The issue with using a daemon is coming up with a way that makes sense for Windows users. Daemons are a well-understood and commonly used thing on Linux/MacOS; I don't think this is the case on Windows (I could be very wrong, so inform me if so).
>
> I want to maintain feature parity between Windows, MacOS and Linux, so features specific to one operating system won't be added to this tool.
>
> I'm open to suggestions on how this could be implemented well on Windows, though.

Maybe something like this? https://github.com/emcrisostomo/fswatch

Another option could be using Cmd.StdinPipe to run a PowerShell command to do it. While it's not quite an all-in-one solution, it'd only require a check to see whether the host OS is Windows or not.
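For what it's worth, Go has a cross-platform file-watching library, github.com/fsnotify/fsnotify, that supports Windows as well as Linux/MacOS, so a watched folder wouldn't necessarily break parity. A rough sketch of the watched-folder idea built on it; the folder path is a placeholder, and `.slav` is just the extension floated above:

```go
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch a user-defined drop folder for new url files.
	if err := watcher.Add("/path/to/watched/folder"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// React only to newly created .slav files.
			if event.Op&fsnotify.Create == fsnotify.Create &&
				strings.HasSuffix(event.Name, ".slav") {
				log.Printf("new url file: %s", event.Name)
				// A real implementation would parse the file here
				// and queue its urls for download.
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```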

tywil04 commented 1 year ago

I have decided that I'm not going to implement a daemon/service; instead, I will create a flag for reading urls from standard input, separated by newlines.

I will also write a guide that shows people on Windows/MacOS/Linux how they can create a daemon that watches the file system and interacts with slavartdl via standard input.

I don't think having this as functionality built into slavartdl really benefits most users; it would instead add confusion and unnecessary complexity.
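As an illustration of the standard-input side (a sketch only, not the actual implementation), one detail worth handling is reading stdin only when something is actually piped in, so interactive use doesn't block:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// If stdin is a terminal (character device), nothing was piped in,
	// so don't block waiting for input.
	stat, err := os.Stdin.Stat()
	if err != nil || stat.Mode()&os.ModeCharDevice != 0 {
		fmt.Println("no piped input")
		return
	}

	// Collect one url per line until EOF, skipping blank lines.
	var urls []string
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		if line := strings.TrimSpace(scanner.Text()); line != "" {
			urls = append(urls, line)
		}
	}
	fmt.Println("urls from stdin:", urls)
}
```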

tywil04 commented 1 year ago

The code for allowing reading from a text file is now in the repo, it will be included in the next release.

Pull request if interested: https://github.com/tywil04/slavartdl/pull/17.

I will next be working on introducing support for reading from standard input.

tywil04 commented 1 year ago

Reading from standard input is now merged.

Pull request if interested: https://github.com/tywil04/slavartdl/pull/18

bacon-cheeseburger commented 1 year ago

> Reading from standard input is now merged.
>
> Pull request if interested: #18

How is this supposed to work? Before, I could run `slavartdl download <URL> -o "<OUTPUT DIR>" --timeout 600 -C "<PATH TO CONFIG JSON>"` and it worked.

Now when I do that I get `Error: failed to read urls from file`. If I append a `-s` flag to the end of the command line, I still get the same error.

Also, it says `Stdin urls need to be seperated by a newline`, but how are you supposed to do that on the command line?

The command line design seems weird to me, like needing to specify the download command. Shouldn't that just be the default behavior if "config", "version", "help", or "update" isn't given? I'd also expect any command line arguments that don't start with - or -- (and any optional values associated with them) to be a list of urls. Here are some examples:

- Download the url using all default settings: `slavartdl <.url1>`
- Download the urls with an output dir: `slavartdl -o /some/output/dir <.url1> <.url2>`
- Download the urls with an output dir and custom config: `slavartdl -o /some/output/dir -C /path/to/myconfig.json <.url1> <.url2> <.url3>`

Also, some commands (like config and version) don't need to be commands at all, but rather flags that enhance output. Instead of a config command that shows you where the config file is located, just move it to --configlocation or something and, when set, include it in the app output. Same with the version command (to --version). This would let you see the config location, the version, and download urls all in the same command line.

For example, `slavartdl --version --configlocation -o /some/output/dir -C /path/to/myconfig.json <.url1> <.url2> <.url3>` would output:

```
Version: v1.1.11
The config file is located at: /path/to/myconfig.json
Output dir set to: /some/output/dir

Getting download link for <.url1>
Downloading zip for <.url1>
Done

Getting download link for <.url2>
Downloading zip for <.url2>
Done

Getting download link for <.url3>
Downloading zip for <.url3>
Done
```

tywil04 commented 1 year ago

> How is this supposed to work? Before, I could run `slavartdl download <URL> -o "<OUTPUT DIR>" --timeout 600 -C "<PATH TO CONFIG JSON>"` and it worked.
>
> Now when I do that I get `Error: failed to read urls from file`. If I append a `-s` flag to the end of the command line, I still get the same error.

This was a small bug that I just fixed; it's in the repo now.

> Also, it says `Stdin urls need to be seperated by a newline`, but how are you supposed to do that on the command line?

The thought behind this was simple: it makes it possible for scripts and code to interact with the tool. In scripts and code, separating strings with newlines is trivial. It's not meant to be used directly in the command line; for that, you would be much better off putting your urls in a text file and reading them that way, or just entering the urls directly on the command line.
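To make the scripting angle concrete, a hedged Go example of driving the tool over a pipe; it assumes the `-s` stdin flag mentioned earlier in this thread, uses placeholder urls, and sets `cmd.Stdin` directly (equivalent in effect to the `Cmd.StdinPipe` approach mentioned above when the list is fixed):

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder urls for illustration.
	urls := []string{
		"https://example.com/track/1",
		"https://example.com/track/2",
	}

	// Feed the urls to slavartdl over stdin, one per line.
	cmd := exec.Command("slavartdl", "download", "-s")
	cmd.Stdin = strings.NewReader(strings.Join(urls, "\n"))

	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("slavartdl failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```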

> The command line design seems weird to me, like needing to specify the download command. Shouldn't that just be the default behavior if "config", "version", "help", or "update" isn't given? I'd also expect any command line arguments that don't start with - or -- (and any optional values associated with them) to be a list of urls. Here are some examples:
>
> - Download the url using all default settings: `slavartdl <.url1>`
> - Download the urls with an output dir: `slavartdl -o /some/output/dir <.url1> <.url2>`
> - Download the urls with an output dir and custom config: `slavartdl -o /some/output/dir -C /path/to/myconfig.json <.url1> <.url2> <.url3>`
>
> Also, some commands (like config and version) don't need to be commands at all, but rather flags that enhance output. Instead of a config command that shows you where the config file is located, just move it to --configlocation or something and, when set, include it in the app output. Same with the version command (to --version). This would let you see the config location, the version, and download urls all in the same command line.
>
> For example, `slavartdl --version --configlocation -o /some/output/dir -C /path/to/myconfig.json <.url1> <.url2> <.url3>` would output:
>
> ```
> Version: v1.1.11
> The config file is located at: /path/to/myconfig.json
> Output dir set to: /some/output/dir
>
> Getting download link for <.url1>
> Downloading zip for <.url1>
> Done
>
> Getting download link for <.url2>
> Downloading zip for <.url2>
> Done
>
> Getting download link for <.url3>
> Downloading zip for <.url3>
> Done
> ```

The design of this tool is in line with my expectations of (and experience with) command line tools; take a look at the Go command line interface, which uses subcommands like version instead of flags.

I personally think the design you provided is clumsy at best. In what scenario would you need the tool to print the current version, print the location of the default config file, select the output directory, override the default config (so the previously printed location is now pointless, because you are overriding it), and then finally download urls from the divolt server? Having separate subcommands for version, config etc. makes the experience much easier for everyone involved.

One thing I will be implementing is printing `Getting download link for URL` (and the same for zipping), because it means the user can know what is being downloaded.

I will also admit, the config subcommand(s) are fiddly and I am looking to replace them with something, although I haven't found an approach that works well at this moment in time.

bacon-cheeseburger commented 1 year ago

> > Also, it says `Stdin urls need to be seperated by a newline`, but how are you supposed to do that on the command line?
>
> The thought behind this was simple: it makes it possible for scripts and code to interact with the tool. In scripts and code, separating strings with newlines is trivial. It's not meant to be used directly in the command line; for that, you would be much better off putting your urls in a text file and reading them that way, or just entering the urls directly on the command line.

Entering the urls on the command line didn't work, though. I see your point about using newlines in scripts cuz I do see that occasionally, but I also see arrays or simple spaces used as separators equally often (if not more).

> The design of this tool is in line with my expectations of (and experience with) command line tools; take a look at the Go command line interface, which uses subcommands like version instead of flags.

Must be a Go thing, which I'm admittedly new to. The vast majority of tools I've used and written over the years follow the design I pointed out, so that's my main frame of reference.

> I personally think the design you provided is clumsy at best. In what scenario would you need the tool to print the current version, print the location of the default config file, select the output directory, override the default config (so the previously printed location is now pointless, because you are overriding it), and then finally download urls from the divolt server? Having separate subcommands for version, config etc. makes the experience much easier for everyone involved.

"Need" is a strong word; "want", for informational or parsing purposes, would be more accurate. The default config file doesn't matter; only printing the currently used config file is useful, be it default or user-defined. One obvious advantage of a command line design like the one I mentioned is that you only need a single app execution to get everything, whereas the current design requires multiple executions to achieve the same result.

What you consider clumsy, many (myself included) consider more robust and functional. Generally, the more flexible your command line is, the better; it's when a command line is too limited or awkward that it's clumsy. Just as a curiosity, I'd be interested to hear from non-coding end users which they find easier or more sensible. I wouldn't be surprised if either of our preferences differed from theirs to some degree.

In the end there are really only three options: live with it (it's not that serious), write a wrapper, or fork and modify it to your liking. Most people would probably opt for the first and call it a day.

tywil04 commented 1 year ago

Sorry if I came off a little harsh; that wasn't the intention.

I feel that having standard input separate urls via newlines is the best default; however, I am open to the idea of adding a flag to change the delimiter.
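The splitting side of such a flag would be trivial; a minimal sketch, assuming a hypothetical `--delimiter` flag name:

```go
package main

import (
	"flag"
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// Hypothetical flag: newline by default, overridable by the user.
	delim := flag.String("delimiter", "\n", "string separating urls on stdin")
	flag.Parse()

	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Split on the chosen delimiter and drop empty entries.
	var urls []string
	for _, part := range strings.Split(string(data), *delim) {
		if part = strings.TrimSpace(part); part != "" {
			urls = append(urls, part)
		}
	}
	fmt.Println("urls:", urls)
}
```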

For now, it would be a hassle to rewrite the structure of the command line, so it won't be happening. However, in future versions (maybe v2) I would consider a different structure. Even then, I still think there is no need (or want) from regular users for the tool to print the current version, print the location of the default config file, select the output directory, override the default config (making the previously printed location pointless), and then download urls from the divolt server (as previously said).

If it does annoy you that much, feel free to fork and/or create a wrapper. I'm all for that, hence the licenses I have chosen; this is what FOSS is about, after all.