bellingcat / auto-archiver

Automatically archive links to videos, images, and social media content from Google Sheets (and more).
https://pypi.org/project/auto-archiver/
MIT License

Feature Request: Allow local text or TSV files instead of Google Spreadsheets #148

Open · kkarhan opened 1 month ago

kkarhan commented 1 month ago

Hi, as I asked on the fediverse, there's a not-so-insignificant need to allow self-hosting, which the project admittedly doesn't support as of now since it depends on Google Spreadsheets as input.

I sincerely hope this will help your project going forward and if needed I'll gladly provide samples of sites that one may want to archive.

Yours faithfully, Kevin Karhan

GalenReich commented 1 month ago

Hi kkarhan, thanks for opening the issue - this is something we may look at and would welcome pull requests to add a TSV feeder.

Currently we do support using a command-line feeder (cli_feeder) if you want to bypass the Google dependency in the interim - you can set this in your orchestration.yaml.
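
For example, switching the feeder is a one-line change in the steps section of orchestration.yaml - a minimal sketch, assuming the layout of the example config (the commented line is just a placeholder for whatever your existing file already contains):

```yaml
steps:
  feeder: cli_feeder   # instead of gsheet_feeder
  # archivers, enrichers, formatter, storages, databases: keep your existing entries
```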

We are planning to work on the auto-archiver's documentation, so hopefully that will help with configuring it correctly for different workflows.

msramalho commented 3 weeks ago

Hey @kkarhan thanks for the clear issue and suggestion.

Adding to Galen's answer: for now we have only implemented two main feeders: Google Sheets and command line. Internally, that covers all our needs, so this is not something we will work on ourselves at the moment (adding the wontfix label).

Still, we'll leave this issue open for a while in case you or others find it a valuable addition and want to contribute it to the project.

kkarhan commented 3 weeks ago

Thanks so far for the feedback and keeping the issue open.

Is there any conclusive documentation re: cli_feeder?

Because if it's similar to wget & curl, I could just iterate over things that way...

msramalho commented 2 weeks ago

No good documentation on it unfortunately.

If you look at the code (https://github.com/bellingcat/auto-archiver/blob/b166d57e61285dba585ca3bfd3af2acfb5696501/src/auto_archiver/feeders/cli_feeder.py#L17-L24), it essentially expects a --cli_feeder.urls parameter. An example call would be: python -m src.auto_archiver --config secrets/orchestration.yaml --cli_feeder.urls="https://example.com,https://example2.wow"
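
Until a file-based feeder exists, a rough workaround in that spirit (not an official feature - it just assembles that same parameter from a hypothetical urls.txt with one URL per line) would be:

```bash
python -m src.auto_archiver --config secrets/orchestration.yaml --cli_feeder.urls="$(paste -sd, urls.txt)"
```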

What I'd suggest is that you either create a new, very similar feeder that accepts a filename instead of a comma-separated list of hardcoded URLs, OR modify the cli_feeder to take an additional parameter just for filenames and require that at least one of the two is present.

This should not be hard to achieve, assuming you've been able to run/test the auto-archiver locally in your development environment.
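
To make the first option concrete, here is a minimal, hypothetical sketch of the file-reading side of such a feeder. Nothing in it exists in auto-archiver today: the function name, the one-record-per-line / TSV-first-column assumption, and the '#' comment convention are all placeholders. A new feeder (or an extended cli_feeder) would then yield one item per URL returned by a helper like this, the same way cli_feeder currently iterates over its comma-separated urls parameter.

```python
# Hypothetical helper for a local text/TSV feeder (not part of auto-archiver).
from pathlib import Path
from typing import Iterator


def read_urls_from_file(path: str, column: int = 0, delimiter: str = "\t") -> Iterator[str]:
    """Yield URLs from a local text or TSV file, one record per line.

    For TSV input, the URL is taken from `column` (0-based).
    Blank lines and lines starting with '#' are skipped.
    """
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(delimiter)
        if column >= len(fields):
            continue
        url = fields[column].strip()
        if url.startswith(("http://", "https://")):
            yield url
```

Wiring it up would then mostly be a matter of copying cli_feeder, adding a config option for the filename, and yielding the URLs from this helper where cli_feeder currently yields its hardcoded list.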

msramalho commented 2 weeks ago

*this would be preferable to piping, given the current software architecture of the library.