Closed kmuncie closed 1 year ago
This is a great idea and a use case that I had not considered. I'd definitely be open to a PR if you want to give it a shot.
I think you can just change `source_statuses += item.split(",")` to `source_statuses += filter_status(item.split(","))`:
https://github.com/tommeagher/heroku_ebooks/blob/3dfd3c1715e6c31d9f7fc4933451c7cb11c881fc/ebooks.py#L150-L151
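For illustration, here is a minimal sketch of what that change would do, assuming `filter_status` strips @-mentions from each status string (the real function in `ebooks.py` does more; this stand-in is hypothetical):

```python
import re

def filter_status(statuses):
    """Hypothetical stand-in for ebooks.py's filter_status:
    remove @-mentions from each status string."""
    return [re.sub(r"@\w+", "", s).strip() for s in statuses]

# A static-source line is comma-separated; each piece gets filtered
item = "hello @friend,nice toot @pal,plain text"
source_statuses = []
source_statuses += filter_status(item.split(","))
print(source_statuses)  # each comma-separated status, mentions removed
```

With this change, static sources pass through the same mention-stripping path as API tweets before being added to `source_statuses`.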
Or alternatively, just run `filter_status` after `ebook_status` is generated:
https://github.com/tommeagher/heroku_ebooks/blob/3dfd3c1715e6c31d9f7fc4933451c7cb11c881fc/ebooks.py#L202-L203
Archiving this repo and closing this issue as WONTFIX
Because of the Twitter API limits and my source accounts having over 40,000 tweets, I have ended up combining the archives of my source accounts into one large CSV. I wish that static sources were treated as a first-class source instead of only being treated as a testing source.
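Combining several archive exports into one large CSV can be sketched roughly as below. The column layout and file names here are hypothetical; real Twitter archive CSVs differ, but the approach (keep one header, append the rest) is the same:

```python
import csv

def combine_archives(paths, out_path):
    """Merge several archive CSVs (assumed to share a header row)
    into one file, writing the header only once."""
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = None
        for path in paths:
            with open(path, newline="", encoding="utf-8") as f:
                reader = csv.reader(f)
                header = next(reader)  # skip per-file header
                if writer is None:
                    writer = csv.writer(out)
                    writer.writerow(header)
                for row in reader:
                    writer.writerow(row)
```

This keeps the merged file usable as a single static source for the bot.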
The main issue I have encountered is that `filter_status` is not being run on static sources, which results in some awkward @'s. Could each string from static sources be run through `filter_status` the same as API tweets and toots? I have little experience with Python, but I am happy to take a stab at a PR if that is preferred.
Thanks