becks0815 opened 2 years ago
A quick and dirty fix is to add the lorem package to the requirements:

pip install lorem

and add in main.py:

import lorem
import time
In the area where the comments are deleted, change:

for comment in tqdm(reddit.redditor(args.username).comments.new(limit=None),
                    desc="1000 most recent comments", unit=" comments"):
    if not check_submission_date(comment) and not check_submission_subreddit(comment) and not args.dry_run:
        comment.delete()
to something like:

for comment in tqdm(reddit.redditor(args.username).comments.new(limit=None),
                    desc="1000 most recent comments", unit=" comments"):
    if not check_submission_date(comment) and not check_submission_subreddit(comment) and not args.dry_run:
        comment.edit(lorem.sentence())
        time.sleep(3)
        comment.delete()
        time.sleep(10)
That's a good idea - your implementation would slow down execution by quite a lot, but I'll definitely look into implementing it!
Hi, it is true that the deletion process slows down a lot. But you can schedule the deletions, and then it doesn't really matter how long it runs. You could even add a command line option to skip overwriting the data first.
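A minimal sketch of such a command line option, using argparse; the flag name "--skip-overwrite" is made up here and not part of the existing script:

```python
import argparse

# Hypothetical flag to skip the overwrite pass and delete directly.
parser = argparse.ArgumentParser()
parser.add_argument("--skip-overwrite", action="store_true",
                    help="delete comments directly, without overwriting them first")

args = parser.parse_args(["--skip-overwrite"])
# later in the deletion loop:
#   if not args.skip_overwrite:
#       comment.edit(lorem.sentence())
print(args.skip_overwrite)  # prints True
```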
And if you are interested: I have used this code to create some additional tools. They can extract essential data from an account (saved comments and saved submissions, plus the list of subscribed subreddits), upload/attach this data to another account, unsubscribe an account from all subreddits, and finally erase the list of saved comments/submissions from an account.
That's a good point about adding a command line option. The solution I've come up with does the process in two steps - first overwrite all the comments, then wait a moment, then delete them. This eliminates the need for the time.sleep calls that were slowing down the initial implementation.
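The two-pass flow described above can be sketched as follows; FakeComment is only a stand-in for a praw comment object (with real praw you would cache the comments in a list, as here, and call edit/delete on each):

```python
import time

# Stand-in for a praw comment, used only to illustrate the two-pass flow.
class FakeComment:
    def __init__(self, body):
        self.body = body
        self.deleted = False

    def edit(self, new_body):
        self.body = new_body

    def delete(self):
        self.deleted = True

comments = [FakeComment("first"), FakeComment("second")]

# Pass 1: overwrite every comment with filler text.
for c in comments:
    c.edit("lorem ipsum")

time.sleep(0.1)  # wait a moment before deleting (much longer in practice)

# Pass 2: delete the now-overwritten comments.
for c in comments:
    c.delete()
```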
I'm also very interested in seeing the additional tools you've made!
Thank you.
I will omit the parts that initialize praw and link it to an account. Also, you need to copy the username from args.username to username to make it work. I decided to apply a username.lower() conversion to all entries, because your reddit username is not case sensitive, but the program will ask for the credentials again if you mix upper-/lowercase while running it.
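The normalisation mentioned above is simply a lowercase conversion; since reddit usernames are not case sensitive, comparing lowercased forms avoids spurious mismatches:

```python
# Reddit usernames are case insensitive, so store and compare them lowercased.
def normalize(username):
    return username.lower()

print(normalize("Becks0815") == normalize("becks0815"))  # prints True
```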
Show subscriptions:
for subreddit in reddit.user.subreddits(limit=None):
    print(str(subreddit))
You just need to pipe the output into a text file.
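For example, from the shell (the invocation is hypothetical - here the output is faked with printf so the pipeline is runnable end to end; in practice the first line would run the loop above and redirect its output):

```shell
# In practice: python3 show_subs.py > subreddit_subscripe.txt
printf 'python\nlearnpython\npython\n' > subreddit_subscripe.txt
# Optional: drop duplicates in the shell instead of pandas.
sort -u subreddit_subscripe.txt -o subreddit_subscripe.txt
```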
Subscribe: Then you can read the file, drop duplicated entries, and subscribe. I am using pandas to create a dataframe, drop the dupes, and then run the subscribe routine. Maybe you want to enhance it to first read in the existing subscriptions, remove duplicates, and then only add the missing ones, but this works:
import pandas as pd

# Buffer list of subreddits in a dataframe
df = pd.read_csv("./subreddit_subscripe.txt", header=None)
df = df.drop_duplicates()
df = df.rename(columns={0: 'reddit'})
for idx, row in df.iterrows():
    print(row['reddit'])
    reddit.subreddit(row['reddit']).subscribe()
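The suggested enhancement - only subscribing to the missing subreddits - boils down to a set difference. The helper below is pure set logic; with praw, `existing` would come from reddit.user.subreddits(limit=None) and each missing name would be passed to reddit.subreddit(name).subscribe():

```python
# Sketch: compute which subreddits still need subscribing, ignoring case.
def missing_subreddits(wanted, existing):
    wanted = {s.lower() for s in wanted}
    existing = {s.lower() for s in existing}
    return sorted(wanted - existing)

print(missing_subreddits(["Python", "learnpython", "python"], ["python"]))
# prints ['learnpython']
```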
Unsubscribe:
for subreddit in reddit.user.subreddits(limit=None):
    subreddit.unsubscribe()  # subreddit is already a Subreddit object
Export saved comments:
f = open('./saved.txt', 'a')
i = 0
# limit=None fetches everything, not just the first page of results
for comment in reddit.redditor(username).saved(limit=None):
    i = i + 1
    f.write(comment.id)
    f.write("\n")
f.close()
print("added %s entries to saved.txt" % (i))
Import saved comments:
import pandas as pd

# Buffer list of comments/submissions in a dataframe
# Comments have a 7 character ID, submissions a 6 character ID
df = pd.read_csv("./saved.txt", header=None)
df = df.drop_duplicates()
df = df.rename(columns={0: 'reddit'})
print("Importing %s comments and submissions" % (len(df)))
for idx, row in df.iterrows():
    print(row['reddit'])
    if len(row['reddit']) == 7:
        try:
            comment = reddit.comment(row['reddit'])
            comment.save()
        except Exception:
            print("import failed")
    if len(row['reddit']) == 6:
        try:
            submission = reddit.submission(row['reddit'])
            submission.save()
        except Exception:
            print("import failed")
If I understand the code correctly, comments and posts are simply deleted. However, since reddit does not really delete content but just hides it from you, I would prefer to overwrite everything with random data first and then delete the comment afterwards. The original shreddit script did this.
Maybe this could be implemented?