Closed — TjohAGq6VQWLt7gKMo closed this issue 6 years ago
This is a network issue. To avoid it in the future, set download.timeout in rip.properties to a smaller value (around 5000 is fine).
Hello,
Here is my rip.properties config file:
# Download threads to use per ripper
threads.size = 5
# Overwrite existing files
file.overwrite = false
# Number of retries on failed downloads
download.retries = 5
# File download timeout (in milliseconds)
download.timeout = 10000
# Page download timeout (in milliseconds)
page.timeout = 5000
# Maximum size of downloaded files in bytes (required)
download.max_size = 104857600
# Don't retry on 404 errors
error.skip404 = true
# API creds
twitter.auth = VW9Ybjdjb1pkd2J0U3kwTUh2VXVnOm9GTzVQVzNqM29LQU1xVGhnS3pFZzhKbGVqbXU0c2lHQ3JrUFNNZm8=
tumblr.auth = JFNLu3CbINQjRdUvZibXW9VpSEVYYtiPJ86o8YmvgLZIoKyuNX
gw.api = gonewild
twitter.max_requests = 10
clipboard.autorip = false
download.save_order = false
album_titles.save = false
remember.url_history = false
window.position = false
descriptions.save = false
prefer.mp4 = true
auto.update = false
log.level = Log level: Error
play.sound = false
download.show_popup = false
log.save = false
urls_only.save = false
When processing the Reddit account provided, Ripme hangs indefinitely. It doesn't appear to time out after 5000 or 10000 milliseconds per my config; when I grabbed the above snippet, Ripme had been sitting in the same place for over an hour.
Maybe I am misunderstanding the issue, but shouldn't Ripme try to process each URL for up to 10000 milliseconds and then, after the configured number of retries, proceed to the next URL?
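The retry-then-move-on behavior the user expects can be sketched roughly as follows. This is a hypothetical helper, not ripme's actual code; the names `attempt`, `retries`, and `timeoutMillis` are illustrative stand-ins for the `download.retries` and `download.timeout` settings from rip.properties.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RetryWithTimeout {
    // Hypothetical helper: run a task, giving each attempt at most
    // timeoutMillis, and retry up to `retries` extra times before
    // giving up -- the behavior described in the comment above.
    public static <T> T attempt(Callable<T> task, int retries, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int i = 0; i <= retries; i++) {
                Future<T> f = pool.submit(task);
                try {
                    return f.get(timeoutMillis, TimeUnit.MILLISECONDS); // success
                } catch (TimeoutException e) {
                    f.cancel(true); // attempt exceeded the timeout: retry
                } catch (Exception e) {
                    // attempt failed outright: retry as well
                }
            }
            return null; // all attempts exhausted: caller moves to the next URL
        } finally {
            pool.shutdownNow();
        }
    }
}
```

With this shape, a URL that stalls would cost at most (retries + 1) × timeoutMillis before Ripme moves on, rather than hanging indefinitely.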
Ripme doesn't appear to time out after 5000 or 10000 milliseconds per my config. When I grabbed the above snippet, Ripme had been sitting in the same place for over an hour.
That ought not to happen.
Maybe I am misunderstanding the issue, but shouldn't Ripme try to process each URL for up to 10000 milliseconds and then, after the configured number of retries, proceed to the next URL?
Yes, it should. Testing it with ripme 1.7.60, however, it seems the timeout is being ignored.
Thank you for taking the time to check, much appreciated.
Is it safe to assume there is an issue with Ripme, or am I doing something incorrectly?
Doing a bit of reading, it looks like the reason the timeout isn't working here is that ripme calls setConnectTimeout() but not setReadTimeout(). This means the timeout only fires if ripme can't connect to the server in time; once connected, a read from a stalled server can block forever.
Is it safe to assume there is an issue with Ripme, or am I doing something incorrectly?
This is a bug in ripme.
Thank you very much!
I've written a fix for this; it will be in ripme 1.7.60, which will be out in 3 days.
v1.7.60
Ubuntu 16.04.5 x86_64
https://www.reddit.com/user/mrsmeeseeks
Expected Behavior
Ripme to rip the Reddit user.
Actual Behavior
Ripme begins ripping the Reddit account; however, after processing ~143 URLs, no further output is printed to the console and Ripme never progresses.
The snippet below is the last of the lines printed to the console.