[Open] gotmilk0112 opened this issue 4 years ago
Issue addressed in #38.
The exe is not the latest release; unfortunately, you need to use the source, as Wulfre can't make new exe versions anymore.
The Python version doesn't work either. After I install Python and click the .py file, it opens a command prompt that instantly closes before any text appears.
I went through all of #38 and installed the various things, but it still just keeps giving an error about some "yaml" thing.
Same thing happens to me. Closes instantly.
For Windows users: to diagnose the problem, you really should run the .py from a batch file with these commands:

@echo off
cd H:\E621\e621dl-3.1.1\
py -3 e621dl.py
pause

You can change "cd H:\E621\e621dl-3.1.1\" to wherever your e621dl folder is located. (I'm just too lazy to rename my folder to the actual version number.)
This lets you see what the error actually is, so you can get further support. "Closes instantly" tells us nothing, other than that there was an error and that you did not run a bat file with a pause so you could read what happened.
Well he posted a screenshot of what he had, and this is what I have.
You have an error at line 43, column 1 of your config.yaml file. Open it with Notepad++ for ease of use and find the error. It looks like you typed a literal "\t" instead of hitting the Tab key.
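For reference, PyYAML reports the exact line and column for this class of error, and it rejects tab characters in indentation outright. A quick standalone way to reproduce it (illustrative snippet, not e621dl code):

```python
import yaml

# YAML forbids tab characters in indentation; PyYAML points at the
# offending line and column, just like the config.yaml error above.
try:
    yaml.safe_load("searches:\n\tEquine:")
except yaml.YAMLError as exc:
    print(exc)  # e.g. "found character '\t' that cannot start any token"
```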
C:\Python38\python.exe: can't find 'main' module in 'e621dl'
New error.
I run the program itself without using powershell, and it creates the downloads folder but doesn't get any files. It doesn't give an error either, I don't think. Or at least it doesn't show one for a brief second like it used to. I'll try running that bat thing.
When using command prompt it gives this error:
py -3 e621dl.py
[i] Running e621dl version 5.0.0.
[i] Getting config...
[i] Getting posts for search 'Latias'.
Traceback (most recent call last):
File "e621dl.py", line 53, in
That's my mistake. I didn't put in a catch for the user not having something in the blacklist.
I thought the blacklist was pulled from your account when you put in your account info?
Putting in your account info should filter the posts before they even get into the python script. They're two separate implementations of the blacklist. Nothing gets copied from your account into the script. I left the local blacklist as an option for people without accounts.
@darkspyro92 Just pushed a fix that should work when the blacklist is empty. Thanks for finding that for me, and sorry for the trouble that you had.
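A minimal sketch of what such a guard can look like (assumed code, not the actual commit): a bare `blacklist:` key in YAML loads as `None`, so it has to be coerced to an empty list before anything iterates over it.

```python
import yaml

config = yaml.safe_load("blacklist:")  # a bare key loads as {'blacklist': None}

# Coerce None (or a missing key) to an empty list before iterating.
blacklist = [tag.lower() for tag in (config.get("blacklist") or [])]
print(blacklist)  # []
```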
Just like old times, eh? When I used to stress test your program with just about every form of unreasonable thing I could think of. xD
Got another error after a while.
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Python38\lib\http\client.py", line 1322, in getresponse
response.begin()
File "C:\Python38\lib\http\client.py", line 303, in begin
version, status, reason = self._read_status()
File "C:\Python38\lib\http\client.py", line 264, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python38\lib\socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "C:\Python38\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "C:\Python38\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 428, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 335, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\httpx\_utils.py", line 364, in as_network_error
yield
File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
conn = self.pool.urlopen(
File "C:\Python38\lib\site-packages\urllib3\poolmanager.py", line 336, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 724, in urlopen
retries = retries.increment(
File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=rape+date%3A%3E%3D0001-01-01+id%3A%3C2147905 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)"))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\umbre\Pictures\Private\e621dl-master\e621dl.py", line 42, in <module>
posts = remote.get_posts(client, ' '.join(search['tags']), search['start_date'], last_id)
File "C:\Users\umbre\Pictures\Private\e621dl-master\e621dl\remote.py", line 5, in get_posts
response = client.get(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 706, in get
return self.request(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 570, in request
return self.send(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 590, in send
response = self.send_handling_redirects(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 617, in send_handling_redirects
response = self.send_handling_auth(
File "C:\Python38\lib\site-packages\httpx\_client.py", line 654, in send_handling_auth
response = self.send_single_request(request, timeout)
File "C:\Python38\lib\site-packages\httpx\_client.py", line 678, in send_single_request
response = dispatcher.send(request, timeout=timeout)
File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
conn = self.pool.urlopen(
File "C:\Python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Python38\lib\site-packages\httpx\_utils.py", line 368, in as_network_error
raise NetworkError(exc) from exc
httpx._exceptions.NetworkError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=rape+date%3A%3E%3D0001-01-01+id%3A%3C2147905 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='e621.net', port=443): Read timed out. (read timeout=5.0)"))
That's because the connection wasn't established after 5 seconds and I didn't include a retry. I'll see about including functionality for it to retry the connection if it fails the first time, and also to give an easier to read error.
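One way to add such a retry (a sketch under assumptions, not the actual fix; `retry` is a hypothetical helper) is to wrap the request in a loop that backs off between attempts and re-raises only after the last one fails:

```python
import time

def retry(fn, attempts=3, backoff=1.0, exceptions=(Exception,)):
    """Call fn(); on one of the given exceptions, wait and try again.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (attempt + 1))

# Hypothetical usage with httpx (client and url are not defined here):
# response = retry(lambda: client.get(url, timeout=10.0),
#                  exceptions=(httpx.HTTPError,))
```

Raising the timeout above the 5 seconds seen in the traceback would also help on slow connections.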
The blacklist fix seems to have broken blacklists completely. Neither the account BL nor the config BL are filtering my search. That's my test config:
auth:
  username: *********
  api_key: **************************

# Note that if you included your auth above, then your account blacklist will already be applied.
blacklist:
  -friendship_is_magic

search_defaults:
  days: 62
  min_score: -1
  min_fav_count: 0
  allowed_ratings:
    - q
    - e

searches:
  Equine:
    days: 1
    tags:
      - Equine
Double checked login and API key and the filter works on the website.
It works for me. Not sure what it is with yours. Try putting a space between the hyphen and the tag on your blacklist.
Then again, my search didn't say there were any blacklisted tags. When I get home from work I'll try a test as well.
Ok, that fixed the config BL, but my account BL still does nothing.
I'll have to do testing. As far as I know once you provide the API key then searches are automatically filtered with your blacklist, as in the posts that conflict with your blacklist wouldn't even get returned by the API in the first place. Something might have changed with the site update.
Okay, I read the documentation incorrectly. The blacklist is client-side, even on the official website. Using your API key does not automatically apply the blacklist.
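So the filtering has to happen locally. A rough sketch of what client-side blacklist filtering amounts to (illustrative only, not e621dl's actual code):

```python
def is_blacklisted(post_tags, blacklist):
    """Return True if any of the post's tags is on the blacklist."""
    return any(tag in blacklist for tag in post_tags)

# Hypothetical posts as returned by the API:
posts = [
    {"id": 1, "tags": ["equine", "friendship_is_magic"]},
    {"id": 2, "tags": ["equine"]},
]
kept = [p for p in posts if not is_blacklisted(p["tags"], {"friendship_is_magic"})]
print([p["id"] for p in kept])  # [2]
```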
Is there an internal blacklist of things we can't search? Or is it limited to so many searches? I tried to search two artists and it skipped right over them. It said it was searching for posts for the names I put in the tags, but it didn't download anything.
That can happen when there are no posts within the given date range.
I gave a date range of 9999999999999. That worked before on the other searches. Or rather, I did change something. Maybe I'm doing something wrong. I had my default set to that before so I didn't have to add the date range to all the searches. Now it's set to 2 and I have the days set on the new individual searches.
search_defaults:
  days: 2
  min_score: 0
  min_fav_count: 0
  allowed_ratings:
    - e

Youjomodoki:
  days: 99999999999999
  tags:
    -youjomodoki
I just ran a blank config with this and it worked fine.
oh wait
I see the problem. You need a space in the section that says

Youjomodoki:
  days: 99999999999999
  tags:
    -youjomodoki

It should be

Youjomodoki:
  days: 99999999999999
  tags:
    - youjomodoki

It's not looking for a tag because of that missing space.
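The difference is easy to see by feeding both forms to PyYAML (a standalone demonstration, not e621dl code): without the space, the hyphen is just the first character of a plain string instead of a list-item marker.

```python
import yaml

broken = yaml.safe_load("tags:\n  -youjomodoki")
print(broken)  # {'tags': '-youjomodoki'} -- one string, not a list of tags

fixed = yaml.safe_load("tags:\n  - youjomodoki")
print(fixed)   # {'tags': ['youjomodoki']}
```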
You know, I'm usually good about putting those spaces. xD
Okay, so I put the space, and it still skipped over them.
Nevermind. I didn't save the file. Don't worry, I'm not normally this dumb, lol.
So I moved the program to another hard drive and running the .bat works fine manually, but the task scheduler gives an error. Someone said it has something to do with the $PATH.
Does your task scheduler action have a hard path to the file on your old drive?
Is this what you mean?
Try editing the action and set the 'start in' directory to S:\e621-dl\e621dl-master. If I remember correctly, the bat file that I posted in another issue (the one I'm assuming you're using) uses a relative directory, so it's looking for the file in System32 (the default 'start in' directory).
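An alternative that sidesteps the scheduler setting entirely (an assumed addition, not something e621dl actually does): have the script switch to its own directory at startup, so relative paths like "config.yaml" resolve no matter where it was launched from.

```python
import os

# Change the working directory to the folder containing this script, so
# Task Scheduler's default "start in" (C:\Windows\System32) no longer
# matters. (The globals() check just keeps this safe in interactive
# sessions, where __file__ is undefined.)
script_dir = (os.path.dirname(os.path.abspath(__file__))
              if "__file__" in globals() else os.getcwd())
os.chdir(script_dir)
```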
Well that worked. xD Thanks. I wish I knew more about this stuff. Had to move the program to an empty hard drive because it was taking up so much space on my main hard drive. Glad to have this sorted out now.
Now I've got another error. I thought it was the program closing because e621 shut down the search, but now I'm not so sure. I split up searches into three different categories so that it doesn't search so many things in one search.
Any idea what this error is for?
Okay, so I ran the program again just to try it and it worked this time. Not sure why it popped the error last time.
It seemed to happen again. It only happens with my artist searches. Maybe there's too many artist search tags? I'll try splitting them up into two searches as well.
"Read timed out"
This looks like the connection suddenly dropped, either because the e621 server is unstable (unlikely), your ISP has issues (less unlikely), or your hardware is degrading (most probable). Maybe the router is under heavy load, like torrents, or it or your network adapter is about to break. You can check the router by plugging a cable directly into your laptop/PC and setting up internet access manually if needed. You can check the network adapter by replacing it with a Wi-Fi dongle or a USB-Ethernet adapter, or by plugging in an expansion card. Or by using another PC/laptop, of course. If this keeps happening, you can check indirectly by noticing whether sites don't always fully load on the first try and need a reload from time to time. AFAIK at least one person here had the same problem.
That would be me, but I was getting "ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host" errors when my mobo NIC was dying. Never got "Read timed out" errors.
https://puu.sh/FoqAk/a00cc0a471.png
Keeps giving that error. Using the newest release.