Wulfre / e621dl

An automated download script for e621.net.

An existing connection was forcibly closed by the remote host #45

Closed RedFoxNightfox closed 4 years ago

RedFoxNightfox commented 4 years ago

Getting a lot of these errors all of a sudden.

Latest one:

```


  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "C:\Python38\lib\http\client.py", line 1322, in getresponse
    response.begin()
  File "C:\Python38\lib\http\client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "C:\Python38\lib\http\client.py", line 264, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Python38\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "C:\Python38\lib\ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Python38\lib\ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 364, in as_network_error
    yield
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\site-packages\urllib3\poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+uyu+date%3A%3E%3D2020-04-03+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "e621dl.py", line 42, in <module>
    posts = remote.get_posts(client, ' '.join(search['tags']), search['start_date'], last_id)
  File "H:\E621\e621dl-3.1.1\e621dl\remote.py", line 5, in get_posts
    response = client.get(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 706, in get
    return self.request(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 570, in request
    return self.send(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 590, in send
    response = self.send_handling_redirects(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 617, in send_handling_redirects
    response = self.send_handling_auth(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 654, in send_handling_auth
    response = self.send_single_request(request, timeout)
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 678, in send_single_request
    response = dispatcher.send(request, timeout=timeout)
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 368, in as_network_error
    raise NetworkError(exc) from exc
httpx._exceptions.NetworkError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+uyu+date%3A%3E%3D2020-04-03+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))
Press any key to continue . . .
```
RedFoxNightfox commented 4 years ago

Do API calls need to be adjusted again?

Wulfre commented 4 years ago

Is this issue still happening for you or was it a temporary outage? It looks like it's working for me when I tested it recently.

RedFoxNightfox commented 4 years ago

Just tried it again a few minutes ago. It will run for a bit and then do the same thing.

RedFoxNightfox commented 4 years ago

It usually happens when the script skips a bunch of already downloaded files, it seems.

Wulfre commented 4 years ago

I just tested it with the script skipping about 1000 files and it worked. Since the connection is being closed by the host, I can only imagine that the script is being rate limited or banned, but neither of those should be true since I'm using a proper user agent and purposely slowing down the request rate. There might be something that I'm missing.

EDIT: Additionally, I don't think the skipping of already downloaded files should have anything to do with it since the filtering all happens locally. The API calls only happen every 320 posts.
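
For anyone hitting the same reset in the meantime, one workaround is to retry the failed request with backoff instead of crashing. This is only a sketch, not part of e621dl: `get_with_retry` and its parameters are made up for illustration, and `ConnectionResetError` stands in for the `httpx` `NetworkError` seen in the traceback above.

```python
import time

# Exceptions worth retrying; in the script's httpx version the wrapped error
# would be httpx.NetworkError (an assumption based on the traceback above).
RETRYABLE = (ConnectionResetError, ConnectionError, OSError)

def get_with_retry(get, attempts=4, base_delay=2.0):
    """Call get() and retry with exponential backoff on connection resets.

    `get` is any zero-argument callable that performs the request,
    e.g. lambda: client.get(url, params=params).
    """
    for attempt in range(attempts):
        try:
            return get()
        except RETRYABLE:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 2 s, 4 s, 8 s, ...
```

Wrapping only the `posts.json` call this way would at least tell us whether the resets are transient (a retry succeeds) or persistent (all attempts fail the same way).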

lurkbbs commented 4 years ago

Just to be sure, does e621.net work from a browser, especially in incognito? Do you use a proxy or a VPN of some kind? Also, was the script changed since its last successful launch? Was the config? Essentially, has anything related to the script or your network changed?

RedFoxNightfox commented 4 years ago

e621.net does work from the browser, yes, including in incognito mode. I don't use a proxy or a VPN.

No, the script was not changed since last successful launch, nor the config.

Also, the script works for a short time, a minute or so, before it gives the "An existing connection was forcibly closed by the remote host" error.

RedFoxNightfox commented 4 years ago

When I get the error is a bit sporadic, but here is the full output of the last time I ran it, seconds before posting this:

```


[i] Getting config...
[i] Getting posts for search '_artist/aaron_(artist)'.
[i] Post 2240518 was already downloaded.
[i] Post 2235273 was already downloaded.
[i] Post 2229637 was already downloaded.
[i] Post 2229514 was already downloaded.
[i] Post 2225225 was already downloaded.
[i] Post 2222394 was already downloaded.
[i] Post 2220281 was already downloaded.
[i] Post 2215188 was already downloaded.
[i] Post 2212338 was already downloaded.
[i] Post 2202414 was already downloaded.
[i] Getting posts for search '_artist/badgengar'.
[i] Post 2220609 was already downloaded.
[i] Getting posts for search '_artist/blargsnarf'.
[i] Getting posts for search '_artist/celestial'.
[i] Post 2226985 was already downloaded.
[✗] Post 2226214 was skipped for having a mismatched rating.
[i] Post 2223998 was already downloaded.
[i] Post 2223997 was already downloaded.
[i] Post 2223937 was already downloaded.
[i] Post 2223915 was already downloaded.
[✗] Post 2223898 was skipped for having a mismatched rating.
[i] Post 2223879 was already downloaded.
[✗] Post 2223829 was skipped for having a mismatched rating.
[i] Post 2223789 was already downloaded.
[i] Post 2220619 was already downloaded.
[i] Post 2203807 was already downloaded.
[i] Getting posts for search '_artist/chasm-006'.
[i] Post 2227135 was already downloaded.
[i] Post 2225565 was already downloaded.
[i] Post 2206599 was already downloaded.
[i] Post 2206575 was already downloaded.
[i] Post 2202440 was already downloaded.
[i] Getting posts for search '_artist/cheetahpaws'.
[✗] Post 2223008 was skipped for having a mismatched rating.
[i] Post 2220840 was already downloaded.
[i] Getting posts for search '_artist/citrus_doodles'.
[i] Post 2217378 was already downloaded.
[i] Post 2207848 was already downloaded.
[i] Post 2207846 was already downloaded.
[i] Getting posts for search '_artist/crunchyspoon'.
[i] Post 2219406 was already downloaded.
[i] Post 2219397 was already downloaded.
[i] Getting posts for search '_artist/ctw36'.
[i] Getting posts for search '_artist/dacad'.
[i] Post 2233477 was already downloaded.
[i] Post 2233475 was already downloaded.
[i] Post 2227298 was already downloaded.
[i] Post 2217675 was already downloaded.
[i] Post 2217669 was already downloaded.
[i] Post 2216680 was already downloaded.
[i] Post 2205710 was already downloaded.
[i] Getting posts for search '_artist/dalley-the-alpha'.
[i] Post 2233148 was already downloaded.
[i] Getting posts for search '_artist/dark-moltres'.
[i] Post 2238759 was already downloaded.
[i] Post 2238757 was already downloaded.
[i] Post 2216636 was already downloaded.
[i] Getting posts for search '_artist/dgemporium'.
Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "C:\Python38\lib\http\client.py", line 1322, in getresponse
    response.begin()
  File "C:\Python38\lib\http\client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "C:\Python38\lib\http\client.py", line 264, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Python38\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "C:\Python38\lib\ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Python38\lib\ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 364, in as_network_error
    yield
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\site-packages\urllib3\poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+dgemporium+date%3A%3E%3D2020-04-04+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "e621dl.py", line 42, in <module>
    posts = remote.get_posts(client, ' '.join(search['tags']), search['start_date'], last_id)
  File "H:\E621\e621dl-3.1.1\e621dl\remote.py", line 5, in get_posts
    response = client.get(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 706, in get
    return self.request(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 570, in request
    return self.send(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 590, in send
    response = self.send_handling_redirects(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 617, in send_handling_redirects
    response = self.send_handling_auth(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 654, in send_handling_auth
    response = self.send_single_request(request, timeout)
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 678, in send_single_request
    response = dispatcher.send(request, timeout=timeout)
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 368, in as_network_error
    raise NetworkError(exc) from exc
httpx._exceptions.NetworkError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+dgemporium+date%3A%3E%3D2020-04-04+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))
Press any key to continue . . .
```
RedFoxNightfox commented 4 years ago

Honestly, this error is happening without rhyme or reason anymore. The only consistent thing about it is how inconsistent it is as to WHEN it happens; it just happens every time now.

Wulfre commented 4 years ago

It looks like rate limiting to me, but I can't request posts fast enough to get the same error. I'm fairly sure that I put in enough of a wait to prevent that from happening in the first place.
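
The "wait" approach can be sketched as a small client-side throttle; this is an illustration, not e621dl's actual code, and the one-request-per-second figure is a conservative assumption rather than anything from the script or the API docs.

```python
import time

class Throttle:
    """Enforce a minimum interval between API requests (client-side rate limiting)."""

    def __init__(self, min_interval=1.0):
        # ~1 request/second is a conservative guess, not a figure from e621dl.
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to keep at least min_interval between calls."""
        remaining = self.min_interval - (time.monotonic() - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()
```

Calling `throttle.wait()` right before each `client.get(...)` bounds the request rate no matter how many posts are filtered or skipped locally between API calls.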

RedFoxNightfox commented 4 years ago

If it helps, this is a portion of my config.yaml so you can try to replicate:

```

blacklist:
    - vore
    - scat
    - gore
    - watersports
    - diaper
    - guru
    - penis_tongue
    - inflation
    - five_nights_at_freddy's
    - fart
    - mass_effect
    - silent_hill
    - rape
    - monkey
    - blood
    - censored
    - ape
    - gorilla
    - macro
    - micro
    - monster
    - warcraft
    - xenomorph
    - halo_(series)
    - grotesque
    - infestation
    - maggots
    - nightmare_fuel
    - parasite
    - where_is_your_god_now
    - asphyxiation
    - noose
    - snuff
    - execution
    - death
    - vomit
    - smegma
    - nipple_penetration
    - cock_vore
    - anal_vore
    - absorption_vore
    - unbirthing
    - creepy
    - wetting
    - clown
    - nipple_mouth
    - zombie
    - dicknipples
    - fallout
    - primate
    - quadruple_amputee
    - deathclaw
    - starcraft
    - monster_hunter
    - scp_foundation

search_defaults:
    days: 30
    min_score: -99999999
    min_fav_count: 0
    allowed_ratings:
        - q
        - e

# The most common search structure has already been exemplified, but you may overwrite any of the default search settings for a specific search.
#
# searches:
#   dogs:
#       days: 30
#       min_score: 10
#       min_fav_count: 10
#       allowed_ratings:
#           - s
#           - q
#           - e
#       tags:
#           - dog
#           - brown_fur

searches:
    _artist/aaron_(artist):
        tags:
            - -type:gif
            - aaron_(artist)
    _artist/badgengar:
        tags:
            - -type:gif
            - badgengar
    _artist/blargsnarf:
        tags:
            - -type:gif
            - blargsnarf
    _artist/celestial:
        tags:
            - -type:gif
            - celestial
    _artist/chasm-006:
        tags:
            - -type:gif
            - chasm-006
    _artist/cheetahpaws:
        tags:
            - -type:gif
            - cheetahpaws
    _artist/citrus_doodles:
        tags:
            - -type:gif
            - citrus_doodles
    _artist/crunchyspoon:
        tags:
            - -type:gif
            - crunchyspoon
    _artist/ctw36:
        tags:
            - -type:gif
            - ctw36
    _artist/dacad:
        tags:
            - -type:gif
            - dacad
    _artist/dalley-the-alpha:
        tags:
            - -type:gif
            - dalley-the-alpha
    _artist/dark-moltres:
        tags:
            - -type:gif
            - dark-moltres
    _artist/dgemporium:
        tags:
            - -type:gif
            - dgemporium
    _artist/dryadex:
        tags:
            - -type:gif
            - dryadex
    _artist/elchilenito:
        tags:
            - -type:gif
            - elchilenito
    _artist/emenius:
        tags:
            - -type:gif
            - emenius
    _artist/eroborus:
        tags:
            - -type:gif
            - eroborus
    _artist/erostud:
        tags:
            - -type:gif
            - erostud
    _artist/excito:
        tags:
            - -type:gif
            - excito
    _artist/fluffydonuts:
        tags:
            - -type:gif
            - fluffydonuts
    _artist/forsaken_(artist):
        tags:
            - -type:gif
            - forsaken_(artist)
    _artist/foxxfire:
        tags:
            - -type:gif
            - foxxfire
    _artist/fred_perry:
        tags:
            - -type:gif
            - fred_perry
```
lurkbbs commented 4 years ago

I checked the e621 API docs again; they say:

"Hitting the rate limit will result in a 503 HTTP response code."

So rate limiting is less likely now.

This whole thing is strange and irregular, and that could mean a hardware problem with your router or network card (wired or wireless). Generally that would show up as minor problems with sites once in a while, solved by a reload, since most browsers either ignore such errors or retry anyway.

Try rebooting everything, connecting to your ISP without the router (if any), and trying another laptop/PC if possible. Try connecting through your mobile ISP. In general, try anything that could rule out a hardware failure.
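
Since the API signals rate limiting with an HTTP 503 while the failures here are TCP-level resets, the two cases can be told apart in code. A minimal sketch, with the `fetch_json` helper and its parameters being hypothetical rather than part of e621dl; it works with any response object exposing `status_code` and `json()`, as httpx responses do:

```python
import time

def fetch_json(get, retries=3, backoff=5.0):
    """GET with explicit handling for HTTP 503 rate-limit responses.

    `get` is a zero-argument callable returning a response object with
    .status_code and .json() (httpx and requests responses both fit).
    A connection reset never reaches the 503 branch: it raises before any
    status code exists, which is how it differs from server rate limiting.
    """
    for attempt in range(retries):
        response = get()
        if response.status_code == 503:  # rate limited, per the API docs
            time.sleep(backoff * (attempt + 1))
            continue
        return response.json()
    raise RuntimeError("still rate limited after retries")
```

If the script ever logged a 503 here, rate limiting would be confirmed; a raw reset with no status code points elsewhere.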

Wulfre commented 4 years ago

Ah, @lurkbbs is right; I forgot about the new responses. Just for sanity, I ran @RedFoxNightfox's config 5 times and didn't have any issues. There is almost definitely something external going on with your system.

RedFoxNightfox commented 4 years ago

Okay, I will have to look into it then. I only have the one PC, no mobile ISP, and I am on a wired connection, so I will have some work ahead of me to see what's going on.

EDIT: I do have to have an ISP tech come out and find out why my upstream channels are out of dBmV range again; not sure if that is causing the problem or not, though. (Upstream has been unstable again as of late due to this.)

RedFoxNightfox commented 4 years ago

If unstable upstream can cause this error, then hopefully it will get fixed tomorrow when the ISP tech comes.

RedFoxNightfox commented 4 years ago

The ISP came out to fix the unstable upstream, so everything should be fixed, but I am still getting this:

```


[i] Getting config...
[i] Getting posts for search '_artist/aaron_(artist)'.
[i] Post 2240518 was already downloaded.
[i] Post 2235273 was already downloaded.
[i] Post 2229637 was already downloaded.
[i] Post 2229514 was already downloaded.
[i] Post 2225225 was already downloaded.
[i] Post 2222394 was already downloaded.
[i] Post 2220281 was already downloaded.
[i] Post 2215188 was already downloaded.
[i] Post 2212338 was already downloaded.
[i] Getting posts for search '_artist/badgengar'.
[i] Post 2220609 was already downloaded.
[i] Getting posts for search '_artist/blargsnarf'.
[i] Getting posts for search '_artist/celestial'.
[i] Post 2226985 was already downloaded.
[✗] Post 2226214 was skipped for having a mismatched rating.
[i] Post 2223998 was already downloaded.
[i] Post 2223997 was already downloaded.
[i] Post 2223937 was already downloaded.
[i] Post 2223915 was already downloaded.
[✗] Post 2223898 was skipped for having a mismatched rating.
[i] Post 2223879 was already downloaded.
[✗] Post 2223829 was skipped for having a mismatched rating.
[i] Post 2223789 was already downloaded.
[i] Post 2220619 was already downloaded.
[i] Post 2203807 was already downloaded.
[i] Getting posts for search '_artist/chasm-006'.
[i] Post 2227135 was already downloaded.
[i] Post 2225565 was already downloaded.
[i] Post 2206599 was already downloaded.
[i] Post 2206575 was already downloaded.
[i] Getting posts for search '_artist/cheetahpaws'.
[✗] Post 2223008 was skipped for having a mismatched rating.
[i] Post 2220840 was already downloaded.
[i] Getting posts for search '_artist/citrus_doodles'.
[i] Post 2217378 was already downloaded.
[i] Post 2207848 was already downloaded.
[i] Post 2207846 was already downloaded.
[i] Getting posts for search '_artist/crunchyspoon'.
Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "C:\Python38\lib\http\client.py", line 1322, in getresponse
    response.begin()
  File "C:\Python38\lib\http\client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "C:\Python38\lib\http\client.py", line 264, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Python38\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "C:\Python38\lib\ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Python38\lib\ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 364, in as_network_error
    yield
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\site-packages\urllib3\poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+crunchyspoon+date%3A%3E%3D2020-04-05+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "e621dl.py", line 42, in <module>
    posts = remote.get_posts(client, ' '.join(search['tags']), search['start_date'], last_id)
  File "H:\E621\e621dl-3.1.1\e621dl\remote.py", line 5, in get_posts
    response = client.get(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 706, in get
    return self.request(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 570, in request
    return self.send(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 590, in send
    response = self.send_handling_redirects(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 617, in send_handling_redirects
    response = self.send_handling_auth(
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 654, in send_handling_auth
    response = self.send_single_request(request, timeout)
  File "C:\Python38\lib\site-packages\httpx\_client.py", line 678, in send_single_request
    response = dispatcher.send(request, timeout=timeout)
  File "C:\Python38\lib\site-packages\httpx\_dispatch\urllib3.py", line 98, in send
    conn = self.pool.urlopen(
  File "C:\Python38\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Python38\lib\site-packages\httpx\_utils.py", line 368, in as_network_error
    raise NetworkError(exc) from exc
httpx._exceptions.NetworkError: HTTPSConnectionPool(host='e621.net', port=443): Max retries exceeded with url: /posts.json?limit=320&tags=-type%3Agif+crunchyspoon+date%3A%3E%3D2020-04-05+ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))
Press any key to continue . . .
```
RedFoxNightfox commented 4 years ago

The ISP says everything is fine on my end, so @lurkbbs's advice seems not to have helped.

RedFoxNightfox commented 4 years ago

I have run every troubleshooter I could think of, reset the network, and checked every log I could think of, and everything points to nothing being wrong with my router or my motherboard's network adapter. My network does not appear to be the problem.

However, I have found something interesting.

The error only occurs when I run my .bat script for the .py file in Command Prompt. I just tried the .bat script in Windows PowerShell, and it's running just fine.

Why?

Edit: To be clear, I normally just have a shortcut to the .bat file that I click; it opens in Command Prompt and has always worked fine before the errors started.

RedFoxNightfox commented 4 years ago

Never mind. It took a lot longer, but it still came to the same error.

RedFoxNightfox commented 4 years ago

According to codeproject.com, the error "Tells you that there was already an active connection at the place of error where you probably try to open a second one."

and

"Common solution is obvious: Never try to open more than 1 connection. Problem and solution are in your code."

Wulfre commented 4 years ago

Unless I made a mistake somewhere (I'll have to read over the code again), every request uses the same session. This is also a single-threaded script, so it shouldn't be possible for multiple requests to happen at the same time. This is a really strange issue.
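
One failure mode that is consistent with a single long-lived session is the server closing an idle keep-alive socket between searches, so that the next request on the stale pooled connection gets reset. That is speculation, but it suggests a cheap experiment: after a failure, discard the client (and its pooled sockets) and retry once with a fresh one. A sketch, where `run_with_fresh_client` is a made-up helper and `ConnectionError` stands in for the httpx `NetworkError` from the tracebacks:

```python
def run_with_fresh_client(make_client, work, attempts=2):
    """Run work(client); on a connection-level failure, close the client
    (dropping its pooled sockets) and retry once with a brand-new one."""
    for attempt in range(attempts):
        client = make_client()
        try:
            return work(client)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # the fresh client failed too; give up
        finally:
            client.close()  # assumes the client exposes close(), as httpx does
```

If a fresh client always succeeds where the long-lived one fails, stale keep-alive connections would be the likely culprit; if it fails too, the problem is below the HTTP layer.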

RedFoxNightfox commented 4 years ago

Well, speaking to Kira on the e621 Discord, I was told the problem is apparently on my end, although I cannot find anything wrong anywhere. So I guess @lurkbbs is correct, but I cannot find any problem anywhere.

tracert passes every time, ping passes every time, I changed wiring, rebooted everything, reset the network and modem, and ran every troubleshooter. I do not see anything wrong with my connection.

So it's apparently out of my hands, and apparently not the program either. I just don't know anymore.

RedFoxNightfox commented 4 years ago

Just thought I'd give an update, as an interesting discovery has been made because of this error.

My motherboard is dying. This error actually allowed me to discover that. I purchased a PCIe NIC to test whether the error was caused by my onboard Ethernet port having micro-disconnects, and discovered that not only was that the case, but two of my PCIe slots are malfunctioning.

So yeah. That happened.