Open fnielsen opened 2 years ago
The workaround from https://stackoverflow.com/questions/61631955/python-requests-ssl-error-during-requests does not work:
url = 'https://infoteka.bg.ac.rs/ojs/index.php/Infoteka/issue/view/20'

import requests
import ssl
from urllib3 import poolmanager


class TLSAdapter(requests.adapters.HTTPAdapter):

    def init_poolmanager(self, connections, maxsize, block=False):
        """Create and initialize the urllib3 PoolManager."""
        ctx = ssl.create_default_context()
        ctx.set_ciphers('ALL')  # originally 'DEFAULT@SECLEVEL=1'
        self.poolmanager = poolmanager.PoolManager(
            num_pools=connections,
            maxsize=maxsize,
            block=block,
            ssl_version=ssl.PROTOCOL_TLS,
            ssl_context=ctx)


session = requests.session()
session.mount('https://', TLSAdapter())
res = session.get(url)
print(res)
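For reference, the cipher-level part of that workaround can be exercised with the standard library alone. This is a sketch assuming an OpenSSL-backed CPython; the 'DEFAULT@SECLEVEL=1' string is the one from the Stack Overflow answer, which lowers OpenSSL's security level so legacy handshakes (small DH keys, older signature algorithms) are accepted:

```python
import ssl

# Build a default client context, then lower OpenSSL's security level
# so that legacy servers can still complete the TLS handshake.
# This is the core of the Stack Overflow workaround.
ctx = ssl.create_default_context()
ctx.set_ciphers('DEFAULT@SECLEVEL=1')

# The relaxed context should still expose a non-empty cipher list.
ciphers = [c['name'] for c in ctx.get_ciphers()]
print(len(ciphers) > 0)
```

The context built this way is what the TLSAdapter above hands to urllib3's PoolManager.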
It still errors. Adding verify=False does not work either:
url = 'https://infoteka.bg.ac.rs/ojs/index.php/Infoteka/issue/view/20'
import requests
res = requests.get(url, verify=False)
print(res)
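That verify=False fails here is expected: it only disables certificate verification, not the cipher/protocol negotiation where this handshake breaks down. If both relaxations were wanted, the custom context itself can be told to skip verification. A sketch of that combination (not from the issue, and untested against the site above; note check_hostname must be cleared before verify_mode):

```python
import ssl

# Relax the handshake (security level) AND disable certificate checks
# on the same context -- hypothetical combination for illustration.
ctx = ssl.create_default_context()
ctx.set_ciphers('DEFAULT@SECLEVEL=1')
ctx.check_hostname = False        # must come before changing verify_mode
ctx.verify_mode = ssl.CERT_NONE   # skip certificate verification entirely

print(ctx.verify_mode == ssl.CERT_NONE)
```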
More info: https://www.openssl.org/news/vulnerabilities.html#CVE-2020-1967 (why has this not been upgraded?)
To recap: the OJS scraper fails with SSLError on certain websites, such as the URL above, and this can be reproduced with the snippets shown. A workaround is available at https://stackoverflow.com/questions/61631955/python-requests-ssl-error-during-requests, apparently based on https://github.com/psf/requests/issues/4775, but it does not help here.
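As an alternative to subclassing requests' HTTPAdapter, the same relaxed context can be handed to the standard library's urllib.request. A sketch (constructing the opener needs no network; actually fetching the URL does, so that line is left as a comment):

```python
import ssl
import urllib.request

# Hypothetical stdlib alternative: pass a relaxed SSLContext directly
# to an HTTPSHandler instead of mounting a custom requests adapter.
ctx = ssl.create_default_context()
ctx.set_ciphers('DEFAULT@SECLEVEL=1')

handler = urllib.request.HTTPSHandler(context=ctx)
opener = urllib.request.build_opener(handler)
# opener.open(url) would perform the request with the relaxed handshake.
print(type(opener).__name__)
```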