lmbringas / packtpub-downloader

Script to download all your books from PacktPub, inspired by https://github.com/ozzieperez/packtpub-library-downloader

random error during download #40

shadow-absorber commented 3 years ago

ERROR (please copy and paste in the issue)
{'message': 'jwt expired', 'errorCode': 1000100, 'errorId': 'b7ff1925-fc4a-40f3-8cfd-b967144ced9d'} 401
Starting to download /home/sam_tunder/git/packtpub-downloader/packtpub/Go_Cookbook/Go_Cookbook.code
Traceback (most recent call last):
  File "main.py", line 226, in <module>
    main(sys.argv[1:])
  File "main.py", line 218, in main
    download_book(filename, url)
  File "main.py", line 104, in download_book
    r = requests.get(url, stream=True)
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/sessions.py", line 519, in request
    prep = self.prepare_request(req)
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/sessions.py", line 452, in prepare_request
    p.prepare(
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/models.py", line 313, in prepare
    self.prepare_url(url, params)
  File "/home/sam_tunder/.local/lib/python3.8/site-packages/requests/models.py", line 387, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL '': No schema supplied. Perhaps you meant http://?
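For context: the final MissingSchema error suggests that once the JWT expired, the API handed back an empty download URL, and the script then called requests.get('') with it. A minimal sketch of a guard around that spot, assuming a retry is possible (the callables fetch and refresh and the helper name download_with_retry are hypothetical, not taken from main.py):

```python
def download_with_retry(url, fetch, refresh):
    """Try fetch(url); if the signed URL was empty or rejected
    (e.g. because the JWT expired), get a fresh URL via refresh()
    and retry once.

    fetch(url)  -- hypothetical: downloads and returns the body,
                   raising ValueError on a bad or empty URL
    refresh()   -- hypothetical: re-authenticates and returns a
                   new signed download URL
    """
    if not url:
        # API returned no URL at all -- re-authenticate first
        url = refresh()
    try:
        return fetch(url)
    except ValueError:
        # URL was rejected; retry once with a fresh token/URL
        return fetch(refresh())
```

In the real script, fetch would wrap requests.get(url, stream=True) and refresh would repeat the login/token request, but the retry-once shape is the point here.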

reason195 commented 3 years ago

I got the same error

scivish commented 3 years ago

Not really an error so much as a security feature. The "jwt expired" error means the login token has expired; this keeps people from using your account if they somehow get a copy of your token. It's a real problem for anyone with hundreds of books in their collection.

The short-term mitigation is to delete the last file downloaded (usually 0 bytes) and run the app again, over and over. If even that doesn't work, try choosing only one format type at a time. The biggest issue is that just walking through the book inventory to sync the list takes too long once you hit roughly 500 files, so running the script repeatedly eventually fails once you hit about 1000 files.

Ideally, the script should download the list, save it to a file, and copy that to a second file. The second file is the working file: the name of each book the script successfully downloads gets deleted from it. I think the token expires after 30 minutes, so after about 20 minutes of downloading, the script should end the connection and connect again, but this time skip downloading the list and just continue from where the working file left off, repeating until everything is downloaded. The first file is kept so you can run again without re-fetching the list (meaning you won't get any new books added since the last download) and so you have a list to compare what you downloaded against.

(At some point I will look at the code, but I don't usually use Python, so I might be the wrong person to do it.)
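The save-the-list / working-file idea above could be sketched roughly like this. This is an illustration only, not the script's actual code: the file names and the fetch_list callback are made up, and the token-refresh loop that would call these helpers every ~20 minutes is omitted.

```python
import json
import os

LIST_FILE = "book_list.json"       # full inventory, kept for reference/comparison
WORK_FILE = "book_list.work.json"  # shrinks as downloads succeed

def load_work_list(fetch_list):
    """Return the list of books still to download.

    On the first run, fetch the full inventory once, save it to
    LIST_FILE, and copy it to WORK_FILE. On later runs, resume
    from WORK_FILE without re-fetching the inventory.

    fetch_list -- hypothetical callable returning the book list
    """
    if os.path.exists(WORK_FILE):
        with open(WORK_FILE) as f:
            return json.load(f)
    books = fetch_list()
    with open(LIST_FILE, "w") as f:
        json.dump(books, f)
    with open(WORK_FILE, "w") as f:
        json.dump(books, f)
    return books

def mark_done(book, pending):
    """Remove a finished book from the working file so that a
    restart (e.g. after a token refresh) skips it."""
    pending.remove(book)
    with open(WORK_FILE, "w") as f:
        json.dump(pending, f)
```

The design choice is exactly the one described above: LIST_FILE never changes after the first run, so it doubles as a record to compare downloads against, while WORK_FILE is the resumable queue.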