Open 123jimenez99 opened 2 months ago
I think the issue occurs when you first download a.com/b/c.html
and then try to download a.com/b,
because a.com/b
has already been created as a folder, and now suckit
is trying to write a file with the same name. I think there is an issue with the function that creates a path from a given URL.
Here IIRC.
I'm quite busy at the moment, so I would appreciate any help on this.
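The collision described above can be reproduced with plain `std::fs` calls, independent of suckit itself. This is a minimal sketch (not suckit's actual code, whose URL-to-path function I haven't inspected): saving `a.com/b/c.html` first forces `a.com/b` to exist as a directory, so a later attempt to save the page at `a.com/b` as a file fails.

```rust
use std::fs;

fn main() {
    // First download: a.com/b/c.html is saved, which implicitly
    // creates a.com/b as a *directory* on disk.
    fs::create_dir_all("a.com/b").unwrap();
    fs::write("a.com/b/c.html", "<html></html>").unwrap();

    // Second download: the page at URL a.com/b maps to the path
    // "a.com/b", but a directory with that name already exists,
    // so opening it as a file for writing fails (EISDIR on Linux).
    let result = fs::write("a.com/b", "<html></html>");
    assert!(result.is_err());
    println!("collision reproduced: {:?}", result.unwrap_err().kind());
}
```

A common workaround in other crawlers is to map such URLs to something like `a.com/b/index.html` instead, so the directory and the page never share a path.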
I'm afraid I can't help in that regard as I have no programming knowledge. In the end I managed to create a complete archive using the Browsertrix-Crawler Docker container. In any case, thanks for your support and I hope the best for your project!
I will leave it open as the issue is still there. Thanks for the feedback :)
Hey everyone, I encountered an issue while attempting to clone the wiki.raregamingdump.ca website for archiving purposes. Here's the error message I got:
This is the command I used:
suckit https://wiki.raregamingdump.ca -v -t5 -c -j 64
Any insights or suggestions on how to resolve this would be greatly appreciated.
Cheers!