Thank you for the pull request and for describing the changes so clearly! This does look useful, but we'd rather not support this functionality, which focuses on specific users, as part of this scraper.
Any user can create "bookshelves," which are essentially lists of books, for whatever purpose they like. If a shelf is public, we can easily pull the book_ids it contains and use the existing scraper to scrape all the info we want for those books.
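To make that concrete, here is a minimal sketch of how book_ids could be collected from a public shelf. This is not the code in this PR; it assumes the shelf page links each book as /book/show/<book_id> and accepts a page query parameter, both of which are assumptions about the current Goodreads HTML rather than guarantees.

```python
import re

import requests
from bs4 import BeautifulSoup


def get_shelf_book_ids(user_id, shelf_name, max_pages=10):
    """Collect book ids from a public Goodreads shelf (illustrative sketch only)."""
    book_ids = []
    for page in range(1, max_pages + 1):
        # Assumed URL layout, based on the example link in this description.
        url = (f"https://www.goodreads.com/review/list/{user_id}"
               f"?shelf={shelf_name}&page={page}")
        response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")

        # Assumption: each book on the shelf is linked as /book/show/<book_id>...
        ids_on_page = []
        for link in soup.find_all("a", href=True):
            match = re.search(r"/book/show/(\d+)", link["href"])
            if match:
                ids_on_page.append(match.group(1))

        if not ids_on_page:
            break  # no more results; stop paginating
        book_ids.extend(i for i in ids_on_page if i not in book_ids)
    return book_ids
```

The ids gathered this way can then be handed to the existing per-book scraper, just like a hand-curated list of book_ids.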
I included an example in test_scripts.sh, but I'll walk through another one here. Say we want to pull the books Bill Gates wants to read; that list lives at https://www.goodreads.com/review/list/62787798?shelf=to-read
That URL contains both the user_id (62787798) and the shelf_name ("to-read"), so to scrape those books we would run this command:
python get_books.py --l --user_id 62787798 --shelf_name to-read --output_directory_path output
with the "--l" indicating we want to pull our list from a goodreads shelf.