Closed uenro closed 5 years ago
You can start spiders using these lines
scrapy crawl recent --logfile recent.log --loglevel INFO
scrapy crawl debug --logfile detailed.log --loglevel INFO
But beforehand you should set up PostgreSQL and MongoDB on your machine. MongoDB works out of the box: you just need to install it from the official website. PostgreSQL requires a little more work: you should create a user with full privileges over the database, and create the database itself. In settings.py you should set POSTGRES_HOST, POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DBNAME accordingly. But the most complicated part for you will be the API_KEY variable, because it has to be granted to you by Avito support, and unless you represent a legal entity that's highly unlikely.
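For reference, the settings.py section would look roughly like this. All values below are placeholders, not the project's actual defaults; use the credentials of the user and database you created yourself:

```python
# settings.py (sketch with placeholder values, adjust to your setup)
POSTGRES_HOST = "localhost"
POSTGRES_USER = "scrapy_user"      # the user you created with full privileges
POSTGRES_PASSWORD = "change-me"
POSTGRES_DBNAME = "avito"
API_KEY = ""                       # must be granted by Avito support
```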
@kubikrubikvkube Does Avito support grant access to view only my own items, or can I view (and parse) all items? As I understand from the documentation referenced here, I can only work with my own items.
Just your own. It's an API limitation, as you noted from the documentation.
Thank you. But it seems like it's possible to parse other users' items without an API key, using the architecture of your parser.
I can only find a cfg file to make the parser work with Scrapy. Are there any instructions on how to use it? Which variables do I have to edit?