zinyosrim opened 6 years ago
But my scraping data is not stored in my_bucket:
What data do you want to store?
If you want to use FILES_STORE, you need to enable the Files Pipeline.
Please check here:
https://scrapy.readthedocs.io/en/stable/topics/media-pipeline.html#enabling-your-media-pipeline
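For reference, enabling the Files Pipeline and pointing FILES_STORE at a GCS bucket looks roughly like this in settings.py. This is a sketch: the bucket name and project ID below are placeholders, and your environment must be authenticated against Google Cloud (e.g. via GOOGLE_APPLICATION_CREDENTIALS).

```python
# settings.py -- sketch; 'my_bucket' and 'my-project-id' are placeholders.

# Activate Scrapy's built-in Files Pipeline.
ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
}

# Store downloaded files in a Google Cloud Storage bucket.
FILES_STORE = 'gs://my_bucket/files/'

# Project ID setting used when the store points at GCS.
GCS_PROJECT_ID = 'my-project-id'
```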
I meant doing things like:
in settings.py:
LOG_FILE = 'gs://my_bucket/scrapy_log.txt'
or
scrapy crawl my_spider -o gs://my_bucket/scrapy_items.json
This is possible with S3.
Sorry, this feature is called Feed exports, and it is not supported yet.
https://github.com/scrapy/scrapy/issues/3044#issuecomment-352942342
If you want to export the log or items to a bucket, you need to write custom code. I think it is good to refer to this S3 code.
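As a rough illustration of that custom approach, here is a sketch that uploads a local feed file to a GCS bucket. The names `parse_gcs_uri` and `upload_feed_to_gcs` are hypothetical, and the `google-cloud-storage` client library is assumed to be installed and authenticated; in a real Scrapy integration you would wrap something like this in a feed storage class, analogous to the S3 one.

```python
from urllib.parse import urlparse


def parse_gcs_uri(uri):
    """Split a gs:// URI into (bucket name, blob path)."""
    parsed = urlparse(uri)
    if parsed.scheme != 'gs':
        raise ValueError('expected a gs:// URI, got %r' % uri)
    return parsed.netloc, parsed.path.lstrip('/')


def upload_feed_to_gcs(local_path, gcs_uri):
    """Upload a local file (e.g. an exported feed) to GCS.

    Assumes the google-cloud-storage package is installed and the
    environment is authenticated (e.g. GOOGLE_APPLICATION_CREDENTIALS).
    """
    from google.cloud import storage  # imported lazily; optional dependency

    bucket_name, blob_path = parse_gcs_uri(gcs_uri)
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_path)
    blob.upload_from_filename(local_path)
```

For example, after a crawl that wrote scrapy_items.json locally, `upload_feed_to_gcs('scrapy_items.json', 'gs://my_bucket/scrapy_items.json')` would push it to the bucket.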
ok - thanks
Meanwhile I managed to output the images to the bucket - thanks. But my scraping data is not stored in my_bucket:
This is how I execute:
Did I miss anything? Thanks, Zin