We need to scrape all images from the homepage of a specific lifestyle website and store them in an Amazon S3 bucket. You will be provided with the necessary AWS credentials to upload the images to S3.
Requirements:
1. Scraping:
Scrape all image URLs from the homepage of the specified lifestyle website.
Ensure all types of images (banners, product images, slider images, etc.) are captured.
Use a library like BeautifulSoup or Scrapy for scraping the website.
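The scraping step above could be sketched as follows with BeautifulSoup. This is a minimal sketch, not the final deliverable: the homepage URL is a placeholder, and the lazy-load attributes checked (`data-src`, `data-lazy-src`) are common conventions that may need adjusting for the actual site.

```python
# Sketch: collect all image URLs from a homepage with BeautifulSoup.
from urllib.parse import urljoin

from bs4 import BeautifulSoup

def extract_image_urls(html: str, base_url: str) -> list[str]:
    """Return absolute URLs for every <img> on the page.

    Also checks srcset and common lazy-load attributes so banner and
    slider images that are not in a plain `src` are still captured.
    """
    soup = BeautifulSoup(html, "html.parser")
    urls = []
    for img in soup.find_all("img"):
        # Prefer src, fall back to common lazy-load attributes.
        src = img.get("src") or img.get("data-src") or img.get("data-lazy-src")
        if src:
            urls.append(urljoin(base_url, src))
        # srcset may carry additional (often higher-resolution) variants.
        for candidate in (img.get("srcset") or "").split(","):
            candidate = candidate.strip().split(" ")[0]
            if candidate:
                urls.append(urljoin(base_url, candidate))
    # Deduplicate while preserving document order.
    return list(dict.fromkeys(urls))

# Hypothetical usage (URL is a placeholder for the real site):
#   resp = requests.get("https://example-lifestyle-site.com/", timeout=30)
#   image_urls = extract_image_urls(resp.text, resp.url)
```

Images rendered via CSS backgrounds or injected by JavaScript would not appear in the raw HTML; if the site relies on those, a browser-driven tool (e.g. Playwright or Selenium) may be needed instead.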
2. Image Download:
Download the images from the URLs obtained during scraping.
Ensure all images are downloaded in their original quality and format (.jpg, .png, etc.).
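A sketch of the download step, assuming the URL list from the scraping step. Writing the raw response bytes (no re-encoding or resizing) preserves the original quality and format; the destination directory name is an assumption.

```python
# Sketch: download each image to a local folder, keeping the original
# file name and extension. Errors are logged so one bad URL does not
# abort the whole run.
import logging
import os
from urllib.parse import urlparse

import requests

def filename_from_url(url: str, fallback: str = "image") -> str:
    """Derive a local filename from the URL path."""
    name = os.path.basename(urlparse(url).path)
    return name or fallback

def download_images(urls, dest_dir="downloads"):
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for url in urls:
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
        except requests.RequestException as exc:
            logging.error("Failed to download %s: %s", url, exc)
            continue
        path = os.path.join(dest_dir, filename_from_url(url))
        with open(path, "wb") as fh:
            fh.write(resp.content)  # raw bytes: original quality and format
        saved.append(path)
    return saved
```

If the site serves different images under the same basename from different paths, the naming scheme would need a uniqueness guard (e.g. a hash or counter suffix).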
3. Amazon S3 Storage:
Use the provided AWS credentials to upload all the scraped images to a specified Amazon S3 bucket.
Ensure the images are stored in an organized manner (e.g., a separate folder per scrape, or descriptive filenames).
Handle any errors during upload (e.g., connection issues, file size limits).
Acceptance Criteria:
All homepage images are successfully scraped and downloaded.
All downloaded images are uploaded to the specified S3 bucket.
There should be no missing images, and all uploads should be organized within the S3 bucket.
Errors during the scraping or uploading process should be logged for review.
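To meet the logging criterion, a single logging configuration at startup routes the errors emitted during scraping, download, and upload into one reviewable file. The log file name and format are assumptions.

```python
# Sketch: send all error-level messages to a log file for post-run review.
import logging

logging.basicConfig(
    filename="scrape_errors.log",   # assumed log file name
    level=logging.ERROR,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,  # replace any handlers configured earlier in the process
)
```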