:warning: IMPORTANT: This package is no longer maintained or supported. For the latest updates, please use our new package at crawlbase-python.
A lightweight, dependency-free Python class that acts as a wrapper for the ProxyCrawl API.
Install the package using pip:
pip install proxycrawl
Then import CrawlingAPI, ScraperAPI, etc. as needed.
from proxycrawl import CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI, StorageAPI
Version 3 deprecates the usage of ProxyCrawlAPI in favour of CrawlingAPI (although it is still usable). Please test the upgrade before deploying to production.
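A minimal sketch of the rename; that the deprecated ProxyCrawlAPI alias remains importable from the package top level is an assumption to verify in your installed version:

# Deprecated since version 3, but still importable:
from proxycrawl import ProxyCrawlAPI
# Preferred going forward:
from proxycrawl import CrawlingAPI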
First initialize the CrawlingAPI class.
api = CrawlingAPI({ 'token': 'YOUR_PROXYCRAWL_TOKEN' })
Pass the URL that you want to scrape plus any options from the ones available in the API documentation.
api.get(url, options = {})
Example:
response = api.get('https://www.facebook.com/britneyspears')
if response['status_code'] == 200:
    print(response['body'])
You can pass any options supported by the ProxyCrawl API.
Example:
response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
    'user_agent': 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
    'format': 'json'
})
if response['status_code'] == 200:
    print(response['body'])
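When you pass 'format': 'json', the response body is a JSON string; a minimal sketch of parsing it (the exact fields of the returned envelope are described in the API documentation):

import json

data = json.loads(response['body'])  # json.loads accepts both str and bytes bodies
print(data)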
Pass the URL that you want to scrape, the data that you want to send (either a dictionary or a string), plus any options from the ones available in the API documentation.
api.post(url, dictionary or string data, options = {})
Example:
response = api.post('https://producthunt.com/search', { 'text': 'example search' })
if response['status_code'] == 200:
    print(response['body'])
You can send the data as application/json instead of x-www-form-urlencoded by setting the post_content_type option to json.
import json
response = api.post('https://httpbin.org/post', json.dumps({ 'some_json': 'with some value' }), { 'post_content_type': 'json' })
if response['status_code'] == 200:
    print(response['body'])
If you need to scrape websites built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript, not .post.
api = CrawlingAPI({ 'token': 'YOUR_JAVASCRIPT_TOKEN' })
response = api.get('https://www.nfl.com')
if response['status_code'] == 200:
    print(response['body'])
In the same way, you can pass additional JavaScript options.
response = api.get('https://www.freelancer.com', { 'page_wait': 5000 })
if response['status_code'] == 200:
    print(response['body'])
You can always get the original status and the ProxyCrawl status from the response. Read the ProxyCrawl documentation to learn more about those statuses.
response = api.get('https://craigslist.com')
print(response['headers']['original_status'])
print(response['headers']['pc_status'])
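For instance, a minimal retry sketch keyed on pc_status; the helper, the retry policy, and the assumption that response['headers'] is a dict of string values are illustrative, not part of the library:

import time

def get_with_retry(api, url, options=None, attempts=3):
    # Hypothetical helper: retry while ProxyCrawl reports a failed fetch
    # via the pc_status header (header values assumed to be strings).
    for attempt in range(attempts):
        response = api.get(url, options or {})
        if response['headers'].get('pc_status') == '200':
            break
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    return response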
If you have questions or need help using the library, please open an issue or contact us.
The usage of the Scraper API is very similar; just change the class name when initializing.
scraper_api = ScraperAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = scraper_api.get('https://www.amazon.com/DualSense-Wireless-Controller-PlayStation-5/dp/B08FC6C75Y/')
if response['status_code'] == 200:
    print(response['json']['name'])  # Will print the name of the Amazon product
To find email leads you can use the Leads API; check the full API documentation if needed.
leads_api = LeadsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = leads_api.get_from_domain('microsoft.com')
if response['status_code'] == 200:
    print(response['json']['leads'])
Initialize with your Screenshots API token and call the get method.
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
or specify a file path
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'save_to_path': 'apple.jpg' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
or if you set store=true, then screenshot_url is returned in the response headers
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'store': 'true' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
    print(response['headers']['screenshot_url'])
Note that the screenshots_api.get(url, options) method accepts an options dictionary.
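For example, a sketch combining the save_to_path and store options shown above; whether the two can be combined in a single call is an assumption worth verifying against the API documentation:

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', {
    'save_to_path': 'apple.jpg',  # write the screenshot to this local file
    'store': 'true'               # also store it so screenshot_url is returned
})
if response['status_code'] == 200:
    print(response['headers']['screenshot_url'])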
Initialize the Storage API using your private token.
storage_api = StorageAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
Pass the URL that you want to retrieve from ProxyCrawl Storage.
response = storage_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])
or you can use the RID
response = storage_api.get('RID_REPLACE')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])
Note: either the RID or the URL must be sent; both parameters are optional, but it is mandatory to send one of the two.
To delete a storage item from your storage area, use the correct RID
if storage_api.delete('RID_REPLACE'):
    print('delete success')
else:
    print('Unable to delete')
To do a bulk request with a list of RIDs, send the list of RIDs as an array
response = storage_api.bulk(['RID1', 'RID2', 'RID3', ...])
if response['status_code'] == 200:
    for item in response['json']:
        print(item['original_status'])
        print(item['pc_status'])
        print(item['url'])
        print(item['rid'])
        print(item['stored_at'])
        print(item['body'])
To request a bulk list of RIDs from your storage area
rids = storage_api.rids()
print(rids)
You can also specify a limit as a parameter
storage_api.rids(100)
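Putting the two storage calls together, a minimal sketch that lists stored RIDs and then fetches the corresponding documents in one bulk request (the limit of 100 is an arbitrary choice):

rids = storage_api.rids(100)       # up to 100 stored RIDs
response = storage_api.bulk(rids)  # fetch those documents in a single request
if response['status_code'] == 200:
    for item in response['json']:
        print(item['rid'], item['url'])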
To get the total number of documents in your storage area
total_count = storage_api.totalCount()
print(total_count)
If you need to use a custom timeout, you can pass it to the class instance creation like the following:
api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 120 })
Timeout is in seconds.
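If a request exceeds the timeout, the underlying HTTP client's error may propagate; a minimal guard sketch, assuming the exception is not swallowed by the library:

api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 120 })
try:
    response = api.get('https://www.example.com')
except Exception as exc:  # exact exception type depends on the HTTP client used internally
    print('Request failed or timed out:', exc)
else:
    print(response['status_code'])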
Copyright 2023 ProxyCrawl