This is a LinkedIn Python web scraper for companies. The script fully simulates human activity (using the Selenium library) in order to get data from LinkedIn web pages. The purpose is to store data about companies in a certain zone.
Once collected, this information is stored in an .xlsx
file.
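The "human activity" simulation typically comes down to randomized pauses between browser actions. The helper below is only an illustrative sketch: the `human_delay` name and the timing bounds are assumptions, not taken from this repo.

```python
import random
import time

def human_delay(min_s: float = 1.0, max_s: float = 3.5) -> float:
    """Sleep for a random, human-looking interval and return its length."""
    pause = random.uniform(min_s, max_s)
    time.sleep(pause)
    return pause

# In a scraper, a call like this would sit between Selenium actions,
# e.g. between loading a company page and reading its details.
waited = human_delay(0.01, 0.05)  # short bounds just for the demo
print(f"paused for {waited:.3f}s")
```

Varying the pause length (rather than sleeping a fixed amount) makes the request pattern look less mechanical.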
Any actions and/or activities related to the material contained within this repo are solely your responsibility. Misuse of the information in this repo can result in criminal charges being brought against the company in question. The author will not be held responsible in the event that any criminal charges are brought against any individuals misusing the information in this repo to break the law.
As written in the LinkedIn User Agreement: "you agree you will not use [...] any bots or other automated methods to access the Services, add or download contacts, send or redirect messages."
Clone project
git clone https://github.com/J4NN0/linkedin-web-scraper.git
cd linkedin-web-scraper
Install requirements
pip install -r requirements.txt
Download the web driver you prefer and put it inside the project folder.
Set the missing configs in config.ini:
- EMAIL and PASSWORD
- WEBDRIVER (downloaded in step 3)
- CITY from which companies have to be fetched

Note that other kinds of parameters can also be set.
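A config.ini with those keys can be read with Python's standard-library configparser. The layout below is a guess at the file's shape: the section name LINKEDIN and the sample values are assumptions, so check the config.ini shipped with the repo for the exact structure.

```python
import configparser

# Hypothetical layout; the key names follow this README, but the
# section title "LINKEDIN" and the values are assumed.
SAMPLE = """
[LINKEDIN]
EMAIL = you@example.com
PASSWORD = your-password
WEBDRIVER = chromedriver
CITY = Turin
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)  # the real script would call config.read("config.ini")

email = config["LINKEDIN"]["EMAIL"]
city = config["LINKEDIN"]["CITY"]
print(email, city)
```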
Run script
python3 main.py
Data will be stored in the companies.xlsx
file.
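The storage step boils down to flattening each scraped company record into one spreadsheet row. The sketch below uses the standard-library csv module as a stand-in, since the repo itself writes .xlsx (presumably through an Excel library); the field names and records are illustrative, not taken from the scraper.

```python
import csv
import io

# Illustrative records; the fields actually scraped by the repo may differ.
companies = [
    {"name": "Acme S.p.A.", "website": "acme.example", "city": "Turin"},
    {"name": "Globex", "website": "globex.example", "city": "Turin"},
]

buffer = io.StringIO()  # the real script would open a file on disk instead
writer = csv.DictWriter(buffer, fieldnames=["name", "website", "city"])
writer.writeheader()
writer.writerows(companies)
print(buffer.getvalue())
```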
It may happen that, after the login phase, LinkedIn asks you to perform some actions/operations (e.g. an "I'm not a robot" check) instead of redirecting you to the feed page (https://www.linkedin.com/feed/).
In this case:
If you face issues using Python 3.9
(e.g. when installing dependencies), please try Python 3.7
or below (but not earlier than Python 3.0
).