joeyism / linkedin_scraper

A library that scrapes Linkedin for user data
GNU General Public License v3.0
2.01k stars 560 forks

NoSuchExceptionWarning #87

Open itsbillzhang opened 3 years ago

itsbillzhang commented 3 years ago

Hi, first off thanks for making the script.

I was using it fine last week. However, opening it today and running the same code, which is:

urls = [
    "...",
    "...",
]
bobs = []
for url in urls:
    a = Person(url, driver=driver, close_on_complete=False)
    bobs.append(a)

I get a "Message: no such element: Unable to locate element: {"method":"css selector","selector":".pv-top-card"} (Session info: chrome=89.0.4389.90)" error.

The strange thing is that this error only happens sometimes. Running the cell three times might scrape the first URL fine and then fail on the second, or fail immediately on another run.
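
Since the failure is intermittent, it looks like a page-load timing issue. As a workaround (a sketch of my own, not part of linkedin_scraper; the `scrape_with_retry` name and parameters are hypothetical), the constructor call could be wrapped in a small retry helper that waits and tries again when the element is missing:

```python
import time

def scrape_with_retry(make_scraper, retry_on, retries=3, delay=5):
    """Call make_scraper(), retrying when the page hasn't rendered yet.

    retry_on: exception type(s) to retry on, e.g.
    selenium.common.exceptions.NoSuchElementException.
    """
    for attempt in range(retries):
        try:
            return make_scraper()
        except retry_on:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)  # give LinkedIn more time to load
```

With the loop above, each constructor call would become `scrape_with_retry(lambda: Person(url, driver=driver, close_on_complete=False), NoSuchElementException)`.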

Lightmare2 commented 3 years ago

Hi,

I was just setting up my company scraper this week and a similar warning occurred.

My code:

import os
from linkedin_scraper import Person, Company, actions
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r"D:\Python\chromedriver_win32\chromedriver.exe")

email = os.getenv("LINKEDIN_USER")
password = os.getenv("LINKEDIN_PASSWORD")
actions.login(driver, email, password) # if email and password isn't given, it'll prompt in terminal

urls = ["https://www.linkedin.com/company/indiatimes","https://www.linkedin.com/company/criticalhitnet/about/", "https://www.linkedin.com/company/enca/"]
companies = []

for url in urls:
    company = Company(url, driver=driver, close_on_complete=False)
    companies.append(company)

for company in companies:
    print(company.headquarters)

and got this Error message:

NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".org-people-profiles-module__profile-list"} (Session info: chrome=89.0.4389.90)

So it's a similar error, but in my case the code doesn't work at all.

Ps thank you for the scraper :)

aradzekler commented 3 years ago

I'm also experiencing the error: NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".org-people-profiles-module__profile-list"} (Session info: chrome=89.0.4389.90)

With the following code:

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome("./chromedriver", options=chrome_options)
email = "MAIL"
password = "PASSWORD"
actions.login(driver, email, password) # if email and password isn't given, it'll prompt in terminal

person = Person("https://il.linkedin.com/in/ravid-kuperberg-59752041", driver=driver)

company = Company("http://www.linkedin.com/company/rapiscan-systems", driver=driver)
company.scrape(get_employees=False)
print("Company: " + company.website)

Lightmare2 commented 3 years ago

Hi, I managed to skip that error by doing this. (In this case I'm not scraping the employees, just the number of employees; not sure what kind of information you need.) And yes, the scraper is working for me (I get the name, type and size):

import os
from linkedin_scraper import Person, Company, actions
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
driver = webdriver.Chrome(executable_path=r"D:\Python\chromedriver_win32\chromedriver.exe")
#options = webdriver.Chrome(executable_path=r"D:\Python\chromedriver_win32\chromedriver.exe")
#options.add_experimental_option('excludeSwitches', ['enable-logging'])
#driver = webdriver.Chrome(options=options)

email = "Some email"       #os.getenv("LINKEDIN_USER")
password = "Some password" #os.getenv("LINKEDIN_PASSWORD")
actions.login(driver, email, password) # if email and password isn't given, it'll prompt in terminal

urls = ["https://www.linkedin.com/company/indiatimes","https://www.linkedin.com/company/criticalhitnet/about/", "https://www.linkedin.com/company/enca/", "https://www.linkedin.com/company/aljazeera/"]
companies=[]  

for url in urls:
    company = Company(url, driver=driver, close_on_complete=False, get_employees=False)
    companies.append(company)

for company in companies:
    print(company.headquarters)
    print(company.name)
    print(company.company_type)
    print(company.company_size)
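
The `NoSuchElementException` import above also suggests guarding the individual attribute reads in the print loop. A minimal sketch (the `safe_attr` helper is my own, not part of linkedin_scraper) that keeps one missing field from aborting the whole loop:

```python
def safe_attr(company, name, default="n/a"):
    # Return the scraped attribute if it was populated, else a
    # placeholder, so one missing field doesn't crash the loop.
    value = getattr(company, name, None)
    return value if value else default
```

The second loop would then read e.g. `print(safe_attr(company, "headquarters"))`.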

joeyism commented 3 years ago

Ok, I fixed the top-card error in 2.7.6. Let me know if it still doesn't work for you; I can make the delay longer.