Lodour / Weibo-Album-Crawler

A multiprocessing crawler for weibo albums.
MIT License

API限制 #8

Closed Lodour closed 2 years ago

Lodour commented 7 years ago

When using http://photo.weibo.com/photos/get_photo_ids to fetch all photo ids of an album, only the latest 2000 photos can be retrieved.

Lodour commented 7 years ago

Consider replacing it with the following API. URL: http://photo.weibo.com/photos/get_all, params: uid, album_id, count(30), page, type, __rnd
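A minimal sketch of paginating the suggested endpoint. The parameter names (uid, album_id, count, page, type, __rnd) come from the comment above; the `type` value, the response layout (`data` → `photo_list`), and the `fetch` callable (which would wrap a real HTTP request with login cookies) are all assumptions, not confirmed by this repo's code.

```python
import time

GET_ALL_URL = 'http://photo.weibo.com/photos/get_all'


def get_all_params(uid, album_id, page, count=30, type_=3):
    """Build query parameters for one page of get_all.

    The param names are from the issue comment; type_=3 is a guess.
    """
    return {
        'uid': uid,
        'album_id': album_id,
        'count': count,
        'page': page,
        'type': type_,
        '__rnd': int(time.time() * 1000),  # cache-busting timestamp
    }


def iter_all_photos(fetch, uid, album_id, count=30):
    """Yield photos page by page until the API returns an empty page.

    `fetch(url, params)` is a stand-in for an authenticated HTTP GET
    that returns the decoded JSON body; the 'data'/'photo_list' keys
    are an assumed response shape.
    """
    page = 1
    while True:
        body = fetch(GET_ALL_URL, get_all_params(uid, album_id, page, count))
        photos = body.get('data', {}).get('photo_list', [])
        if not photos:
            break
        yield from photos
        page += 1
```

Paging until an empty `photo_list` avoids relying on a total count field, at the cost of one extra request at the end.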

Lodour commented 6 years ago

Calling this API frequently tends to get the session logged out.

soldier828 commented 6 years ago

> Consider replacing it with the following API. URL: http://photo.weibo.com/photos/get_all, params: uid, album_id, count(30), page, type, __rnd

Can you be more specific? How should the params be changed?

Lodour commented 6 years ago

It is already quite specific.

Vopaaz commented 5 years ago

After making the replacement, it raises an error:

```
Traceback (most recent call last):
  File "main.py", line 5, in <module>
    Crawler(target).start()
  File "C:\ClonedProjects\Weibo-Album-Crawler\weibo\core.py", line 56, in start
    self.__download_album(album)
  File "C:\ClonedProjects\Weibo-Album-Crawler\weibo\core.py", line 69, in __download_album
    all_large_pics = self.__fetch_large_pics(album, all_photo_ids)
  File "C:\ClonedProjects\Weibo-Album-Crawler\weibo\core.py", line 110, in __fetch_large_pics
    for i in range(0, len(ids), chunk_size)
  File "C:\ClonedProjects\Weibo-Album-Crawler\weibo\core.py", line 110, in <dictcomp>
    for i in range(0, len(ids), chunk_size)
TypeError: unhashable type: 'slice'
```

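This `TypeError` usually means a dict (or another mapping) was indexed with a slice: `d[0:2]` hashes the slice object as a key, and slices are unhashable. One plausible cause, since `core.py` is not shown here, is that the new API returned the photo ids as a dict while the chunking code still expected a sliceable sequence. The sketch below is a guess at a fix, not the project's actual code:

```python
def chunked(ids, chunk_size):
    """Split the given ids into fixed-size chunks.

    Converting to a list first makes dicts and other iterables
    sliceable, avoiding "unhashable type: 'slice'".
    """
    ids = list(ids)  # a dict yields its keys here
    return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]
```

Reproducing the failure is straightforward: `{'a': 1}[0:2]` raises `TypeError: unhashable type: 'slice'`, because dict lookup tries to hash the slice.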
Lodour commented 2 years ago

No longer needed.