Russell-Newton / TikTokPy

Extract data from TikTok without needing any login information or API keys.
https://pypi.org/project/tiktokapipy/
MIT License
197 stars 25 forks

[BUG] Unable to collect comments. #30

Closed RichengLiang closed 1 year ago

RichengLiang commented 1 year ago

I'm getting a warning about being unable to collect comments. Is there any way to solve it? Thanks! The warning is as follows:

D:\Python\envs\tiktok\lib\site-packages\tiktokapipy\api.py:478: UserWarning: Was unable to collect comments. A second attempt might work. warnings.warn(

yueqingliang1 commented 1 year ago

Same warning.

Russell-Newton commented 1 year ago

@RichengLiang sorry for the late response, what parameters were you using to initialize TikTokPy? Do you have a sample of your code?

@yueqingliang1 same to you.

krishna555 commented 1 year ago

Hi @Russell-Newton, here is a simple way to reproduce the bug.

from tiktokapipy.api import TikTokAPI
with TikTokAPI() as api:
    video = api.video(<INSERT TIKTOK VIDEO URL>)
    comments = video.comments()
    comment_text = [comment.text for comment in comments]
    print("Comments: ", comment_text)

This gives the error:

/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktokapipy/api.py:477: UserWarning: Was unable to collect comments.
A second attempt might work.
  warnings.warn(
Traceback (most recent call last):
  File "./tiktok_test.py", line 7, in <module>
    comments = video.comments()
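The warning text suggests a second attempt might work. Purely as a sketch (retry_on_warning and flaky_fetch are hypothetical helpers, not part of tiktokapipy), one workaround while the bug was open would be to wrap the fetch in a retry loop that watches for the UserWarning:

```python
import warnings

def retry_on_warning(fn, attempts=3):
    """Call fn() repeatedly, retrying whenever it emits a UserWarning.

    Returns the first result produced without a warning, or the last
    result if every attempt warned.
    """
    result = None
    for _ in range(attempts):
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            result = fn()
        if not any(issubclass(w.category, UserWarning) for w in caught):
            return result
    return result

# Demo with a stub that warns on the first call and succeeds afterwards,
# mimicking the "second attempt might work" behavior described above.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] == 1:
        warnings.warn("Was unable to collect comments. A second attempt might work.")
        return []
    return ["first comment", "second comment"]

comments = retry_on_warning(flaky_fetch)
print(comments)  # ['first comment', 'second comment']
```

In real use, flaky_fetch would be replaced by a closure around the comment-collection call from the reproduction above.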
Russell-Newton commented 1 year ago

That means this is probably related to #23. Playwright's Chromium driver wasn't able to load the comments, so the default driver had to be switched to Firefox. It's probably broken with Firefox too. I haven't had time to look at this, but I hope to get to it in the next couple of weeks.

Russell-Newton commented 1 year ago

Please try again in version 0.1.13. This should be resolved.
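Assuming a standard pip setup, picking up the fixed release would look like:

```shell
pip install --upgrade "tiktokapipy>=0.1.13"
```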

Russell-Newton commented 1 year ago

I'm closing this issue as completed in 0.1.13. If for some reason it hasn't been resolved for you, either reopen this issue or create a new one. Happy scraping!