JoMingyu / google-play-scraper

Google Play scraper for Python, inspired by <facundoolano/google-play-scraper>
MIT License
757 stars · 207 forks

[BUG] reviews_all doesn't download all reviews of an app with large amount of reviews #209

Open Jl-wei opened 7 months ago

Jl-wei commented 7 months ago

Library version 1.2.6

Describe the bug I cannot download all the reviews of an app with a large number of reviews. The number of downloaded reviews is always a multiple of 199.

Code

result = reviews_all("com.google.android.apps.fitness")
print(len(result))
# get 995

Expected behavior Expect to download all the reviews with reviews_all; there should be at least 20k.

Additional context No

funnan commented 7 months ago

I'm seeing the same issue even when I set the number of reviews (25,000 in my case). I'm only getting back about 500, and the output number changes each time I run it.

Jl-wei commented 7 months ago

I'm seeing the same issue even when I set the number of reviews (25,000 in my case). I'm only getting back about 500, and the output number changes each time I run it.

Me too, and I found that the output number is always a multiple of 199. It seems that Google Play randomly blocks retrieval of the next page of reviews.

adilosa commented 7 months ago

This is probably a dupe of #208.

The error seems to be the Play service intermittently returning an error inside a 200 success response, which then fails to parse as the JSON the library expects. It seems to contain this ....store.error.PlayDataError message.

)]}'

[["wrb.fr","UsvDTd",null,null,null,[5,null,[["type.googleapis.com/wireless.android.finsky.boq.web.data.store.error.PlayDataError",[1]]]],"generic"],["di",45],["af.httprm",45,"-6355766929392607683",2]]

The error seems to happen frequently but not reliably. Scraping in chunks of 200 reviews, basically every request has a decent chance of crashing, resulting in usually 200-1000 total reviews scraped before it craps out.

Currently, the library swallows this exception silently and quits. Handling this error lets the scraping continue as normal.
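As an aside, the leading )]}' in the payload above is an anti-XSSI prefix that Google prepends to these responses; the library's Regex.REVIEWS handles extraction, but anyone inspecting the raw body by hand has to strip it before json.loads will accept it. A minimal sketch (the raw string is a stand-in, not a real response):

```python
import json

# Stand-in for a raw batchexecute-style reply: anti-XSSI prefix, blank line, JSON body.
raw = ")]}'\n\n[[\"wrb.fr\", \"UsvDTd\", null]]"

# Drop everything up to and including the first blank line when the prefix is present.
body = raw.split("\n\n", 1)[1] if raw.startswith(")]}'") else raw
data = json.loads(body)
print(data[0][0])  # → wrb.fr
```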

We monkey-patched around it like this and seem to have gotten back to workable scraping:

import json
from typing import Optional

import google_play_scraper
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-1][-1]

google_play_scraper.reviews._fetch_review_items = _fetch_review_items
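One caveat with this patch: if Google keeps returning PlayDataError, the recursive retry never terminates. A bounded-retry helper is one way to cap that; this is a generic sketch with hypothetical names (fetch_with_retry, fetch), not part of the library:

```python
import time

def fetch_with_retry(fetch, max_retries=5, backoff_seconds=0.5):
    """Call fetch() until the body no longer contains the Play error,
    giving up after max_retries attempts."""
    for attempt in range(max_retries):
        body = fetch()
        if "error.PlayDataError" not in body:
            return body
        time.sleep(backoff_seconds * attempt)  # simple linear backoff
    raise RuntimeError(f"PlayDataError persisted after {max_retries} attempts")

# Fake fetcher that fails twice before succeeding:
responses = iter([
    "...error.PlayDataError...",
    "...error.PlayDataError...",
    ")]}' ...reviews payload...",
])
print(fetch_with_retry(lambda: next(responses), backoff_seconds=0))
```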
funnan commented 7 months ago

Still not able to get more than a few hundred reviews.

paulolacombe commented 7 months ago

@funnan, the monkey patch @adilosa posted worked well for me.

Shivam-170103 commented 7 months ago

Hey @adilosa @funnan @paulolacombe, could you all please explain how to implement this fix? I am trying to scrape reviews using reviews_all in Google Colab, but it won't work for me. It would be great if you could help!

paulolacombe commented 7 months ago

Hey @Shivam-170103, you need to use the code @adilosa provided to replace the corresponding lines in the reviews.py file in your environment. Let me know if that helps, as I am not that familiar with Google Colab.

terrichiachia commented 6 months ago

Thanks @adilosa and @paulolacombe, your posts worked for me :)

lucasbral commented 6 months ago

I don't know why, but even applying @adilosa's solution, the number of reviews returned here is still very low.

ej-white commented 6 months ago

Hello! I tried the monkey patch suggested by @adilosa, scraping a big app like eBay.

Instead of getting 8 or 10 reviews, I did end up getting 199, but I am expecting thousands of reviews (that's how it used to be several weeks ago).

Any update on getting this fixed? Cheers, and thank you

sfischerw commented 6 months ago

Same for me TT: the number of reviews scraped has plummeted since around 15 Feb, and @adilosa's patch does not change my numbers by much. Is there something else I can try?

funnan commented 6 months ago

This mod did not work for me either, so I tried a different approach that did:

In reviews.py:

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            #funnan MOD start
            token = continuation_token.token
            continue
            #MOD end
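As later reports in this thread suggest, retrying unconditionally on every failure can spin forever when every request errors out. A generic way to bound that is a consecutive-failure counter; this toy sketch (not the library's code) mirrors the continue-on-failure shape of the mod:

```python
MAX_CONSECUTIVE_FAILURES = 10

def paged_fetch(pages):
    """Consume an iterator of pages. None simulates a failed fetch
    (the TypeError/IndexError path); an empty list means no more pages."""
    results, failures = [], 0
    for page in pages:
        if page is None:
            failures += 1
            if failures >= MAX_CONSECUTIVE_FAILURES:
                break  # give up instead of retrying forever
            continue   # retry, like the `continue` in the mod above
        failures = 0   # reset on any successful page
        if not page:
            break
        results.extend(page)
    return results

print(paged_fetch(iter([None, ["a", "b"], None, ["c"], []])))  # → ['a', 'b', 'c']
```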
sfischerw commented 6 months ago

@funnan, thanks for sharing it! It does not fix the issue for me; I still only retrieve 200-300 reviews for an app like eBay, and every run still yields a different number of reviews.

ej-white commented 6 months ago

@funnan Thank you! I tried that and seemed to get a few more reviews, but not the full count. I'm not sure if I implemented the patch correctly, though.

What I did was put the entire features/reviews.py into a new file (my_reviews.py), updated the try/except block with your change, and patched it like this:

import google_play_scraper
from my_reviews import reviews  # <- patched version

google_play_scraper.features.reviews = reviews

# Then call google_play_scraper.reviews(app, count=1000, ...)

Is this how to apply your patch? If not, could you provide an example of the correct way? Thanks so much
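One general Python pitfall worth checking when a monkey patch seems to have no effect: rebinding a module attribute does not change names that were already bound with `from ... import ...`. A small self-contained demonstration (fake module, not the real library):

```python
import types

# Fake module standing in for a scraper module.
mod = types.ModuleType("fake_reviews")
mod.reviews = lambda: "original"

# Simulates an earlier `from fake_reviews import reviews`.
reviews = mod.reviews

# Monkey-patch the module attribute afterwards.
mod.reviews = lambda: "patched"

print(reviews())      # → original  (the earlier binding is untouched)
print(mod.reviews())  # → patched   (attribute lookup sees the new function)
```

So a patch has to be installed before other code takes a direct reference to the function, or callers have to go through the module attribute.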

Bigsy commented 6 months ago

Both mods don't work for me: the first doesn't change anything, and funnan's just loops forever and never returns.

MemeRunner commented 6 months ago

I'm having the same issue, and trying to use the workaround posted by @adilosa (thx!).

However, it's giving me a pagination token error.

TypeError: Formats._Reviews.build_body() missing 1 required positional argument: 'pagination_token'

Can someone please tell me what this should be set to? I've tried None, 0, 100, 200, and 2000 as values for 'pagination_token', but always get the same TypeError.

This is how I have the variables defined:

google_play_scraper.reviews._fetch_review_items = _fetch_review_items

# Set values for 'url', 'app_id', 'sort', 'count', 'filter_score_with', and 'pagination_token'
url = 'https://play.google.com/store/getreviews'
app_id = 'com.doctorondemand.android.patient'
sort = 1  # 1 for most relevant, 2 for newest
count = 20  # Number of reviews to fetch
filter_score_with = None
pagination_token = 100

# Example call to the function with provided values
_fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)

Greatly appreciate any input.
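A plausible cause of this TypeError (an assumption, inferred from the reviews.py versions posted elsewhere in this thread): newer versions of the library add a filter_device_with parameter to build_body, so the older five-argument call inside the monkey patch leaves the final slot unfilled, regardless of what pagination_token is set to. A stand-in function reproduces the exact message:

```python
def build_body(app_id, sort, count, filter_score_with,
               filter_device_with, pagination_token):
    """Stand-in for the newer six-parameter signature (assumption)."""
    return "ok"

try:
    # Five arguments, as in the older monkey patch:
    build_body("com.example.app", 1, 20, "null", None)
except TypeError as e:
    print(e)  # → build_body() missing 1 required positional argument: 'pagination_token'
```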

funnan commented 6 months ago

Here's my code (I fix the number of reviews I need and break the loop once that number is crossed):

from google_play_scraper import Sort, reviews
import pandas as pd
from datetime import datetime
from tqdm import tqdm
from google.colab import files  # needed for files.download below (Colab only)

# Fetch reviews using google_play_scraper; replace with your app id!
app_id = 'com.XXX'

# Fetch reviews
result = []
continuation_token = None
reviews_count = 25000  # change count here

with tqdm(total=reviews_count, position=0, leave=True) as pbar:
    while len(result) < reviews_count:
        new_result, continuation_token = reviews(
            app_id,
            continuation_token=continuation_token,
            lang='en',
            country='us',
            sort=Sort.NEWEST,
            filter_score_with=None,
            count=150
        )
        if not new_result:
            break
        result.extend(new_result)
        pbar.update(len(new_result))

# Create a DataFrame from the reviews & Download the file
df = pd.DataFrame(result)

today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
df.to_csv(f'reviews-{app_id}_{today}.csv', index=False)
print(len(df))
files.download(f'reviews-{app_id}_{today}.csv')

and in reviews.py I added the mod from my original comment.

Mayumiwandi commented 6 months ago

Here's my code (I fix the number of reviews I need and break the loop once that number is crossed): [quotes @funnan's script and reviews.py mod above]

I have tried your code, and it worked for me running on Colab.

ej-white commented 6 months ago

@funnan Thank you, that works!

ej-white commented 6 months ago

@JoMingyu Any chance we could get @funnan's fix added to the code and merged?

It works for me and others; I can once again scrape tens of thousands of reviews. Based on this discussion, this issue seems to be affecting many people! Cheers

HuDHuD0x1 commented 6 months ago

Here's my code (I fix the number of reviews I need and break the loop once that number is crossed): [quotes @funnan's script and reviews.py mod above]

Thanks bro, worked for me as well

myownhoney commented 6 months ago

Unfortunately, it is still not working for me. I suspect that Google has put some limits on crawling.

from google_play_scraper import Sort, reviews
import pandas as pd
from datetime import datetime
from tqdm import tqdm
import time

app_id = 'com.zhiliaoapp.musically'

result = []
continuation_token = None
reviews_count = 5000

with tqdm(total=reviews_count, position=0, leave=True) as pbar:
    while len(result) < reviews_count:
        new_result, continuation_token = reviews(
            app_id,
            continuation_token=continuation_token,
            lang='en',
            country='us',
            sort=Sort.NEWEST,
            filter_score_with=None,
            count=199
        )
        if not new_result:
            break
        result.extend(new_result)
        pbar.update(len(new_result))

df = pd.DataFrame(result)

today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
print(len(df))

The progress bar stops after displaying: 8%|▊ | 398/5000 [00:00<00:03, 1302.69it/s]. Sometimes it gets more data, like 995, but most of the time only 199 or 398 reviews are retrieved.

AndreasKarasenko commented 6 months ago

@myownhoney did you edit the reviews.py file using the fix from @funnan? I just tested it with v1.2.6 and the app id "com.ingka.ikea.app", and except for hanging at 10,950 reviews it works.

myownhoney commented 6 months ago

@myownhoney did you edit the reviews.py file using the fix from @funnan? I just tested it with v1.2.6 and this app id: "com.ingka.ikea.app", and apart from hanging at 10,950 reviews, it works.

it works now :) Cheers!

RamaDNA commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code please? It still doesn't work for me.

myownhoney commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code please? It still doesn't work for me.

My code is in the previous comment. Have you tried editing reviews.py? If you're working on Colab, I strongly suggest you run this code before running your scraping code:

import json
from time import sleep
from typing import List, Optional, Tuple

from google_play_scraper import Sort
from google_play_scraper.constants.element import ElementSpecs
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

MAX_COUNT_EACH_FETCH = 199

class _ContinuationToken:
    __slots__ = (
        "token",
        "lang",
        "country",
        "sort",
        "count",
        "filter_score_with",
        "filter_device_with",
    )

    def __init__(
        self, token, lang, country, sort, count, filter_score_with, filter_device_with
    ):
        self.token = token
        self.lang = lang
        self.country = country
        self.sort = sort
        self.count = count
        self.filter_score_with = filter_score_with
        self.filter_device_with = filter_device_with

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )
    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

def reviews(
    app_id: str,
    lang: str = "en",
    country: str = "us",
    sort: Sort = Sort.NEWEST,
    count: int = 100,
    filter_score_with: int = None,
    filter_device_with: int = None,
    continuation_token: _ContinuationToken = None,
) -> Tuple[List[dict], _ContinuationToken]:
    sort = sort.value

    if continuation_token is not None:
        token = continuation_token.token

        if token is None:
            return (
                [],
                continuation_token,
            )

        lang = continuation_token.lang
        country = continuation_token.country
        sort = continuation_token.sort
        count = continuation_token.count
        filter_score_with = continuation_token.filter_score_with
        filter_device_with = continuation_token.filter_device_with
    else:
        token = None

    url = Formats.Reviews.build(lang=lang, country=country)

    _fetch_count = count

    result = []

    while True:
        if _fetch_count == 0:
            break

        if _fetch_count > MAX_COUNT_EACH_FETCH:
            _fetch_count = MAX_COUNT_EACH_FETCH

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            #funnan MOD start
            token = continuation_token.token
            continue
            #MOD end

        for review in review_items:
            result.append(
                {
                    k: spec.extract_content(review)
                    for k, spec in ElementSpecs.Review.items()
                }
            )

        _fetch_count = count - len(result)

        if isinstance(token, list):
            token = None
            break

    return (
        result,
        _ContinuationToken(
            token, lang, country, sort, count, filter_score_with, filter_device_with
        ),
    )

def reviews_all(app_id: str, sleep_milliseconds: int = 0, **kwargs) -> list:
    kwargs.pop("count", None)
    kwargs.pop("continuation_token", None)

    continuation_token = None

    result = []

    while True:
        _result, continuation_token = reviews(
            app_id,
            count=MAX_COUNT_EACH_FETCH,
            continuation_token=continuation_token,
            **kwargs
        )

        result += _result

        if continuation_token.token is None:
            break

        if sleep_milliseconds:
            sleep(sleep_milliseconds / 1000)

    return result


HuDHuD0x1 commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code please? It still doesn't work for me.

My code is in the previous comment. Have you tried editing reviews.py? If you're working on Colab, I strongly suggest you run this code before running your scraping code: [quotes @myownhoney's full reviews.py replacement above]

If we run this code before our scraping script, is it still necessary to edit reviews.py first, or is running this code enough? The @funnan patch worked for me in Jupyter.

RamaDNA commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code please? It still doesn't work for me.

My code is in the previous comment. Have you tried editing reviews.py? If you're working on Colab, I strongly suggest you run this code before running your scraping code: [quotes @myownhoney's full reviews.py replacement above]

So after running that code, I should then run the code below, right? Please help.

from google_play_scraper import Sort, reviews
import pandas as pd
from datetime import datetime
from tqdm import tqdm
import time

app_id = 'com.zhiliaoapp.musically'

result = []
continuation_token = None
reviews_count = 5000

with tqdm(total=reviews_count, position=0, leave=True) as pbar:
    while len(result) < reviews_count:
        new_result, continuation_token = reviews(
            app_id,
            continuation_token=continuation_token,
            lang='en',
            country='us',
            sort=Sort.NEWEST,
            filter_score_with=None,
            count=199
        )
        if not new_result:
            break
        result.extend(new_result)
        pbar.update(len(new_result))

df = pd.DataFrame(result)

today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
print(len(df))

myownhoney commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code please. it is still did not work for me as well

My code is in the previous comment. Have you tried editing reviews.py? If you're working on colab, I strongly suggest you run this code before running your scrape code `

import json
from time import sleep
from typing import List, Optional, Tuple

from google_play_scraper import Sort
from google_play_scraper.constants.element import ElementSpecs
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

MAX_COUNT_EACH_FETCH = 199

class _ContinuationToken:
    __slots__ = (
        "token",
        "lang",
        "country",
        "sort",
        "count",
        "filter_score_with",
        "filter_device_with",
    )

    def __init__(
        self, token, lang, country, sort, count, filter_score_with, filter_device_with
    ):
        self.token = token
        self.lang = lang
        self.country = country
        self.sort = sort
        self.count = count
        self.filter_score_with = filter_score_with
        self.filter_device_with = filter_device_with

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )
    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

def reviews(
    app_id: str,
    lang: str = "en",
    country: str = "us",
    sort: Sort = Sort.NEWEST,
    count: int = 100,
    filter_score_with: int = None,
    filter_device_with: int = None,
    continuation_token: _ContinuationToken = None,
) -> Tuple[List[dict], _ContinuationToken]:
    sort = sort.value

    if continuation_token is not None:
        token = continuation_token.token

        if token is None:
            return (
                [],
                continuation_token,
            )

        lang = continuation_token.lang
        country = continuation_token.country
        sort = continuation_token.sort
        count = continuation_token.count
        filter_score_with = continuation_token.filter_score_with
        filter_device_with = continuation_token.filter_device_with
    else:
        token = None

    url = Formats.Reviews.build(lang=lang, country=country)

    _fetch_count = count

    result = []

    while True:
        if _fetch_count == 0:
            break

        if _fetch_count > MAX_COUNT_EACH_FETCH:
            _fetch_count = MAX_COUNT_EACH_FETCH

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            # funnan MOD start: retry the same page; guard against a None
            # continuation_token on the very first request
            token = continuation_token.token if continuation_token else None
            continue
            # MOD end

        for review in review_items:
            result.append(
                {
                    k: spec.extract_content(review)
                    for k, spec in ElementSpecs.Review.items()
                }
            )

        _fetch_count = count - len(result)

        if isinstance(token, list):
            token = None
            break

    return (
        result,
        _ContinuationToken(
            token, lang, country, sort, count, filter_score_with, filter_device_with
        ),
    )

def reviews_all(app_id: str, sleep_milliseconds: int = 0, **kwargs) -> list:
    kwargs.pop("count", None)
    kwargs.pop("continuation_token", None)

    continuation_token = None

    result = []

    while True:
        _result, continuation_token = reviews(
            app_id,
            count=MAX_COUNT_EACH_FETCH,
            continuation_token=continuation_token,
            **kwargs
        )

        result += _result

        if continuation_token.token is None:
            break

        if sleep_milliseconds:
            sleep(sleep_milliseconds / 1000)

    return result

`

If we run this code before our script, is it still necessary to edit reviews.py first, or is running this code enough? The @funnan patch worked for me in Jupyter.

If you run this code, you don't need to edit reviews.py; in fact, this code is the edited reviews.py.

myownhoney commented 6 months ago

@AndreasKarasenko @myownhoney can you show me your code, please? It still did not work for me.

My code is in the previous comment. Have you tried editing reviews.py? If you're working on Colab, I strongly suggest you run this code before running your scraping code:

import json
from time import sleep
from typing import List, Optional, Tuple

from google_play_scraper import Sort
from google_play_scraper.constants.element import ElementSpecs
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

MAX_COUNT_EACH_FETCH = 199

class _ContinuationToken:
    __slots__ = (
        "token",
        "lang",
        "country",
        "sort",
        "count",
        "filter_score_with",
        "filter_device_with",
    )

    def __init__(
        self, token, lang, country, sort, count, filter_score_with, filter_device_with
    ):
        self.token = token
        self.lang = lang
        self.country = country
        self.sort = sort
        self.count = count
        self.filter_score_with = filter_score_with
        self.filter_device_with = filter_device_with

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )
    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

def reviews(
    app_id: str,
    lang: str = "en",
    country: str = "us",
    sort: Sort = Sort.NEWEST,
    count: int = 100,
    filter_score_with: int = None,
    filter_device_with: int = None,
    continuation_token: _ContinuationToken = None,
) -> Tuple[List[dict], _ContinuationToken]:
    sort = sort.value

    if continuation_token is not None:
        token = continuation_token.token

        if token is None:
            return (
                [],
                continuation_token,
            )

        lang = continuation_token.lang
        country = continuation_token.country
        sort = continuation_token.sort
        count = continuation_token.count
        filter_score_with = continuation_token.filter_score_with
        filter_device_with = continuation_token.filter_device_with
    else:
        token = None

    url = Formats.Reviews.build(lang=lang, country=country)

    _fetch_count = count

    result = []

    while True:
        if _fetch_count == 0:
            break

        if _fetch_count > MAX_COUNT_EACH_FETCH:
            _fetch_count = MAX_COUNT_EACH_FETCH

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            # funnan MOD start: retry the same page; guard against a None
            # continuation_token on the very first request
            token = continuation_token.token if continuation_token else None
            continue
            # MOD end

        for review in review_items:
            result.append(
                {
                    k: spec.extract_content(review)
                    for k, spec in ElementSpecs.Review.items()
                }
            )

        _fetch_count = count - len(result)

        if isinstance(token, list):
            token = None
            break

    return (
        result,
        _ContinuationToken(
            token, lang, country, sort, count, filter_score_with, filter_device_with
        ),
    )

def reviews_all(app_id: str, sleep_milliseconds: int = 0, **kwargs) -> list:
    kwargs.pop("count", None)
    kwargs.pop("continuation_token", None)

    continuation_token = None

    result = []

    while True:
        _result, continuation_token = reviews(
            app_id,
            count=MAX_COUNT_EACH_FETCH,
            continuation_token=continuation_token,
            **kwargs
        )

        result += _result

        if continuation_token.token is None:
            break

        if sleep_milliseconds:
            sleep(sleep_milliseconds / 1000)

    return result


So after running that code, I should run this code next, right? Help me, please.

    from google_play_scraper import Sort, reviews
    import pandas as pd
    from datetime import datetime
    from tqdm import tqdm
    import time

    app_id = 'com.zhiliaoapp.musically'

    result = []
    continuation_token = None
    reviews_count = 5000

    with tqdm(total=reviews_count, position=0, leave=True) as pbar:
        while len(result) < reviews_count:
            new_result, continuation_token = reviews(
                app_id,
                continuation_token=continuation_token,
                lang='en',
                country='us',
                sort=Sort.NEWEST,
                filter_score_with=None,
                count=199
            )
            if not new_result:
                break
            result.extend(new_result)
            pbar.update(len(new_result))

    df = pd.DataFrame(result)

    today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
    print(len(df))

yeah, just run the first one, then the second one

gianlucascoccia commented 6 months ago

@funnan's solution does not work for me; retrieval of reviews gets stuck after a while for apps with tens of thousands of reviews.

RamaDNA commented 6 months ago

yeah, just run the first one, then the second one

Thanks for sharing; it does not work for me.

gianlucascoccia commented 6 months ago

I dug into the code a bit, starting from @adilosa's solution. I found two issues that prevented his solution from working:

1) Now, when the API fails silently, it returns a "play.gateway.proto.PlayGatewayError" rather than an "error.PlayDataError".

2) Now, _fetch_review_items needs a filter_device_with parameter too.

After applying the required changes, this is the new patch to be done in reviews.py:

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )

    # PATCH START
    if ("error.PlayDataError" in dom) or (".PlayGatewayError" in dom): # <--- Keeping both for robustness
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, filter_device_with, pagination_token)
    # PATCH END

    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

With these changes, it appears to be working.
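One caveat with the patch above: it retries via unbounded recursion, so a long run of error payloads can exhaust the stack. A bounded-retry sketch (a hypothetical helper, not part of the library; `fetch` stands in for a closure that performs the `post` call and returns the raw response text):

```python
import time

def fetch_with_retry(fetch, max_retries=5, backoff_seconds=1.0):
    """Call fetch() until the response carries no Play error marker,
    retrying at most max_retries times with a linear backoff."""
    for attempt in range(max_retries + 1):
        dom = fetch()
        # Same markers the patch above checks for
        if "error.PlayDataError" not in dom and ".PlayGatewayError" not in dom:
            return dom
        if attempt < max_retries:
            time.sleep(backoff_seconds * (attempt + 1))
    raise RuntimeError("Play API kept returning an error payload")
```

An iterative loop like this keeps the retry count explicit and fails loudly instead of recursing forever.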

sheldon0711 commented 6 months ago

I dug into the code a bit, starting from @adilosa's solution. I found two issues that prevented his solution from working:

  1. Now, when the API fails silently, it returns a "play.gateway.proto.PlayGatewayError" rather than a "error.PlayDataError".
  2. Now, the _fetch_review_items needs a filter_device_with parameter too

After applying the required changes, this is the new patch to be done in reviews.py:

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )

    # PATCH START
    if ("error.PlayDataError" in dom) or (".PlayGatewayError" in dom): # <--- Keeping both for robustness
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, filter_device_with, pagination_token)
    # PATCH END

    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

With these changes, it appears to be working.

Thanks for the fix! I still have trouble getting it to work. Since filter_device_with is needed, do we need to add it everywhere _fetch_review_items is called as well?

I also tried your patch without the filter_device_with parameter, and somehow it works, but I'm wondering whether that causes any problems.

iniandrew commented 6 months ago

Thank you so much, @adilosa; your code worked well. For anyone experiencing the same problem, here's what I did:

Add this code above the match variable in the _fetch_review_items function:

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

before:

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )

    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-1][-1]

after:

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-1][-1]

Then I ran the code from the documentation:

from google_play_scraper import Sort, reviews

result, continuation_token = reviews(
    'app-id', # replace this with the application id you want to scrap
    lang='id', # defaults to 'en'
    country='id', # defaults to 'us'
    sort=Sort.MOST_RELEVANT, # defaults to Sort.NEWEST
    count=20000, # defaults to 100
    filter_score_with=None # defaults to None(means all score)
)

# If you pass `continuation_token` as an argument to the reviews function at this point,
# it will crawl the items after 3 review items.

result, _ = reviews(
    'app-id', # replace this with the application id you want to scrap
    continuation_token=continuation_token # defaults to None(load from the beginning)
)

result: (screenshot omitted)

gianlucascoccia commented 6 months ago

Thanks for the fix! I still have trouble getting it to work. Since filter_device_with is needed, do we need to add it everywhere _fetch_review_items is called as well?

And I tried your patch without the filter_device_with parameter, and somehow it works, but I'm wondering whether that causes any problems.

In my experience it is not necessary to add the parameter to other parts of the code, but I am only using the reviews_all method

Also, it seems that different people are getting different error messages (perhaps depending on their location?), so it really depends on what behaviour the program has on your side

DanielGusman commented 6 months ago

Hi everybody. I've been studying Python for a week and encountered this problem too. I used the last proposed method, but unfortunately I can't scrape more than 600 reviews. Please tell me what I'm doing wrong. I also added a small piece of code to export the data to Excel.

    from typing import Optional

    def _fetch_review_items(
        url: str,
        app_id: str,
        sort: int,
        count: int,
        filter_score_with: Optional[int],
        pagination_token: Optional[str],
    ):
        dom = post(
            url,
            Formats.Reviews.build_body(
                app_id,
                sort,
                count,
                "null" if filter_score_with is None else filter_score_with,
                pagination_token,
            ),
            {"content-type": "application/x-www-form-urlencoded"},
        )

        # MOD error handling
        if "error.PlayDataError" in dom:
            return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
        # ENDMOD

        match = json.loads(Regex.REVIEWS.findall(dom)[0])

        return json.loads(match[0][2])[0], json.loads(match[0][2])[-1][-1]

    import pandas as pd
    from google_play_scraper import Sort, reviews

    result, continuation_token = reviews(
        'eu.livesport.FlashScore_com',  # replace this with the application id you want to scrape
        lang='en',  # defaults to 'en'
        country='US',  # defaults to 'us'
        sort=Sort.MOST_RELEVANT,  # defaults to Sort.NEWEST
        count=20000,  # defaults to 100
        filter_score_with=None  # defaults to None (means all scores)
    )

    df = pd.json_normalize(result)
    df.head()
    df = pd.DataFrame(result)
    df.to_excel('FS_en.xlsx')

asornbor commented 5 months ago

Hi everybody. I've been studying Python for a week and encountered this problem too. I used the last proposed method, but unfortunately I can't scrape more than 600 reviews. Please tell me what I'm doing wrong. I also added a small piece of code to export the data to Excel.

    from typing import Optional

    def _fetch_review_items(
        url: str,
        app_id: str,
        sort: int,
        count: int,
        filter_score_with: Optional[int],
        pagination_token: Optional[str],
    ):
        dom = post(
            url,
            Formats.Reviews.build_body(
                app_id,
                sort,
                count,
                "null" if filter_score_with is None else filter_score_with,
                pagination_token,
            ),
            {"content-type": "application/x-www-form-urlencoded"},
        )

        # MOD error handling
        if "error.PlayDataError" in dom:
            return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
        # ENDMOD

        match = json.loads(Regex.REVIEWS.findall(dom)[0])

        return json.loads(match[0][2])[0], json.loads(match[0][2])[-1][-1]

    import pandas as pd
    from google_play_scraper import Sort, reviews

    result, continuation_token = reviews(
        'eu.livesport.FlashScore_com',  # replace this with the application id you want to scrape
        lang='en',  # defaults to 'en'
        country='US',  # defaults to 'us'
        sort=Sort.MOST_RELEVANT,  # defaults to Sort.NEWEST
        count=20000,  # defaults to 100
        filter_score_with=None  # defaults to None (means all scores)
    )

    df = pd.json_normalize(result)
    df.head()
    df = pd.DataFrame(result)
    df.to_excel('FS_en.xlsx')

Hey, I was having a similar issue yesterday. You have to make sure you don't run `from google_play_scraper import reviews` after applying the fix to _fetch_review_items, or it will revert to the broken form. Import at the beginning, then run the fix, then run the call, and that should work!

DanielGusman commented 5 months ago

Thank you very much for the advice!

DanielGusman commented 5 months ago

I tried to collect reviews all day today, to no avail. I tried every method in the thread without success. The last method runs, but for some reason it returns an empty Excel table. Please tell me what I did wrong.

1) I updated the file called reviews.py (using PyCharm):

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

2) Then I pasted the code:

    from google_play_scraper import Sort, reviews

    result, continuation_token = reviews(
        'app-id',  # replace this with the application id you want to scrape
        lang='id',  # defaults to 'en'
        country='id',  # defaults to 'us'
        sort=Sort.MOST_RELEVANT,  # defaults to Sort.NEWEST
        count=20000,  # defaults to 100
        filter_score_with=None  # defaults to None (means all scores)
    )

    # If you pass continuation_token as an argument to the reviews function at this point,
    # it will crawl the items after 3 review items.

    result, _ = reviews(
        'app-id',  # replace this with the application id you want to scrape
        continuation_token=continuation_token  # defaults to None (load from the beginning)
    )

I would be grateful for any advice.

asornbor commented 5 months ago

I tried to collect reviews all day today, to no avail. I tried every method in the thread without success. The last method runs, but for some reason it returns an empty Excel table. Please tell me what I did wrong.

1. I updated the file called reviews.py (using PyCharm):

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

2. Then I pasted the code:

    from google_play_scraper import Sort, reviews

    result, continuation_token = reviews(
        'app-id',  # replace this with the application id you want to scrape
        lang='id',  # defaults to 'en'
        country='id',  # defaults to 'us'
        sort=Sort.MOST_RELEVANT,  # defaults to Sort.NEWEST
        count=20000,  # defaults to 100
        filter_score_with=None  # defaults to None (means all scores)
    )

    # If you pass continuation_token as an argument to the reviews function at this point,
    # it will crawl the items after 3 review items.

    result, _ = reviews(
        'app-id',  # replace this with the application id you want to scrape
        continuation_token=continuation_token  # defaults to None (load from the beginning)
    )

I would be grateful for any advice.

Fill in your app_id and try running this:

from google_play_scraper import Sort
from google_play_scraper.constants.element import ElementSpecs
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

import pandas as pd
from datetime import datetime
from tqdm import tqdm
import time
app_id = ''

MAX_COUNT_EACH_FETCH = 199

class _ContinuationToken:
    __slots__ = (
        "token",
        "lang",
        "country",
        "sort",
        "count",
        "filter_score_with",
        "filter_device_with",
    )

    def __init__(
        self, token, lang, country, sort, count, filter_score_with, filter_device_with
    ):
        self.token = token
        self.lang = lang
        self.country = country
        self.sort = sort
        self.count = count
        self.filter_score_with = filter_score_with
        self.filter_device_with = filter_device_with

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )
    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

def reviews(
    app_id: str,
    lang: str = "en",
    country: str = "us",
    sort: Sort = Sort.MOST_RELEVANT,
    count: int = 100,
    filter_score_with: int = None,
    filter_device_with: int = None,
    continuation_token: _ContinuationToken = None,
) -> Tuple[List[dict], _ContinuationToken]:
    sort = sort.value

    if continuation_token is not None:
        token = continuation_token.token

        if token is None:
            return (
                [],
                continuation_token,
            )

        lang = continuation_token.lang
        country = continuation_token.country
        sort = continuation_token.sort
        count = continuation_token.count
        filter_score_with = continuation_token.filter_score_with
        filter_device_with = continuation_token.filter_device_with
    else:
        token = None

    url = Formats.Reviews.build(lang=lang, country=country)

    _fetch_count = count

    result = []

    while True:
        if _fetch_count == 0:
            break

        if _fetch_count > MAX_COUNT_EACH_FETCH:
            _fetch_count = MAX_COUNT_EACH_FETCH

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            # funnan MOD start: retry the same page; guard against a None
            # continuation_token on the very first request
            token = continuation_token.token if continuation_token else None
            continue
            # MOD end

        for review in review_items:
            result.append(
                {
                    k: spec.extract_content(review)
                    for k, spec in ElementSpecs.Review.items()
                }
            )

        _fetch_count = count - len(result)

        if isinstance(token, list):
            token = None
            break

    return (
        result,
        _ContinuationToken(
            token, lang, country, sort, count, filter_score_with, filter_device_with
        ),
    )

def reviews_all(app_id: str, sleep_milliseconds: int = 0, **kwargs) -> list:
    kwargs.pop("count", None)
    kwargs.pop("continuation_token", None)

    continuation_token = None

    result = []

    while True:
        _result, continuation_token = reviews(
            app_id,
            count=MAX_COUNT_EACH_FETCH,
            continuation_token=continuation_token,
            **kwargs
        )

        result += _result

        if continuation_token.token is None:
            break

        if sleep_milliseconds:
            sleep(sleep_milliseconds / 1000)

    return result
result = []
continuation_token = None
reviews_count = 20000

with tqdm(total=reviews_count, position=0, leave=True) as pbar:
    while len(result) < reviews_count:
        new_result, continuation_token = reviews(
            app_id,
            continuation_token=continuation_token,
            lang='en',
            country='us',
            sort=Sort.MOST_RELEVANT,
            filter_score_with=None,
            count=199
        )
        if not new_result:
            break
        result.extend(new_result)
        pbar.update(len(new_result))
df = pd.DataFrame(result)

today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
print(len(df))
petskratt commented 5 months ago

When debugging a Node.js sister project, I was able to fix a similar problem by ensuring cookie persistence from the first request. For testing purposes you can grab the NID cookie from your browser and send it with each script request (e.g. where headers are added, like {"content-type": "application/x-www-form-urlencoded", "cookie": "NID=[cookie value from browser];"}).
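In Python, that suggestion might look like the sketch below, using only the standard library (`build_headers` and `post_with_cookie` are hypothetical helpers, not part of this library, and the NID value is a placeholder you would copy from your browser's dev tools):

```python
from urllib import request

NID_COOKIE = "PASTE_VALUE_FROM_BROWSER"  # placeholder: copy the NID cookie from your browser

def build_headers(nid_cookie: str) -> dict:
    # The header the scraper already sends, plus the NID cookie so Google
    # treats each request as coming from an established session.
    return {
        "content-type": "application/x-www-form-urlencoded",
        "cookie": f"NID={nid_cookie};",
    }

def post_with_cookie(url: str, body: bytes) -> str:
    # POST the batchexecute body with the cookie attached.
    req = request.Request(url, data=body, headers=build_headers(NID_COOKIE), method="POST")
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```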

DKZPT commented 5 months ago

I have the same issue; I can only fetch 398 reviews using this simple code:

from google_play_scraper import app, Sort, reviews_all
import pandas as pd

def scrape_google_play_reviews(app_id, sort_by=Sort.NEWEST, count=2000):
    # Fetch reviews
    result = reviews_all(
        app_id,
        sleep_milliseconds=1000,  # Don't use sleep if you're not making many requests
        lang='pt',  # Language in which you want to fetch reviews
        country='pt',  # Country to which the reviews are targeted
        sort=sort_by,  # Sorting method
        count=count  # Number of reviews to fetch
    )

    # Convert to DataFrame
    reviews_df = pd.DataFrame(result)

    # Save to CSV
    reviews_df.to_csv(f'{app_id}_reviews.csv', index=False)

    print(f"Saved {len(reviews_df)} reviews for app ID {app_id} to CSV.")

# Example usage
app_id_example = ''  # Replace with the app ID you're interested in
scrape_google_play_reviews(app_id_example, sort_by=Sort.NEWEST, count=2000)

If anyone finds a fix, let us know.

DanielGusman commented 5 months ago

I tried to collect reviews all day today, to no avail. I tried every method in the thread without success. The last method runs, but for some reason it returns an empty Excel table. Please tell me what I did wrong.

1. I updated the file called reviews.py (using PyCharm):

    # MOD error handling
    if "error.PlayDataError" in dom:
        return _fetch_review_items(url, app_id, sort, count, filter_score_with, pagination_token)
    # ENDMOD

2. Then I pasted the code:

    from google_play_scraper import Sort, reviews

    result, continuation_token = reviews(
        'app-id',  # replace this with the application id you want to scrape
        lang='id',  # defaults to 'en'
        country='id',  # defaults to 'us'
        sort=Sort.MOST_RELEVANT,  # defaults to Sort.NEWEST
        count=20000,  # defaults to 100
        filter_score_with=None  # defaults to None (means all scores)
    )

    # If you pass continuation_token as an argument to the reviews function at this point,
    # it will crawl the items after 3 review items.

    result, _ = reviews(
        'app-id',  # replace this with the application id you want to scrape
        continuation_token=continuation_token  # defaults to None (load from the beginning)
    )

I would be grateful for any advice.

Fill in your app_id and try running this:

from google_play_scraper import Sort
from google_play_scraper.constants.element import ElementSpecs
from google_play_scraper.constants.regex import Regex
from google_play_scraper.constants.request import Formats
from google_play_scraper.utils.request import post

import pandas as pd
from datetime import datetime
from tqdm import tqdm
import time
app_id = ''

MAX_COUNT_EACH_FETCH = 199

class _ContinuationToken:
    __slots__ = (
        "token",
        "lang",
        "country",
        "sort",
        "count",
        "filter_score_with",
        "filter_device_with",
    )

    def __init__(
        self, token, lang, country, sort, count, filter_score_with, filter_device_with
    ):
        self.token = token
        self.lang = lang
        self.country = country
        self.sort = sort
        self.count = count
        self.filter_score_with = filter_score_with
        self.filter_device_with = filter_device_with

def _fetch_review_items(
    url: str,
    app_id: str,
    sort: int,
    count: int,
    filter_score_with: Optional[int],
    filter_device_with: Optional[int],
    pagination_token: Optional[str],
):
    dom = post(
        url,
        Formats.Reviews.build_body(
            app_id,
            sort,
            count,
            "null" if filter_score_with is None else filter_score_with,
            "null" if filter_device_with is None else filter_device_with,
            pagination_token,
        ),
        {"content-type": "application/x-www-form-urlencoded"},
    )
    match = json.loads(Regex.REVIEWS.findall(dom)[0])

    return json.loads(match[0][2])[0], json.loads(match[0][2])[-2][-1]

def reviews(
    app_id: str,
    lang: str = "en",
    country: str = "us",
    sort: Sort = Sort.MOST_RELEVANT,
    count: int = 100,
    filter_score_with: int = None,
    filter_device_with: int = None,
    continuation_token: _ContinuationToken = None,
) -> Tuple[List[dict], _ContinuationToken]:
    sort = sort.value

    if continuation_token is not None:
        token = continuation_token.token

        if token is None:
            return (
                [],
                continuation_token,
            )

        lang = continuation_token.lang
        country = continuation_token.country
        sort = continuation_token.sort
        count = continuation_token.count
        filter_score_with = continuation_token.filter_score_with
        filter_device_with = continuation_token.filter_device_with
    else:
        token = None

    url = Formats.Reviews.build(lang=lang, country=country)

    _fetch_count = count

    result = []

    while True:
        if _fetch_count == 0:
            break

        if _fetch_count > MAX_COUNT_EACH_FETCH:
            _fetch_count = MAX_COUNT_EACH_FETCH

        try:
            review_items, token = _fetch_review_items(
                url,
                app_id,
                sort,
                _fetch_count,
                filter_score_with,
                filter_device_with,
                token,
            )
        except (TypeError, IndexError):
            # funnan MOD start: retry the same page; guard against a None
            # continuation_token on the very first request
            token = continuation_token.token if continuation_token else None
            continue
            # MOD end

        for review in review_items:
            result.append(
                {
                    k: spec.extract_content(review)
                    for k, spec in ElementSpecs.Review.items()
                }
            )

        _fetch_count = count - len(result)

        if isinstance(token, list):
            token = None
            break

    return (
        result,
        _ContinuationToken(
            token, lang, country, sort, count, filter_score_with, filter_device_with
        ),
    )

def reviews_all(app_id: str, sleep_milliseconds: int = 0, **kwargs) -> list:
    kwargs.pop("count", None)
    kwargs.pop("continuation_token", None)

    continuation_token = None

    result = []

    while True:
        _result, continuation_token = reviews(
            app_id,
            count=MAX_COUNT_EACH_FETCH,
            continuation_token=continuation_token,
            **kwargs
        )

        result += _result

        if continuation_token.token is None:
            break

        if sleep_milliseconds:
            sleep(sleep_milliseconds / 1000)

    return result
result = []
continuation_token = None
reviews_count = 20000

with tqdm(total=reviews_count, position=0, leave=True) as pbar:
    while len(result) < reviews_count:
        new_result, continuation_token = reviews(
            app_id,
            continuation_token=continuation_token,
            lang='en',
            country='us',
            sort=Sort.MOST_RELEVANT,
            filter_score_with=None,
            count=199
        )
        if not new_result:
            break
        result.extend(new_result)
        pbar.update(len(new_result))
df = pd.DataFrame(result)

today = str(datetime.now().strftime("%m-%d-%Y_%H%M%S"))
print(len(df))

Thanks for the help. I'll try to run the code today.

JoMingyu commented 5 months ago

Hello guys. Sorry for not paying attention to the library. I've read all the discussions, and I've confirmed that the small modification proposed by @funnan works in most cases.

Unfortunately, Google Play runs various experiments in various countries, including A/B tests of the UI and data structure. So although most cases can be solved with the method proposed by @funnan, some calls, like the following, end up in an infinite loop.

reviews_all(
    "com.poleposition.AOSheroking",
    sort=Sort.MOST_RELEVANT,
    country="kr",
    lang="ko",
)

So the suggestions from @funnan and everyone else are really good, but they can cause infinite-loop problems in edge cases, so I need to research this more.

To be clear, this library is unofficial, and Google Play does not allow crawling in its robots.txt. In hindsight, it might have been better not to support a complex feature like reviews_all in the first place.

I think it would be best for everyone to write their own reviews_all function for their own situation. I'm sorry I couldn't bring better news.

adilosa commented 5 months ago

We also observed that the API response from Google randomly didn't include the token on some calls, meaning the loop would end as if it were the last page. We simply retried the request a few times and usually got a continuation token eventually!

petskratt commented 5 months ago

@adilosa @JoMingyu please try capturing the NID cookie from the first response and sending it with all subsequent paging requests; with retries alone you are just hoping to hit the same worker behind the load balancer. See my fix and test in the Node.js project: https://github.com/facundoolano/google-play-scraper/pull/677
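For reference, a standard-library sketch of this cookie idea. This library builds its own requests internally, so wiring the jar in would mean patching its request helper; `first_page_url` below is a placeholder, not a real endpoint:

```python
import urllib.request
from http.cookiejar import CookieJar

# One jar + opener shared across the whole scraping session: the NID
# cookie set by the first response is stored in the jar and re-sent
# automatically on every later paging request to the same domain.
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# First page (network call, shown commented out):
# with opener.open(first_page_url) as resp:
#     body = resp.read()
# After that call, any NID cookie Google set is in `jar` and will be
# attached to subsequent opener.open(...) calls.
```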

gianlucascoccia commented 5 months ago

> We also observed that the API response from Google randomly didn't include the token on some calls, meaning the loop would end as if it were the last page. We simply retried the request a few times and usually got a continuation token eventually!

I have also seen different error messages reported by other users; I believe Google's API is currently not working 100% correctly.

JoMingyu commented 5 months ago

> @adilosa @JoMingyu please try capturing the NID cookie from the first response and sending it with all subsequent paging requests; with retries alone you are just hoping to hit the same worker behind the load balancer. See my fix and test in the Node.js project: facundoolano/google-play-scraper#677

I'll give it a try. That makes sense. I'm sorry, but I don't have a lot of time to spend on it. However, I'll do my best to work on it.

DKZPT commented 5 months ago

I'm no expert on this, but I've found something weird.

If I change the "country" and "language" I get more reviews; maybe something changed on Google's side?

With the code below, I get more reviews than when I fix the country and language to a single combination.

from google_play_scraper import Sort, reviews
import pandas as pd
from datetime import datetime
from tqdm import tqdm
import time

# App ID to fetch reviews for
app_id = 'xxx.com'

# Lists of languages and countries to fetch reviews in
languages = ['en', 'pt']
countries = ['us', 'pt', 'br']

# Initialize results list
result = []

# Number of reviews to attempt to fetch per language-country combination
reviews_count_per_combination = 10000  

for country in countries:
    for lang in languages:
        continuation_token = None
        fetched_reviews = 0
        with tqdm(total=reviews_count_per_combination, desc=f"Fetching reviews in {lang}-{country}", position=0, leave=True) as pbar:
            while fetched_reviews < reviews_count_per_combination:
                new_result, continuation_token = reviews(
                    app_id,
                    continuation_token=continuation_token,
                    lang=lang,
                    country=country,
                    sort=Sort.NEWEST,
                    filter_score_with=None,
                    count=min(200, reviews_count_per_combination - fetched_reviews)  # raising this to 400-500 returned more than 200 per request
                )
                if not new_result:
                    break
                result.extend(new_result)
                fetched_reviews += len(new_result)
                pbar.update(len(new_result))

# Convert aggregated results to DataFrame
df = pd.DataFrame(result)

# Save the DataFrame to a CSV file
today = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = f'reviews_{app_id}_{today}.csv'
df.to_csv(filename, index=False)

print(f"Saved {len(df)} reviews from multiple languages and countries to {filename}")

Any suggestions?
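One caveat with the multi-locale loop above: the same review can come back under more than one country/language combination, so the raw count overstates unique reviews. Each review dict from this library includes a `reviewId` field; a minimal sketch that deduplicates on it before saving (the helper name is my own):

```python
import pandas as pd


def dedupe_reviews(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the first occurrence of each reviewId and reindex."""
    return df.drop_duplicates(subset="reviewId").reset_index(drop=True)
```

Calling `df = dedupe_reviews(df)` just before `df.to_csv(...)` keeps exactly one copy of each review.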