Closed louis030195 closed 1 year ago
Also running into this issue when trying to use `stream()` on a collection. My sense is that there is some kind of version conflict between the latest version of `google-cloud-firestore` and `google-api-core`. In other words, `google-cloud-firestore` is calling some deprecated or removed behavior on `_UnaryStreamMultiCallable`.
Any advice on how to fix or avoid this issue on failed calls? This also seems to be popping up on Reddit.
EDIT:
I've been able to avoid this issue by passing in my own `google.api_core.retry.Retry` object (imported via `from google.api_core import retry`):
`db.collection(collection).stream(retry=retry.Retry(deadline=60))`
See the example provided here.
I have this issue as well. I hope a solid fix comes along soon
The previous fix with the retry object doesn't work in my case. Was anyone able to fix this issue?
I don't think so; the devs have been slow to reply to this open issue. I found the bug through fuzzing with the Atheris fuzzer.
I found an easy fix before the devs give us an official reply. Wrap the call to `db.collection("COLLECTION_NAME").stream()` in a try/except block that catches `AttributeError`. In the except block, call `db.collection("COLLECTION_NAME").stream()` again. Wrap the whole thing in a function and you've got a recursive call to stream every time it fails. If successful, the function returns the query snapshot. You can add a maximum number of retries before reporting the failure.
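A minimal sketch of that workaround, written as a loop with a retry cap. The names `stream_with_retries`, `fetch_stream`, and `max_attempts` are illustrative, not from the library; in practice `fetch_stream` would be something like `lambda: list(db.collection("COLLECTION_NAME").stream())`.

```python
def stream_with_retries(fetch_stream, max_attempts=3):
    """Re-invoke a stream() call whenever it raises AttributeError.

    fetch_stream: zero-argument callable wrapping the Firestore call,
    e.g. lambda: list(db.collection("COLLECTION_NAME").stream()).
    max_attempts: give up and re-raise after this many failures.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_stream()
        except AttributeError:
            if attempt == max_attempts:
                raise  # report the failure after the last attempt
```

Note that `stream()` returns a generator, so the error can surface mid-iteration; materializing it with `list()` inside the callable ensures the exception is raised where the try/except can catch it.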
Just started running into this issue myself too. In my case it is from simply streaming through a large collection of documents; no writes/deletes/etc. happening. This is on an Ubuntu 18 LTS system with Python 3.10.8, google-api-core 2.11.0, and google-cloud-firestore 2.7.2
I have the same issue. Some of my production servers suddenly started throwing this exception. Hope this will get fixed soon
The current requirements for this library and google-api-core[grpc]
are:
"google-api-core[grpc] >= 1.34.0, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,!=2.9.*,!=2.10.*"
For anyone still experiencing this issue, can you confirm the versions that you're using? https://github.com/googleapis/python-api-core/releases/tag/v2.11.0 contains some fixes that likely help with this issue.
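For anyone wanting to report versions without pasting a full `pip freeze`, a small stdlib helper (Python 3.8+) can query just the relevant distributions; the function name here is illustrative.

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# Example usage:
# for name in ("google-api-core", "google-cloud-firestore", "grpcio"):
#     print(name, installed_version(name))
```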
Hi. I hadn't seen this error show up in a while so I assumed maybe it was fixed, but I just added some code that is streaming through a large set of documents and it appears I am running into it again. The following is the stack trace from my app-engine app where it is happening, and also the list of pip packages from that same app-engine setup. This includes google-api-core 2.15.0 at this point, so it doesn't seem that 2.11.0 fixed it.
Is anyone else still seeing this? Should this be reopened?
Package Version
------------------------------ ---------------
app-store-server-library 1.0.0
asn1 2.7.0
attrs 23.2.0
bcrypt 4.1.2
blinker 1.7.0
boto3 1.34.20
botocore 1.34.21
CacheControl 0.13.1
cachetools 5.3.2
cattrs 23.1.2
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
cryptography 41.0.7
ecdsa 0.18.0
enum-compat 0.0.3
Flask 3.0.0
google-api-core 2.15.0
google-api-python-client 2.114.0
google-auth 2.26.2
google-auth-httplib2 0.2.0
google-auth-oauthlib 1.2.0
google-cloud-appengine-logging 1.4.0
google-cloud-audit-log 0.2.5
google-cloud-core 2.4.1
google-cloud-datastore 2.19.0
google-cloud-firestore 2.14.0
google-cloud-logging 3.9.0
google-cloud-storage 2.14.0
google-cloud-tasks 2.15.0
google-cloud-trace 1.12.0
google-crc32c 1.5.0
google-resumable-media 2.7.0
googleapis-common-protos 1.62.0
grpc-google-iam-v1 0.13.0
grpcio 1.60.0
grpcio-status 1.60.0
gunicorn 21.2.0
hiredis 2.3.2
httplib2 0.22.0
idna 3.6
itsdangerous 2.1.2
Jinja2 3.1.3
jmespath 1.0.1
jsonpickle 3.0.2
Markdown 3.5.2
MarkupSafe 2.1.3
msgpack 1.0.7
oauthlib 3.2.2
packaging 23.2
pillow 10.2.0
pip 23.3.1
proto-plus 1.23.0
protobuf 4.25.2
pyasn1 0.5.1
pyasn1-modules 0.3.0
pycparser 2.21
pycryptodomex 3.20.0
PyJWT 2.8.0
pyOpenSSL 23.3.0
pyparsing 3.1.1
python-dateutil 2.8.2
PyYAML 6.0.1
redis 5.0.1
requests 2.31.0
requests-oauthlib 1.3.1
rsa 4.9
s3transfer 0.10.0
setuptools 69.0.2
six 1.16.0
types-Markdown 3.5.0.20240106
types-protobuf 4.24.0.20240106
types-pyOpenSSL 23.3.0.20240106
types-python-dateutil 2.8.19.20240106
types-redis 4.6.0.20240106
typing_extensions 4.9.0
ua-parser 0.18.0
uritemplate 4.1.1
urllib3 2.0.7
user-agents 2.2.0
Werkzeug 3.0.1
wheel 0.42.0
Environment details
`python --version`: Python 3.8.13
`pip --version`: pip 21.3.1
`google-cloud-firestore` version (`pip show google-cloud-firestore`): 2.3.4
Steps to reproduce
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()
[print(e) for e in firestore.client().collection("foos").stream()]
Stack trace
Sometimes it happens, sometimes it does not... I know it's not the best issue description; I never had an issue with `stream()` before. In my `for ... in stream()` loop I run a bunch of set/delete operations in batch queries; could that be related (yes, I ensure a max batch size of 500)? Am I hitting the limits of Firestore? Can you delete documents while using `stream()`? Or is `stream()` somehow in conflict with batch writes?