When using the find methods (such as find_pandas_all) with a defined schema, values in MongoDB documents that don't match the specified schema types (e.g., strings in a field declared as int64) are silently converted to NaN in the results. This can lead to unintended data loss and inconsistency, since there is no indication of the type mismatch and no warning about the conversion.
Ideally, the method should raise an error reporting that a type mismatch was detected in the affected field, rather than silently converting the value to NaN. This would make the find methods more robust and prevent data integrity issues caused by unintentional type mismatches.
Example:
from pymongoarrow.monkey import patch_all
patch_all()
from pymongo import MongoClient
from pymongoarrow.api import Schema
import pyarrow as pa
coll = MongoClient().test.test
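# Documents whose "value" field is, in order: an int, a string, a double, null, a bool, and missing.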
coll.insert_many([{"value": 1}, {"value": "1"}, {"value": 1.1}, {"value": None}, {"value": False}, {}])
schema = {"value": pa.int64()}
projection = {"_id": 0, "value": 1}
df = coll.find_pandas_all(query={}, schema=Schema(schema), projection=projection)
print(df)
Output: the mismatched values (e.g., the string "1") appear as NaN in the resulting DataFrame, with no error or warning.
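Until the library raises such an error itself, a user-side check can approximate the desired behavior. The sketch below compares the number of non-null values stored in the field against the number that survive conversion, and raises when they differ; find_pandas_all_strict is a hypothetical helper written for illustration, not part of the pymongoarrow API.

from pymongo import MongoClient
from pymongoarrow.api import Schema
from pymongoarrow.monkey import patch_all
import pyarrow as pa

patch_all()

def find_pandas_all_strict(collection, query, field, arrow_type):
    # Load the single field with the given schema, then raise if any
    # non-null stored value was silently coerced to NaN.
    df = collection.find_pandas_all(
        query=query,
        schema=Schema({field: arrow_type}),
        projection={"_id": 0, field: 1},
    )
    # Documents in which the field exists and is not null on the server.
    present = collection.count_documents({**query, field: {"$ne": None}})
    # Rows that still hold a non-null value after conversion.
    converted = int(df[field].notna().sum())
    if converted < present:
        raise ValueError(
            f"{present - converted} value(s) in field {field!r} did not match "
            f"{arrow_type} and were silently converted to NaN"
        )
    return df

coll = MongoClient().test.test
# Should raise ValueError for the sample data above, since at least the
# string "1" cannot be represented as int64.
df = find_pandas_all_strict(coll, {}, "value", pa.int64())

Comparing counts rather than individual rows avoids relying on document order, but it does assume the collection is not being written to between the two queries.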