Just as a temporary workaround I use the ceil or round function of the pandas Timestamp to fix that. For example, ceil usage:
pdf['some_datetime_column'] = pdf['some_datetime_column'].apply(lambda datetime_val: datetime_val.ceil('ms'))
Probably doesn't work as expected, see comment below
@plotneishestvo Thanks, yes, we're doing something similar. This is probably a little quicker for big tables:
df['col'] = df['col'].dt.ceil(freq='ms')
Actually, for me this ceil function doesn't work. It successfully writes data to Parquet and then to Snowflake, but as a result I get something like 53007-12-08 06:27:57.000, so my data rows were transformed in a weird way: 2021-01-14 05:20:04.077 -> 53007-12-08 06:27:57.000
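One possible explanation for the bogus year (an assumption, not confirmed here): the stored value ends up being read with units 1000x too coarse, e.g. an epoch-microsecond value interpreted as milliseconds. A quick sanity check:

```python
# Rough arithmetic only: reinterpret the epoch-microsecond value of the
# original timestamp as if it were epoch milliseconds.
micros = 1_610_601_604_077_000            # 2021-01-14 05:20:04.077 (UTC) in epoch microseconds
seconds_if_read_as_ms = micros / 1_000    # value treated as milliseconds -> seconds
print(1970 + seconds_if_read_as_ms / 31_556_952)  # ~53007.9, i.e. late in the year 53007
```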
Hm, what's the column type? I have a test that roundtrips a DataFrame with datetime64 to/from Snowflake, asserting that everything returned matches what was inserted. datetime64 columns are processed using the vectorized ceil from above on the way out and stored as DATETIME columns in Snowflake. Everything looks fine in Python and in Snowflake.
Oh, we're also using pyarrow==2.0.0, instead of the version your connector wants. Maybe a bug fix in the newer version?
I've tried to use pyarrow==2.0.0 but I have the same issue. I use the pandas integration with SQLAlchemy and the pandas to_sql function:
from snowflake.connector.pandas_tools import pd_writer
df.to_sql('oh_my_table', engine, index=False, method=pd_writer, if_exists='append')
It automatically creates a table for the dataframe, with timestamp type TIMESTAMP_NTZ(9), which sounds okay. I've tried to update everything, and instead of future dates I now get invalid date 😅
My solution at the moment is to cast the date fields to ISO strings and manually create the table before using the pandas integration, so Snowflake automatically converts the strings to dates, and that works. I just want to write this here in case somebody else is tackling it, because it took me a lot of time to discover that one small table had future dates because of this issue :D
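A minimal sketch of that string-cast workaround, assuming the target table already exists with a TIMESTAMP_NTZ column; df and engine are as in the snippets above, and the column/table names are placeholders:

```python
from snowflake.connector.pandas_tools import pd_writer

# Cast the datetime column to ISO-formatted strings; Snowflake parses the
# strings back into the existing TIMESTAMP_NTZ column on COPY.
df['some_datetime_column'] = df['some_datetime_column'].dt.strftime('%Y-%m-%d %H:%M:%S.%f')
df.to_sql('oh_my_table', engine, index=False, method=pd_writer, if_exists='append')
```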
This is my simple experiment setup (I start clean, without any table in Snowflake):
import datetime
import pandas as pd
from sqlalchemy import create_engine
from snowflake.connector.pandas_tools import pd_writer
engine = create_engine('<my awasome connection URL>')
df = pd.DataFrame([('awdawd', datetime.datetime.now()), ('awdawdawd', datetime.datetime.now() - datetime.timedelta(hours=345))], columns=['id', 'date_field'])
df['date_field'] = df['date_field'].dt.ceil(freq='ms')
df.to_sql('oh_my_df', engine, index=False, method=pd_writer, if_exists='append')
my pip list:
So, now after I updated the SQLAlchemy, pandas and Snowflake libraries, this is what I have in the Snowflake web interface:
Okay, looks like Snowflake also has issues understanding Parquet v2 timestamp values. I've tried to upload data using Parquet version '2.0' and I got the same invalid date result.
First, I passed a parameter to pandas to store the same dataframe as Parquet version 2.0:
df.to_parquet('my_awasome_parquet.parquet', compression='snappy', engine='pyarrow', version='2.0')  # version='2.0' is the key part
and just to check
pq.read_metadata('my_awasome_parquet.parquet')
#output:
<pyarrow._parquet.FileMetaData object at 0x11ee05778>
created_by: parquet-cpp version 1.5.1-SNAPSHOT
num_columns: 2
num_rows: 2
num_row_groups: 1
format_version: 2.0
serialized_size: 1940
After that I manually uploaded data to the table using this Parquet 2.0 file and got the same invalid date result.
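For reference, a rough sketch of what that manual upload can look like through the Python connector; the credentials, stage name and file path below are placeholders, not the exact commands used here:

```python
import snowflake.connector

conn = snowflake.connector.connect(account="<account>", user="<user>", password="<password>",
                                   database="TEST_DB", schema="PUBLIC", warehouse="<wh>")
cur = conn.cursor()
cur.execute("CREATE STAGE IF NOT EXISTS my_stage")                      # placeholder stage name
cur.execute("PUT file:///tmp/my_awasome_parquet.parquet @my_stage")     # upload the local Parquet file
cur.execute("""
    COPY INTO oh_my_df
    FROM @my_stage
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
```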
Then I tried to query my staged Parquet v2.0 file and it was fine:
SELECT $1:date_field FROM @"TEST_DB"."PUBLIC".oh_my_df ;
However, casting to timestamp resulted in invalid date:
SELECT $1:date_field::timestamp_ntz FROM @"TEST_DB"."PUBLIC".oh_my_df ;
Okay, starting with pandas 1.1.0 it automatically uses Parquet 2.0, and this is why I started receiving invalid date. So I will open a new issue to support pandas 1.1.0 and Parquet 2.0.
This is a little out of scope for this issue, but if at all possible it would be helpful to support newer versions of PyArrow (they just released v3 early this week), for compatibility and bugfixes, but also because the early versions of PyArrow required by the Snowflake connector are enormous payloads, over 200MB IIRC. Newer releases are in the 50MB range. This can have a significant impact on deployments.
...(they just released v3 early this week)...
👀 #565
I drafted a possible solution to the issue here: https://github.com/snowflakedb/snowflake-connector-python/issues/616#issuecomment-806386788
Hi all, we continue to work on a long-term solution internally. We're sorry about any inconvenience this issue might be causing. As a temporary workaround, we suggest passing a timezone to any timestamp values when building your pandas dataframe. The following works for millisecond and microsecond timestamps. For example, instead of using:
ts = pd.Timestamp(1621291701002, unit="ms")
pd.DataFrame([('a', ts)], columns=['ID', 'DATE_FIELD'])
you can define the dataframe as:
ts = pd.Timestamp(1621291701002, unit="ms", tz="UTC")
pd.DataFrame([('a', ts)], columns=['ID', 'DATE_FIELD'])
When the dataframe is unloaded into a parquet file and COPY'ed into Snowflake, the value will be correctly parsed:
As for ns timestamps, specifically the issue mentioned by @willsthompson, using df.to_sql will avoid the error you are receiving, as that converts nanosecond timestamps to microsecond timestamps (and then using the timezone suggestion above will ensure you see valid dates in Snowflake). Once again, we apologize for any inconvenience as a result of this issue.
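To make that concrete, a minimal sketch combining both suggestions; the connection URL and table name are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine
from snowflake.connector.pandas_tools import pd_writer

engine = create_engine('<my connection URL>')

# tz-aware timestamp, as suggested above
ts = pd.Timestamp(1621291701002, unit="ms", tz="UTC")
df = pd.DataFrame([('a', ts)], columns=['ID', 'DATE_FIELD'])

# to_sql converts nanosecond timestamps to microseconds before the Parquet write
df.to_sql('my_table', engine, index=False, method=pd_writer, if_exists='append')
```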
I came across the same problem; I've solved it by converting np.datetime64 to object: output['SUBMODEL_VALUE_DATE'] = output['SUBMODEL_VALUE_DATE'].dt.strftime("%Y-%m-%d")
It seems like this issue still persists. Moreover, for [ns] timestamps it seems like it can also affect the pandas to_sql method, even with pd_writer. Is there a combination of known versions for which this is resolved?
I solved the problem by adding allow_truncated_timestamps=True in chunk.to_parquet.
According to the pyarrow docs:
Some Parquet readers may only support timestamps stored in millisecond ('ms') or microsecond ('us') resolution. Since pandas uses nanoseconds to represent timestamps, this can occasionally be a nuisance. By default (when writing version 1.0 Parquet files), the nanoseconds will be cast to microseconds ('us'). If a cast to a lower resolution value may result in a loss of data, by default an exception will be raised. This can be suppressed by passing allow_truncated_timestamps=True.
In /snowflake/connector/pandas_tools.py, line 159:
chunk.to_parquet(
chunk_path,
compression=compression,
allow_truncated_timestamps=True,
)
or alternatively:
chunk.to_parquet(
chunk_path,
compression=compression,
use_deprecated_int96_timestamps=True,
)
Does this solve your problem too?
Hey, I am trying the proposed alternative from @sfc-gh-kzaveri and I am sending a dataframe with tz defined
df['my_datetime'][0]
Timestamp('2022-01-01 14:50:59+0000', tz='UTC')
The table is created as TIMESTAMP_NTZ and the value is uploaded without time zone info. When read back with df: pd.DataFrame = cur.fetch_pandas_all(), it arrives without a time zone associated in the DataFrame:
Timestamp('2022-01-01 14:50:59')
Any ideas what might make this behave like this? Any specific versions I should be using?
Having similar issues with Snowpark write_pandas and createDataFrame: a pandas timestamp column
- with local TimeZone --> correct TIMESTAMP_NTZ in Snowflake
- without TimeZone --> NUMBER in Snowflake
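A minimal sketch of the two cases above, assuming Snowpark's Session.write_pandas with auto_create_table; connection parameters and table names are placeholders, and the TIMESTAMP_NTZ/NUMBER outcome is the behavior reported above, not something guaranteed here:

```python
import pandas as pd
from snowflake.snowpark import Session

connection_parameters = {"account": "<account>", "user": "<user>", "password": "<password>",
                         "database": "<db>", "schema": "<schema>", "warehouse": "<wh>"}
session = Session.builder.configs(connection_parameters).create()

naive = pd.DataFrame({"TS": [pd.Timestamp("2022-01-01 14:50:59")]})                      # no time zone
aware = pd.DataFrame({"TS": [pd.Timestamp("2022-01-01 14:50:59", tz="Europe/Berlin")]})  # local time zone

# Reported behavior: the tz-aware column lands as TIMESTAMP_NTZ, the naive one as NUMBER.
session.write_pandas(aware, "TS_AWARE", auto_create_table=True)
session.write_pandas(naive, "TS_NAIVE", auto_create_table=True)
```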
Thanks for pointing this out!
The following snippet helped me:
df['Date'] = df['Date'].dt.tz_localize("UTC+01:00").dt.ceil(freq='ms')
Adding the parameter use_deprecated_int96_timestamps=True to write_pandas() helped me. The previous snippet by @fredrike with rounding the timestamps also worked. This approach preserves the original timestamp values without truncating them:
write_pandas(con, df, "TABLE_NAME", use_deprecated_int96_timestamps=True)
We are working internally on a more permanent solution and will provide an update next quarter.
Hi @sfc-gh-aalam, do you have an update on this?
Take a look at https://github.com/snowflakedb/snowflake-connector-python/issues/1687. Can you try using use_logical_type?
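For reference, a minimal write_pandas sketch with that flag, assuming a connector version that supports use_logical_type; the connection values and table name are placeholders:

```python
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.DataFrame({"TS": [pd.Timestamp("2023-01-01 12:00:00.123456789", tz="UTC")]})

conn = snowflake.connector.connect(account="<account>", user="<user>", password="<password>",
                                   database="<db>", schema="<schema>", warehouse="<wh>")

# use_logical_type writes Parquet logical timestamp types so tz-aware and
# ns-precision values round-trip into TIMESTAMP columns instead of being misread.
write_pandas(conn, df, "MY_TABLE", auto_create_table=True, use_logical_type=True)
```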
Nice! This works with sqlalchemy as well. Previously had to convert pd.Timestamp -> str with method=pd_writer and dtype={}.
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL
from snowflake.connector.pandas_tools import make_pd_writer
from snowflake.sqlalchemy import DATETIME
creds = {
'user': '',
'password': '',
'account': '',
'warehouse': '',
'role': '',
'database': ''
}
engine = create_engine(URL(**creds))
time = pd.date_range('2023-01-01', '2023-01-31', freq='1h')
df = pd.DataFrame({
'timestamp': time,
'test_col': ['XYZ123'] * len(time),
'values': np.random.random(len(time))
})
# df['timestamp'] = df.timestamp.astype(str) ### previous workaround for proper pd.Timestamp -> snowflake timestamp conversion
with engine.connect() as con:
df.to_sql(
name='test_table',
schema='schema_name',
index=False,
con=con,
if_exists='replace',
# dtype={'timestamp': DATETIME}, ### previous workaround for proper pd.Timestamp -> snowflake timestamp conversion
# method=pd_writer ### previous workaround
method=make_pd_writer(use_logical_type=True)
)
Closing this issue as use_logical_type is working as expected. Please open a new issue if you see more issues.
Please answer these questions before submitting your issue. Thanks!
What version of Python are you using (python --version)?
Python 3.8.5

What operating system and processor architecture are you using (python -c 'import platform; print(platform.platform())')?
macOS-10.16-x86_64-i386-64bit

What are the component versions in the environment (pip freeze)?
```
asn1crypto==1.4.0
attrs==20.3.0
docopt==0.6.2
ecdsa==0.16.1
Flask==1.1.2
future==0.18.2
jmespath==0.10.0
jsonformatter==0.2.3
jsonschema==3.1.1
more-itertools==8.6.0
numpy==1.19.5
oscrypto==1.2.1
packaging==20.8
paho-mqtt==1.5.1
pandas==1.0.3
pandasql==0.7.3
pbr==5.5.1
pluggy==0.13.1
py==1.10.0
pyarrow==2.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycryptodome==3.9.9
pycryptodomex==3.9.9
PyJWT==1.7.1
pykwalify==1.7.0
pyOpenSSL==20.0.1
pytest==5.4.3
python-box==5.2.0
python-dateutil==2.8.1
python-dotenv==0.12.0
python-jose==3.1.0
pytz==2020.5
PyYAML==5.3.1
requests==2.23.0
rsa==4.7
six==1.15.0
snowflake-connector-python==2.3.7
stevedore==3.3.0
urllib3==1.25.11
wcwidth==0.2.5
Werkzeug==0.16.1
```
I invoked snowflake.connector.pandas_tools.write_pandas on a DataFrame with a column of type datetime64[ns] (using PyArrow as the default backend for ParquetWriter).

I expected the datetime data to be written to the database verbatim with nanosecond precision.

Instead, write_pandas fails when Arrow tries writing to Parquet using the default arguments:
pyarrow.lib.ArrowInvalid: Casting from timestamp[ns] to timestamp[ms] would lose data

This problem, IIUC, is related to the PyArrow defaults: for compatibility reasons it defaults to Parquet version='1.0' behavior, which does not support ns timestamps, and it also defaults allow_truncated_timestamps=False, which raises an exception when any timestamp precision is lost during writing. The end result is that ns-precision timestamps always get truncated and therefore always throw an exception.

Since Snowflake supports ns-precision timestamps, I would expect defaults that allow them to be written from DataFrames without error. However, since I imagine it's messy supporting various Parquet backends, I think at a minimum write_pandas should accept kwargs to be passed to the Parquet writer, so users can tailor the behavior they want (and work around backend-specific problems like this one).
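To make the described defaults concrete, a standalone sketch with no Snowflake involved; the file name is arbitrary, and the exact error text and default Parquet format version vary across pyarrow releases:

```python
import pandas as pd

# A ns-precision timestamp with non-zero sub-microsecond digits.
df = pd.DataFrame({"ts": pd.to_datetime(["2021-01-14 05:20:04.077123456"])})

# Forcing Parquet format version 1.0 semantics reproduces the failure described above:
#   pyarrow.lib.ArrowInvalid: Casting from timestamp[ns] to timestamp[us] would lose data
# df.to_parquet("repro.parquet", engine="pyarrow", version="1.0")

# Either of these writer kwargs avoids the exception:
df.to_parquet("repro.parquet", engine="pyarrow", version="1.0",
              coerce_timestamps="us", allow_truncated_timestamps=True)   # truncates sub-us digits
df.to_parquet("repro.parquet", engine="pyarrow",
              use_deprecated_int96_timestamps=True)                      # keeps ns via INT96
```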