coiled / data-science-at-scale

A Pythonic introduction to methods for scaling your data science and machine learning work to larger datasets and larger models, using the tools and APIs you know and love from the PyData stack (such as numpy, pandas, and scikit-learn).

KilledWorker: ("('read-csv-55ae4b745374b44b30abe2784b294853', 470) #30

Closed · Santhin closed this issue 3 years ago

Santhin commented 3 years ago

Hey, I ran into a weird error when trying to run this code on my local machine:

import coiled
import dask.dataframe as dd
from dask.distributed import Client

# Connect to the existing "production" cluster
cluster = coiled.Cluster(
    name="production",
    n_workers=10,
)

client = Client(cluster)
print("Dashboard:", client.dashboard_link)

# Read the 2019 NYC yellow-taxi CSVs from S3 and keep them in cluster memory
df = dd.read_csv(
    "s3://nyc-tlc/trip data/yellow_tripdata_2019-*.csv",
    dtype={
        "payment_type": "UInt8",
        "VendorID": "UInt8",
        "passenger_count": "UInt8",
        "RatecodeID": "UInt8",
    },
    storage_options={"anon": True},
    blocksize="16 MiB",
).persist()

# Mean tip amount per passenger count
df.groupby("passenger_count").tip_amount.mean().compute()

Error message:
KilledWorker: ("('read-csv-55ae4b745374b44b30abe2784b294853', 470)", <Worker 'tls://10.2.11.153:43261', name: santhin-5533-worker-5-27c4f7, memory: 0, processing: 329>)

After inspecting the worker log:

distributed.nanny - INFO -         Start Nanny at: 'tls://10.2.11.153:37979'
distributed.worker - INFO -       Start worker at:    tls://10.2.11.153:43261
distributed.worker - INFO -          Listening to:    tls://10.2.11.153:43261
distributed.worker - INFO -          dashboard at:          10.2.11.153:45757
distributed.worker - INFO - Waiting to connect to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          4
distributed.worker - INFO -                Memory:                   17.18 GB
distributed.worker - INFO -       Local Directory: /dask-worker-space/dask-worker-space/worker-9gxmodeo
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - Waiting to connect to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO -         Registered to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x05\x95\x1dG\x00\x00\x00\x00\x00\x00(\x8c\x15dask.dataframe.io.csv\x94\x8c\x10pandas_read_text\x94\x93\x94\x8c\x17cloudpickle.cloudpickle\x94...'
[payload truncated: a cloudpickled pandas read_csv closure built on the client at C:\Users\p_sierkin\Miniconda3\envs\real_estate\lib\site-packages\pandas\io\parsers.py; the b'\x80\x05' prefix marks a pickle protocol 5 stream]
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.core - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - Connection to scheduler broken.  Reconnecting...
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8c1955d750>>, <Task finished coro=<Worker.handle_scheduler() done, defined at /opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py:988> exception=ValueError('unsupported pickle protocol: 5')>)
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 741, in _run_callback
    ret = callback()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
    future.result()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - WARNING - Mismatched versions found
+---------+-------------+-----------+----------------------+
| Package | This Worker | scheduler | workers              |
+---------+-------------+-----------+----------------------+
| numpy   | 1.20.0      | 1.20.0    | {'1.20.0', '1.16.5'} |
+---------+-------------+-----------+----------------------+
distributed.worker - INFO -         Registered to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
[... the identical failed-deserialization dump, "unsupported pickle protocol: 5" tracebacks, reconnection notice, and numpy version-mismatch table then repeat verbatim as the worker loops through further reconnect attempts ...]
h\x1dh\x1eh\x1bh\x1ch,h9h\x18hCh@h?h>hAh;h<h\x12\x8c\x04kwds\x94h\x1at\x94\x8cUC:\\Users\\p_sierkin\\Miniconda3\\envs\\real_estate\\lib\\site-packages\\pandas\\io\\parsers.py\x94\x8c\x08parser_f\x94M\x13\x02C\x8a\x00C\x08\x01\x10\x01\x06\x02\x0c\x02\x06\x02\n\x01\x08\x01\x04\x0e\x08\x01\x10\x01\x0c\x02\x06\x03\x08\x01\x04\x02\x0c\x01\x08\x04\x08\x01\x06\x02\x04\x01\x04\x02\x06\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x08\x02\x94\x8c\x0bdefault_sep\x94\x8c\x04name\x94\x86\x94)t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\tpandas.io\x94\x8c\x08__name__\x94\x8c\x11pandas.io.parsers\x94\x8c\x08__file__\x94hTuNNh\x03\x8c\x10_make_empty_cell\x94\x93\x94)R\x94hc)R\x94\x86\x94t\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94hh}\x94}\x94(h_\x8c\x08read_csv\x94\x8c\x0c__qualname__\x94\x8c\'_make_parser_function.<locals>.parser_f\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94(\x8c\x01,\x94N\x8c\x05infer\x94NNN\x89N\x88NNNNN\x89NK\x00NN\x88\x88\x89\x88\x89\x89\x89N\x89\x89NhvNC\x01.\x94N\x8c\x01"\x94K\x00\x88NNNNN\x88\x88\x89\x88\x89Nt\x94\x8c\n__module__\x94h`\x8c\x07__doc__\x94X\x979\x00\x00\nRead a comma-separated values (csv) file into DataFrame.\n\nAlso supports optionally iterating or breaking of the file\ninto chunks.\n\nAdditional help can be found in the online docs for\n`IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.\n\nParameters\n----------\nfilepath_or_buffer : str, path object, or file-like object\n    Any valid string path is acceptable. The string could be a URL. Valid\n    URL schemes include http, ftp, s3, and file. For file URLs, a host is\n    expected. A local file could be: file://localhost/path/to/table.csv.\n\n    If you want to pass in a path object, pandas accepts either\n    ``pathlib.Path`` or ``py._path.local.LocalPath``.\n\n    By file-like object, we refer to objects with a ``read()`` method, such as\n    a file handler (e.g. via builtin ``open`` function) or ``StringIO``.\nsep : str, default \',\'\n    Delimiter to use. If sep is None, the C engine cannot automatically detect\n    the separator, but the Python parsing engine can, meaning the latter will\n    be used and automatically detect the separator by Python\'s builtin sniffer\n    tool, ``csv.Sniffer``. In addition, separators longer than 1 character and\n    different from ``\'\\s+\'`` will be interpreted as regular expressions and\n    will also force the use of the Python parsing engine. Note that regex\n    delimiters are prone to ignoring quoted data. Regex example: ``\'\\r\\t\'``.\ndelimiter : str, default ``None``\n    Alias for sep.\nheader : int, list of int, default \'infer\'\n    Row number(s) to use as the column names, and the start of the\n    data.  Default behavior is to infer the column names: if no names\n    are passed the behavior is identical to ``header=0`` and column\n    names are inferred from the first line of the file, if column\n    names are passed explicitly then the behavior is identical to\n    ``header=None``. Explicitly pass ``header=0`` to be able to\n    replace existing names. 
The header can be a list of integers that\n    specify row locations for a multi-index on the columns\n    e.g. [0,1,3]. Intervening rows that are not specified will be\n    skipped (e.g. 2 in this example is skipped). Note that this\n    parameter ignores commented lines and empty lines if\n    ``skip_blank_lines=True``, so ``header=0`` denotes the first line of\n    data rather than the first line of the file.\nnames : array-like, optional\n    List of column names to use. If file contains no header row, then you\n    should explicitly pass ``header=None``. Duplicates in this list will cause\n    a ``UserWarning`` to be issued.\nindex_col : int, sequence or bool, optional\n    Column to use as the row labels of the DataFrame. If a sequence is given, a\n    MultiIndex is used. If you have a malformed file with delimiters at the end\n    of each line, you might consider ``index_col=False`` to force pandas to\n    not use the first column as the index (row names).\nusecols : list-like or callable, optional\n    Return a subset of the columns. If list-like, all elements must either\n    be positional (i.e. integer indices into the document columns) or strings\n    that correspond to column names provided either by the user in `names` or\n    inferred from the document header row(s). For example, a valid list-like\n    `usecols` parameter would be ``[0, 1, 2]`` or ``[\'foo\', \'bar\', \'baz\']``.\n    Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.\n    To instantiate a DataFrame from ``data`` with element order preserved use\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'foo\', \'bar\']]`` for columns\n    in ``[\'foo\', \'bar\']`` order or\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'bar\', \'foo\']]``\n    for ``[\'bar\', \'foo\']`` order.\n\n    If callable, the callable function will be evaluated against the column\n    names, returning names where the callable function evaluates to True. An\n    example of a valid callable argument would be ``lambda x: x.upper() in\n    [\'AAA\', \'BBB\', \'DDD\']``. Using this parameter results in much faster\n    parsing time and lower memory usage.\nsqueeze : bool, default False\n    If the parsed data only contains one column then return a Series.\nprefix : str, optional\n    Prefix to add to column numbers when no header, e.g. \'X\' for X0, X1, ...\nmangle_dupe_cols : bool, default True\n    Duplicate columns will be specified as \'X\', \'X.1\', ...\'X.N\', rather than\n    \'X\'...\'X\'. Passing in False will cause data to be overwritten if there\n    are duplicate names in the columns.\ndtype : Type name or dict of column -> type, optional\n    Data type for data or columns. E.g. {\'a\': np.float64, \'b\': np.int32,\n    \'c\': \'Int64\'}\n    Use `str` or `object` together with suitable `na_values` settings\n    to preserve and not interpret dtype.\n    If converters are specified, they will be applied INSTEAD\n    of dtype conversion.\nengine : {\'c\', \'python\'}, optional\n    Parser engine to use. The C engine is faster while the python engine is\n    currently more feature-complete.\nconverters : dict, optional\n    Dict of functions for converting values in certain columns. 
Keys can either\n    be integers or column labels.\ntrue_values : list, optional\n    Values to consider as True.\nfalse_values : list, optional\n    Values to consider as False.\nskipinitialspace : bool, default False\n    Skip spaces after delimiter.\nskiprows : list-like, int or callable, optional\n    Line numbers to skip (0-indexed) or number of lines to skip (int)\n    at the start of the file.\n\n    If callable, the callable function will be evaluated against the row\n    indices, returning True if the row should be skipped and False otherwise.\n    An example of a valid callable argument would be ``lambda x: x in [0, 2]``.\nskipfooter : int, default 0\n    Number of lines at bottom of file to skip (Unsupported with engine=\'c\').\nnrows : int, optional\n    Number of rows of file to read. Useful for reading pieces of large files.\nna_values : scalar, str, list-like, or dict, optional\n    Additional strings to recognize as NA/NaN. If dict passed, specific\n    per-column NA values.  By default the following values are interpreted as\n    NaN: \'\', \'#N/A\', \'#N/A N/A\', \'#NA\', \'-1.#IND\', \'-1.#QNAN\', \'-NaN\', \'-nan\',\n    \'1.#IND\', \'1.#QNAN\', \'N/A\', \'NA\', \'NULL\', \'NaN\', \'n/a\', \'nan\',\n    \'null\'.\nkeep_default_na : bool, default True\n    Whether or not to include the default NaN values when parsing the data.\n    Depending on whether `na_values` is passed in, the behavior is as follows:\n\n    * If `keep_default_na` is True, and `na_values` are specified, `na_values`\n      is appended to the default NaN values used for parsing.\n    * If `keep_default_na` is True, and `na_values` are not specified, only\n      the default NaN values are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are specified, only\n      the NaN values specified `na_values` are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are not specified, no\n      strings will be parsed as NaN.\n\n    Note that if `na_filter` is passed in as False, the `keep_default_na` and\n    `na_values` parameters will be ignored.\nna_filter : bool, default True\n    Detect missing value markers (empty strings and the value of na_values). In\n    data without any NAs, passing na_filter=False can improve the performance\n    of reading a large file.\nverbose : bool, default False\n    Indicate number of NA values placed in non-numeric columns.\nskip_blank_lines : bool, default True\n    If True, skip over blank lines rather than interpreting as NaN values.\nparse_dates : bool or list of int or names or list of lists or dict, default False\n    The behavior is as follows:\n\n    * boolean. If True -> try parsing the index.\n    * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3\n      each as a separate date column.\n    * list of lis'
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.core - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - Connection to scheduler broken.  Reconnecting...
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8c1955d750>>, <Task finished coro=<Worker.handle_scheduler() done, defined at /opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py:988> exception=ValueError('unsupported pickle protocol: 5')>)
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 741, in _run_callback
    ret = callback()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
    future.result()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - WARNING - Mismatched versions found
+---------+-------------+-----------+----------------------+
| Package | This Worker | scheduler | workers              |
+---------+-------------+-----------+----------------------+
| numpy   | 1.20.0      | 1.20.0    | {'1.20.0', '1.16.5'} |
+---------+-------------+-----------+----------------------+
distributed.worker - INFO -         Registered to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x05\x95\x1dG\x00\x00\x00\x00\x00\x00(\x8c\x15dask.dataframe.io.csv\x94\x8c\x10pandas_read_text\x94\x93\x94\x8c\x17cloudpickle.cloudpickle\x94\x8c\r_builtin_type\x94\x93\x94\x8c\nLambdaType\x94\x85\x94R\x94(h\x05\x8c\x08CodeType\x94\x85\x94R\x94(K1K\x00K4K2K\x13B\x1a\x01\x00\x00\x88\x01d\x01k\x02rF|\x01d\x02k\x08r*|\x02d\x00k\x08r*t\x00j\x01d\x03t\x02d\x04d\x05\x8d\x03\x01\x00n\x10t\x00j\x01d\x06t\x02d\x04d\x05\x8d\x03\x01\x00|\x01d\x02k\x08rF\x88\x00}\x01|)d\x00k\trj|\x02d\x00k\x08o\\|\x01\x88\x00k\x02}1t\x03|1d\x07\x8d\x01}2n\x06t\x03\x83\x00}2|\x02d\x00k\x08r||\x01}\x02|-r\x90|\x02\x88\x00k\x03r\x90t\x04d\x08\x83\x01\x82\x01|\x0bd\x00k\tr\x9ed\t}3n\x08d\n}\x0bd\x02}3|2j\x05|\x02|\x0b|)|\x1f|3|%|&|#|$|\x0f|"|\x03|\x05|\x04|\x08|\x10|\x11|\x13|\r|\x0e|\x14| |\'|!|\x18|\x1a|\x1c|\x1b|\x12|\x1d|\x1e|\x0c|\n|\x06|\x16|(|\x07|/|0|\x15|-|,|+|.|\t|*|\x19|\x17d\x0b\x8d0\x01\x00t\x06|\x00|2\x83\x02S\x00\x94(N\x8c\nread_table\x94\x89\x8cAread_table is deprecated, use read_csv instead, passing sep=\'\\t\'.\x94K\x02\x8c\nstacklevel\x94\x85\x94\x8c/read_table is deprecated, use read_csv instead.\x94\x8c\x0csep_override\x94\x85\x94\x8cXSpecified a delimiter with both sep and delim_whitespace=True; you can only specify one.\x94\x88\x8c\x01c\x94(\x8c\tdelimiter\x94\x8c\x06engine\x94\x8c\x07dialect\x94\x8c\x0bcompression\x94\x8c\x10engine_specified\x94\x8c\x0bdoublequote\x94\x8c\nescapechar\x94\x8c\tquotechar\x94\x8c\x07quoting\x94\x8c\x10skipinitialspace\x94\x8c\x0elineterminator\x94\x8c\x06header\x94\x8c\tindex_col\x94\x8c\x05names\x94\x8c\x06prefix\x94\x8c\x08skiprows\x94\x8c\nskipfooter\x94\x8c\tna_values\x94\x8c\x0btrue_values\x94\x8c\x0cfalse_values\x94\x8c\x0fkeep_default_na\x94\x8c\tthousands\x94\x8c\x07comment\x94\x8c\x07decimal\x94\x8c\x0bparse_dates\x94\x8c\rkeep_date_col\x94\x8c\x08dayfirst\x94\x8c\x0bdate_parser\x94\x8c\x05nrows\x94\x8c\x08iterator\x94\x8c\tchunksize\x94\x8c\nconverters\x94\x8c\x05dtype\x94\x8c\x07usecols\x94\x8c\x07verbose\x94\x8c\x08encoding\x94\x8c\x07squeeze\x94\x8c\nmemory_map\x94\x8c\x0ffloat_precision\x94\x8c\tna_filter\x94\x8c\x10delim_whitespace\x94\x8c\x0ewarn_bad_lines\x94\x8c\x0ferror_bad_lines\x94\x8c\nlow_memory\x94\x8c\x10mangle_dupe_cols\x94\x8c\rtupleize_cols\x94\x8c\x15infer_datetime_format\x94\x8c\x10skip_blank_lines\x94t\x94t\x94(\x8c\x08warnings\x94\x8c\x04warn\x94\x8c\rFutureWarning\x94\x8c\x04dict\x94\x8c\nValueError\x94\x8c\x06update\x94\x8c\x05_read\x94t\x94(\x8c\x12filepath_or_buffer\x94\x8c\x03sep\x94h\x16h!h#h"h7h:h$hBh6h\x17h5h(h)h\x1fh%h&h2h\'h*h=h8hEh.hDh/h1h0h3h4h\x19h+h-h 
h\x1dh\x1eh\x1bh\x1ch,h9h\x18hCh@h?h>hAh;h<h\x12\x8c\x04kwds\x94h\x1at\x94\x8cUC:\\Users\\p_sierkin\\Miniconda3\\envs\\real_estate\\lib\\site-packages\\pandas\\io\\parsers.py\x94\x8c\x08parser_f\x94M\x13\x02C\x8a\x00C\x08\x01\x10\x01\x06\x02\x0c\x02\x06\x02\n\x01\x08\x01\x04\x0e\x08\x01\x10\x01\x0c\x02\x06\x03\x08\x01\x04\x02\x0c\x01\x08\x04\x08\x01\x06\x02\x04\x01\x04\x02\x06\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x08\x02\x94\x8c\x0bdefault_sep\x94\x8c\x04name\x94\x86\x94)t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\tpandas.io\x94\x8c\x08__name__\x94\x8c\x11pandas.io.parsers\x94\x8c\x08__file__\x94hTuNNh\x03\x8c\x10_make_empty_cell\x94\x93\x94)R\x94hc)R\x94\x86\x94t\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94hh}\x94}\x94(h_\x8c\x08read_csv\x94\x8c\x0c__qualname__\x94\x8c\'_make_parser_function.<locals>.parser_f\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94(\x8c\x01,\x94N\x8c\x05infer\x94NNN\x89N\x88NNNNN\x89NK\x00NN\x88\x88\x89\x88\x89\x89\x89N\x89\x89NhvNC\x01.\x94N\x8c\x01"\x94K\x00\x88NNNNN\x88\x88\x89\x88\x89Nt\x94\x8c\n__module__\x94h`\x8c\x07__doc__\x94X\x979\x00\x00\nRead a comma-separated values (csv) file into DataFrame.\n\nAlso supports optionally iterating or breaking of the file\ninto chunks.\n\nAdditional help can be found in the online docs for\n`IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.\n\nParameters\n----------\nfilepath_or_buffer : str, path object, or file-like object\n    Any valid string path is acceptable. The string could be a URL. Valid\n    URL schemes include http, ftp, s3, and file. For file URLs, a host is\n    expected. A local file could be: file://localhost/path/to/table.csv.\n\n    If you want to pass in a path object, pandas accepts either\n    ``pathlib.Path`` or ``py._path.local.LocalPath``.\n\n    By file-like object, we refer to objects with a ``read()`` method, such as\n    a file handler (e.g. via builtin ``open`` function) or ``StringIO``.\nsep : str, default \',\'\n    Delimiter to use. If sep is None, the C engine cannot automatically detect\n    the separator, but the Python parsing engine can, meaning the latter will\n    be used and automatically detect the separator by Python\'s builtin sniffer\n    tool, ``csv.Sniffer``. In addition, separators longer than 1 character and\n    different from ``\'\\s+\'`` will be interpreted as regular expressions and\n    will also force the use of the Python parsing engine. Note that regex\n    delimiters are prone to ignoring quoted data. Regex example: ``\'\\r\\t\'``.\ndelimiter : str, default ``None``\n    Alias for sep.\nheader : int, list of int, default \'infer\'\n    Row number(s) to use as the column names, and the start of the\n    data.  Default behavior is to infer the column names: if no names\n    are passed the behavior is identical to ``header=0`` and column\n    names are inferred from the first line of the file, if column\n    names are passed explicitly then the behavior is identical to\n    ``header=None``. Explicitly pass ``header=0`` to be able to\n    replace existing names. 
The header can be a list of integers that\n    specify row locations for a multi-index on the columns\n    e.g. [0,1,3]. Intervening rows that are not specified will be\n    skipped (e.g. 2 in this example is skipped). Note that this\n    parameter ignores commented lines and empty lines if\n    ``skip_blank_lines=True``, so ``header=0`` denotes the first line of\n    data rather than the first line of the file.\nnames : array-like, optional\n    List of column names to use. If file contains no header row, then you\n    should explicitly pass ``header=None``. Duplicates in this list will cause\n    a ``UserWarning`` to be issued.\nindex_col : int, sequence or bool, optional\n    Column to use as the row labels of the DataFrame. If a sequence is given, a\n    MultiIndex is used. If you have a malformed file with delimiters at the end\n    of each line, you might consider ``index_col=False`` to force pandas to\n    not use the first column as the index (row names).\nusecols : list-like or callable, optional\n    Return a subset of the columns. If list-like, all elements must either\n    be positional (i.e. integer indices into the document columns) or strings\n    that correspond to column names provided either by the user in `names` or\n    inferred from the document header row(s). For example, a valid list-like\n    `usecols` parameter would be ``[0, 1, 2]`` or ``[\'foo\', \'bar\', \'baz\']``.\n    Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.\n    To instantiate a DataFrame from ``data`` with element order preserved use\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'foo\', \'bar\']]`` for columns\n    in ``[\'foo\', \'bar\']`` order or\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'bar\', \'foo\']]``\n    for ``[\'bar\', \'foo\']`` order.\n\n    If callable, the callable function will be evaluated against the column\n    names, returning names where the callable function evaluates to True. An\n    example of a valid callable argument would be ``lambda x: x.upper() in\n    [\'AAA\', \'BBB\', \'DDD\']``. Using this parameter results in much faster\n    parsing time and lower memory usage.\nsqueeze : bool, default False\n    If the parsed data only contains one column then return a Series.\nprefix : str, optional\n    Prefix to add to column numbers when no header, e.g. \'X\' for X0, X1, ...\nmangle_dupe_cols : bool, default True\n    Duplicate columns will be specified as \'X\', \'X.1\', ...\'X.N\', rather than\n    \'X\'...\'X\'. Passing in False will cause data to be overwritten if there\n    are duplicate names in the columns.\ndtype : Type name or dict of column -> type, optional\n    Data type for data or columns. E.g. {\'a\': np.float64, \'b\': np.int32,\n    \'c\': \'Int64\'}\n    Use `str` or `object` together with suitable `na_values` settings\n    to preserve and not interpret dtype.\n    If converters are specified, they will be applied INSTEAD\n    of dtype conversion.\nengine : {\'c\', \'python\'}, optional\n    Parser engine to use. The C engine is faster while the python engine is\n    currently more feature-complete.\nconverters : dict, optional\n    Dict of functions for converting values in certain columns. 
Keys can either\n    be integers or column labels.\ntrue_values : list, optional\n    Values to consider as True.\nfalse_values : list, optional\n    Values to consider as False.\nskipinitialspace : bool, default False\n    Skip spaces after delimiter.\nskiprows : list-like, int or callable, optional\n    Line numbers to skip (0-indexed) or number of lines to skip (int)\n    at the start of the file.\n\n    If callable, the callable function will be evaluated against the row\n    indices, returning True if the row should be skipped and False otherwise.\n    An example of a valid callable argument would be ``lambda x: x in [0, 2]``.\nskipfooter : int, default 0\n    Number of lines at bottom of file to skip (Unsupported with engine=\'c\').\nnrows : int, optional\n    Number of rows of file to read. Useful for reading pieces of large files.\nna_values : scalar, str, list-like, or dict, optional\n    Additional strings to recognize as NA/NaN. If dict passed, specific\n    per-column NA values.  By default the following values are interpreted as\n    NaN: \'\', \'#N/A\', \'#N/A N/A\', \'#NA\', \'-1.#IND\', \'-1.#QNAN\', \'-NaN\', \'-nan\',\n    \'1.#IND\', \'1.#QNAN\', \'N/A\', \'NA\', \'NULL\', \'NaN\', \'n/a\', \'nan\',\n    \'null\'.\nkeep_default_na : bool, default True\n    Whether or not to include the default NaN values when parsing the data.\n    Depending on whether `na_values` is passed in, the behavior is as follows:\n\n    * If `keep_default_na` is True, and `na_values` are specified, `na_values`\n      is appended to the default NaN values used for parsing.\n    * If `keep_default_na` is True, and `na_values` are not specified, only\n      the default NaN values are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are specified, only\n      the NaN values specified `na_values` are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are not specified, no\n      strings will be parsed as NaN.\n\n    Note that if `na_filter` is passed in as False, the `keep_default_na` and\n    `na_values` parameters will be ignored.\nna_filter : bool, default True\n    Detect missing value markers (empty strings and the value of na_values). In\n    data without any NAs, passing na_filter=False can improve the performance\n    of reading a large file.\nverbose : bool, default False\n    Indicate number of NA values placed in non-numeric columns.\nskip_blank_lines : bool, default True\n    If True, skip over blank lines rather than interpreting as NaN values.\nparse_dates : bool or list of int or names or list of lists or dict, default False\n    The behavior is as follows:\n\n    * boolean. If True -> try parsing the index.\n    * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3\n      each as a separate date column.\n    * list of lis'
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.core - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - Connection to scheduler broken.  Reconnecting...
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8c1955d750>>, <Task finished coro=<Worker.handle_scheduler() done, defined at /opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py:988> exception=ValueError('unsupported pickle protocol: 5')>)
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 741, in _run_callback
    ret = callback()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
    future.result()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - WARNING - Mismatched versions found
+---------+-------------+-----------+----------------------+
| Package | This Worker | scheduler | workers              |
+---------+-------------+-----------+----------------------+
| numpy   | 1.20.0      | 1.20.0    | {'1.20.0', '1.16.5'} |
+---------+-------------+-----------+----------------------+
distributed.worker - INFO -         Registered to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x05\x95\x1dG\x00\x00\x00\x00\x00\x00(\x8c\x15dask.dataframe.io.csv\x94\x8c\x10pandas_read_text\x94\x93\x94\x8c\x17cloudpickle.cloudpickle\x94\x8c\r_builtin_type\x94\x93\x94\x8c\nLambdaType\x94\x85\x94R\x94(h\x05\x8c\x08CodeType\x94\x85\x94R\x94(K1K\x00K4K2K\x13B\x1a\x01\x00\x00\x88\x01d\x01k\x02rF|\x01d\x02k\x08r*|\x02d\x00k\x08r*t\x00j\x01d\x03t\x02d\x04d\x05\x8d\x03\x01\x00n\x10t\x00j\x01d\x06t\x02d\x04d\x05\x8d\x03\x01\x00|\x01d\x02k\x08rF\x88\x00}\x01|)d\x00k\trj|\x02d\x00k\x08o\\|\x01\x88\x00k\x02}1t\x03|1d\x07\x8d\x01}2n\x06t\x03\x83\x00}2|\x02d\x00k\x08r||\x01}\x02|-r\x90|\x02\x88\x00k\x03r\x90t\x04d\x08\x83\x01\x82\x01|\x0bd\x00k\tr\x9ed\t}3n\x08d\n}\x0bd\x02}3|2j\x05|\x02|\x0b|)|\x1f|3|%|&|#|$|\x0f|"|\x03|\x05|\x04|\x08|\x10|\x11|\x13|\r|\x0e|\x14| |\'|!|\x18|\x1a|\x1c|\x1b|\x12|\x1d|\x1e|\x0c|\n|\x06|\x16|(|\x07|/|0|\x15|-|,|+|.|\t|*|\x19|\x17d\x0b\x8d0\x01\x00t\x06|\x00|2\x83\x02S\x00\x94(N\x8c\nread_table\x94\x89\x8cAread_table is deprecated, use read_csv instead, passing sep=\'\\t\'.\x94K\x02\x8c\nstacklevel\x94\x85\x94\x8c/read_table is deprecated, use read_csv instead.\x94\x8c\x0csep_override\x94\x85\x94\x8cXSpecified a delimiter with both sep and delim_whitespace=True; you can only specify one.\x94\x88\x8c\x01c\x94(\x8c\tdelimiter\x94\x8c\x06engine\x94\x8c\x07dialect\x94\x8c\x0bcompression\x94\x8c\x10engine_specified\x94\x8c\x0bdoublequote\x94\x8c\nescapechar\x94\x8c\tquotechar\x94\x8c\x07quoting\x94\x8c\x10skipinitialspace\x94\x8c\x0elineterminator\x94\x8c\x06header\x94\x8c\tindex_col\x94\x8c\x05names\x94\x8c\x06prefix\x94\x8c\x08skiprows\x94\x8c\nskipfooter\x94\x8c\tna_values\x94\x8c\x0btrue_values\x94\x8c\x0cfalse_values\x94\x8c\x0fkeep_default_na\x94\x8c\tthousands\x94\x8c\x07comment\x94\x8c\x07decimal\x94\x8c\x0bparse_dates\x94\x8c\rkeep_date_col\x94\x8c\x08dayfirst\x94\x8c\x0bdate_parser\x94\x8c\x05nrows\x94\x8c\x08iterator\x94\x8c\tchunksize\x94\x8c\nconverters\x94\x8c\x05dtype\x94\x8c\x07usecols\x94\x8c\x07verbose\x94\x8c\x08encoding\x94\x8c\x07squeeze\x94\x8c\nmemory_map\x94\x8c\x0ffloat_precision\x94\x8c\tna_filter\x94\x8c\x10delim_whitespace\x94\x8c\x0ewarn_bad_lines\x94\x8c\x0ferror_bad_lines\x94\x8c\nlow_memory\x94\x8c\x10mangle_dupe_cols\x94\x8c\rtupleize_cols\x94\x8c\x15infer_datetime_format\x94\x8c\x10skip_blank_lines\x94t\x94t\x94(\x8c\x08warnings\x94\x8c\x04warn\x94\x8c\rFutureWarning\x94\x8c\x04dict\x94\x8c\nValueError\x94\x8c\x06update\x94\x8c\x05_read\x94t\x94(\x8c\x12filepath_or_buffer\x94\x8c\x03sep\x94h\x16h!h#h"h7h:h$hBh6h\x17h5h(h)h\x1fh%h&h2h\'h*h=h8hEh.hDh/h1h0h3h4h\x19h+h-h 
h\x1dh\x1eh\x1bh\x1ch,h9h\x18hCh@h?h>hAh;h<h\x12\x8c\x04kwds\x94h\x1at\x94\x8cUC:\\Users\\p_sierkin\\Miniconda3\\envs\\real_estate\\lib\\site-packages\\pandas\\io\\parsers.py\x94\x8c\x08parser_f\x94M\x13\x02C\x8a\x00C\x08\x01\x10\x01\x06\x02\x0c\x02\x06\x02\n\x01\x08\x01\x04\x0e\x08\x01\x10\x01\x0c\x02\x06\x03\x08\x01\x04\x02\x0c\x01\x08\x04\x08\x01\x06\x02\x04\x01\x04\x02\x06\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x08\x02\x94\x8c\x0bdefault_sep\x94\x8c\x04name\x94\x86\x94)t\x94R\x94}\x94(\x8c\x0b__package__\x94\x8c\tpandas.io\x94\x8c\x08__name__\x94\x8c\x11pandas.io.parsers\x94\x8c\x08__file__\x94hTuNNh\x03\x8c\x10_make_empty_cell\x94\x93\x94)R\x94hc)R\x94\x86\x94t\x94R\x94\x8c\x1ccloudpickle.cloudpickle_fast\x94\x8c\x12_function_setstate\x94\x93\x94hh}\x94}\x94(h_\x8c\x08read_csv\x94\x8c\x0c__qualname__\x94\x8c\'_make_parser_function.<locals>.parser_f\x94\x8c\x0f__annotations__\x94}\x94\x8c\x0e__kwdefaults__\x94N\x8c\x0c__defaults__\x94(\x8c\x01,\x94N\x8c\x05infer\x94NNN\x89N\x88NNNNN\x89NK\x00NN\x88\x88\x89\x88\x89\x89\x89N\x89\x89NhvNC\x01.\x94N\x8c\x01"\x94K\x00\x88NNNNN\x88\x88\x89\x88\x89Nt\x94\x8c\n__module__\x94h`\x8c\x07__doc__\x94X\x979\x00\x00\nRead a comma-separated values (csv) file into DataFrame.\n\nAlso supports optionally iterating or breaking of the file\ninto chunks.\n\nAdditional help can be found in the online docs for\n`IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.\n\nParameters\n----------\nfilepath_or_buffer : str, path object, or file-like object\n    Any valid string path is acceptable. The string could be a URL. Valid\n    URL schemes include http, ftp, s3, and file. For file URLs, a host is\n    expected. A local file could be: file://localhost/path/to/table.csv.\n\n    If you want to pass in a path object, pandas accepts either\n    ``pathlib.Path`` or ``py._path.local.LocalPath``.\n\n    By file-like object, we refer to objects with a ``read()`` method, such as\n    a file handler (e.g. via builtin ``open`` function) or ``StringIO``.\nsep : str, default \',\'\n    Delimiter to use. If sep is None, the C engine cannot automatically detect\n    the separator, but the Python parsing engine can, meaning the latter will\n    be used and automatically detect the separator by Python\'s builtin sniffer\n    tool, ``csv.Sniffer``. In addition, separators longer than 1 character and\n    different from ``\'\\s+\'`` will be interpreted as regular expressions and\n    will also force the use of the Python parsing engine. Note that regex\n    delimiters are prone to ignoring quoted data. Regex example: ``\'\\r\\t\'``.\ndelimiter : str, default ``None``\n    Alias for sep.\nheader : int, list of int, default \'infer\'\n    Row number(s) to use as the column names, and the start of the\n    data.  Default behavior is to infer the column names: if no names\n    are passed the behavior is identical to ``header=0`` and column\n    names are inferred from the first line of the file, if column\n    names are passed explicitly then the behavior is identical to\n    ``header=None``. Explicitly pass ``header=0`` to be able to\n    replace existing names. 
The header can be a list of integers that\n    specify row locations for a multi-index on the columns\n    e.g. [0,1,3]. Intervening rows that are not specified will be\n    skipped (e.g. 2 in this example is skipped). Note that this\n    parameter ignores commented lines and empty lines if\n    ``skip_blank_lines=True``, so ``header=0`` denotes the first line of\n    data rather than the first line of the file.\nnames : array-like, optional\n    List of column names to use. If file contains no header row, then you\n    should explicitly pass ``header=None``. Duplicates in this list will cause\n    a ``UserWarning`` to be issued.\nindex_col : int, sequence or bool, optional\n    Column to use as the row labels of the DataFrame. If a sequence is given, a\n    MultiIndex is used. If you have a malformed file with delimiters at the end\n    of each line, you might consider ``index_col=False`` to force pandas to\n    not use the first column as the index (row names).\nusecols : list-like or callable, optional\n    Return a subset of the columns. If list-like, all elements must either\n    be positional (i.e. integer indices into the document columns) or strings\n    that correspond to column names provided either by the user in `names` or\n    inferred from the document header row(s). For example, a valid list-like\n    `usecols` parameter would be ``[0, 1, 2]`` or ``[\'foo\', \'bar\', \'baz\']``.\n    Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.\n    To instantiate a DataFrame from ``data`` with element order preserved use\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'foo\', \'bar\']]`` for columns\n    in ``[\'foo\', \'bar\']`` order or\n    ``pd.read_csv(data, usecols=[\'foo\', \'bar\'])[[\'bar\', \'foo\']]``\n    for ``[\'bar\', \'foo\']`` order.\n\n    If callable, the callable function will be evaluated against the column\n    names, returning names where the callable function evaluates to True. An\n    example of a valid callable argument would be ``lambda x: x.upper() in\n    [\'AAA\', \'BBB\', \'DDD\']``. Using this parameter results in much faster\n    parsing time and lower memory usage.\nsqueeze : bool, default False\n    If the parsed data only contains one column then return a Series.\nprefix : str, optional\n    Prefix to add to column numbers when no header, e.g. \'X\' for X0, X1, ...\nmangle_dupe_cols : bool, default True\n    Duplicate columns will be specified as \'X\', \'X.1\', ...\'X.N\', rather than\n    \'X\'...\'X\'. Passing in False will cause data to be overwritten if there\n    are duplicate names in the columns.\ndtype : Type name or dict of column -> type, optional\n    Data type for data or columns. E.g. {\'a\': np.float64, \'b\': np.int32,\n    \'c\': \'Int64\'}\n    Use `str` or `object` together with suitable `na_values` settings\n    to preserve and not interpret dtype.\n    If converters are specified, they will be applied INSTEAD\n    of dtype conversion.\nengine : {\'c\', \'python\'}, optional\n    Parser engine to use. The C engine is faster while the python engine is\n    currently more feature-complete.\nconverters : dict, optional\n    Dict of functions for converting values in certain columns. 
Keys can either\n    be integers or column labels.\ntrue_values : list, optional\n    Values to consider as True.\nfalse_values : list, optional\n    Values to consider as False.\nskipinitialspace : bool, default False\n    Skip spaces after delimiter.\nskiprows : list-like, int or callable, optional\n    Line numbers to skip (0-indexed) or number of lines to skip (int)\n    at the start of the file.\n\n    If callable, the callable function will be evaluated against the row\n    indices, returning True if the row should be skipped and False otherwise.\n    An example of a valid callable argument would be ``lambda x: x in [0, 2]``.\nskipfooter : int, default 0\n    Number of lines at bottom of file to skip (Unsupported with engine=\'c\').\nnrows : int, optional\n    Number of rows of file to read. Useful for reading pieces of large files.\nna_values : scalar, str, list-like, or dict, optional\n    Additional strings to recognize as NA/NaN. If dict passed, specific\n    per-column NA values.  By default the following values are interpreted as\n    NaN: \'\', \'#N/A\', \'#N/A N/A\', \'#NA\', \'-1.#IND\', \'-1.#QNAN\', \'-NaN\', \'-nan\',\n    \'1.#IND\', \'1.#QNAN\', \'N/A\', \'NA\', \'NULL\', \'NaN\', \'n/a\', \'nan\',\n    \'null\'.\nkeep_default_na : bool, default True\n    Whether or not to include the default NaN values when parsing the data.\n    Depending on whether `na_values` is passed in, the behavior is as follows:\n\n    * If `keep_default_na` is True, and `na_values` are specified, `na_values`\n      is appended to the default NaN values used for parsing.\n    * If `keep_default_na` is True, and `na_values` are not specified, only\n      the default NaN values are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are specified, only\n      the NaN values specified `na_values` are used for parsing.\n    * If `keep_default_na` is False, and `na_values` are not specified, no\n      strings will be parsed as NaN.\n\n    Note that if `na_filter` is passed in as False, the `keep_default_na` and\n    `na_values` parameters will be ignored.\nna_filter : bool, default True\n    Detect missing value markers (empty strings and the value of na_values). In\n    data without any NAs, passing na_filter=False can improve the performance\n    of reading a large file.\nverbose : bool, default False\n    Indicate number of NA values placed in non-numeric columns.\nskip_blank_lines : bool, default True\n    If True, skip over blank lines rather than interpreting as NaN values.\nparse_dates : bool or list of int or names or list of lists or dict, default False\n    The behavior is as follows:\n\n    * boolean. If True -> try parsing the index.\n    * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3\n      each as a separate date column.\n    * list of lis'
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.core - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - ERROR - unsupported pickle protocol: 5
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - Connection to scheduler broken.  Reconnecting...
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8c1955d750>>, <Task finished coro=<Worker.handle_scheduler() done, defined at /opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py:988> exception=ValueError('unsupported pickle protocol: 5')>)
Traceback (most recent call last):
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 741, in _run_callback
    ret = callback()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
    future.result()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/worker.py", line 991, in handle_scheduler
    comm, every_cycle=[self.ensure_communicating, self.ensure_computing]
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/core.py", line 554, in handle_stream
    msgs = await comm.read()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/tcp.py", line 216, in read
    allow_offload=self.allow_offload,
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 80, in from_frames
    res = _from_frames()
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/comm/utils.py", line 64, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/core.py", line 151, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 374, in deserialize
    return loads(header, frames)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 70, in pickle_loads
    return pickle.loads(x, buffers=buffers)
  File "/opt/conda/envs/coiled/lib/python3.7/site-packages/distributed/protocol/pickle.py", line 75, in loads
    return pickle.loads(x)
ValueError: unsupported pickle protocol: 5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - WARNING - Mismatched versions found
+---------+-------------+-----------+----------------------+
| Package | This Worker | scheduler | workers              |
+---------+-------------+-----------+----------------------+
| numpy   | 1.20.0      | 1.20.0    | {'1.20.0', '1.16.5'} |
+---------+-------------+-----------+----------------------+
distributed.worker - INFO -         Registered to: tls://ip-10-2-11-126.us-east-2.compute.internal:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
[... the same "Failed to deserialize" pickled payload, the "unsupported pickle protocol: 5" tracebacks, and the numpy version-mismatch warning then repeat verbatim two more times ...]
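
From what I can tell, `unsupported pickle protocol: 5` means the worker received frames pickled with protocol 5, which was introduced in Python 3.8 (Python 3.7 can only read it via the `pickle5` backport), so the client and the cluster are probably running different interpreters or package versions. A minimal diagnostic sketch I would run from the client — it assumes the `client` object from the code above is still connected, and if even this fails with the same error, that by itself confirms the mismatch:

import pickle

# Highest pickle protocol the local client interpreter can produce
print("client:", pickle.HIGHEST_PROTOCOL)

# Highest pickle protocol each worker interpreter can read
print("workers:", client.run(lambda: __import__("pickle").HIGHEST_PROTOCOL))

# Compare Python and package versions across client, scheduler, and workers;
# with check=True this raises a descriptive error on any required-package mismatch
client.get_versions(check=True)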

After two days of researching this problem, I was surprised to find that the exact same code runs perfectly on Google Colab, so it seems the issue is a package version mismatch in my local environment. Could you please help me figure out what's going on? Thanks in advance. https://colab.research.google.com/drive/1P70lbjJ5h9ICAOkqthUP5-6_jlqPQmkO?usp=sharing
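
If the root cause really is an environment mismatch, one workaround I am considering (a hedged sketch — I have not verified it yet) is to build a Coiled software environment that pins the same Python and package versions as my local client and launch the cluster from it. The environment name and pinned versions below are placeholders, not a known-good combination:

import coiled

# Hypothetical pinned environment; the versions should mirror the local client env
coiled.create_software_environment(
    name="production-pinned",
    conda={
        "channels": ["conda-forge"],
        "dependencies": [
            "python=3.7",  # match the cluster's interpreter, or upgrade both sides together
            "dask",
            "distributed",
            "pandas",
            "numpy=1.20.0",
            "s3fs",
        ],
    },
)

# Start the cluster from the pinned environment so client and workers agree
cluster = coiled.Cluster(
    name="production",
    n_workers=10,
    software="production-pinned",
)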