Open oguzhannysr opened 7 months ago
You can try to restart the Dask scheduler.
@AlexeyPechnikov How do I do that? Should I restart the runtime?
To restart Dask without losing your current state, re-execute this cell:
# cleanup for repeatable runs
from dask.distributed import Client

if 'client' in globals():
    client.close()
client = Client()
client
When I got the error I mentioned above, I tried your suggestion and ran it again, but it gave the same error. I don't understand why this section, which has always worked, now gives errors. Can you help?
INFO:distributed.scheduler:Receive client connection: Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55322
INFO:distributed.scheduler:Receive client connection: Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55358
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4dc0-0397-11ef-b3b5-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55346
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4587-0397-11ef-b3ae-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55338
INFO:distributed.core:Event loop was unresponsive in Nanny for 8.69s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 8.69s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 8.69s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 8.84s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 8.84s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.39s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.38s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 16.38s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.38s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 23.13s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 7.00s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 7.03s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 7.06s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 7.06s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.51s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 16.54s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.54s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.54s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.49s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.77s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 16.71s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.76s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.76s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.76s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 7.66s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 7.65s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 15.72s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Connection to tcp://127.0.0.1:55358 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Nanny for 30.65s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 30.65s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 24.25s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 24.25s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
CRITICAL:distributed.scheduler:Closed comm
The above exception was the direct cause of the following exception:
CancelledError Traceback (most recent call last)
If it still does not work for you, you need to check your installed Python libraries. The tested libraries are listed in the PyGMTSAR Dockerfile (https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile) and can be installed as follows:
pip3 install \
adjustText==1.0.4 \
asf_search==7.0.4 \
dask==2024.1.1 \
distributed==2024.1.1 \
geopandas==0.14.3 \
h5netcdf==1.3.0 \
h5py==3.10.0 \
imageio==2.31.5 \
ipywidgets==8.1.1 \
joblib==1.3.2 \
matplotlib==3.8.0 \
nc-time-axis==1.4.1 \
numba==0.57.1 \
numpy==1.24.4 \
pandas==2.2.1 \
remotezip==0.12.2 \
rioxarray==0.15.1 \
scikit-learn==1.3.1 \
scipy==1.11.4 \
seaborn==0.13.0 \
shapely==2.0.3 \
statsmodels==0.14.0 \
tqdm==4.66.1 \
xarray==2024.2.0 \
xmltodict==0.13.0 \
pygmtsar
After that, restart your Jupyter kernel and reprocess the notebook to check.
I'm working on Colab; should I try this anyway?
No, on Google Colab just check that you are using a "High-RAM" instance.
I am using high RAM; should I turn off this feature?
I turned off the high-RAM feature, but the error still persists.
High RAM is better; no need to disable it. OK, you can also try to export single-band rasters (for one date).
This was working very well; why is it not working now? Also, how can I save them one by one from xarray?
Maybe changes you made in your notebook or updates to Google Colab's installed libraries could be causing issues with reproducibility. For consistent execution, you might want to check my examples, which are updated in response to changes in Google Colab, or use the PyGMTSAR Docker image. To export a single-date raster as a single-band GeoTIFF, use disp_subset[0], disp_subset.isel(date=0), or disp_subset.sel(date=...). And pay attention to the PyGMTSAR export functions, which are well-optimized for most use cases.
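For example, a minimal sketch of those three selections, assuming the disp_subset cube from the discussion above has a date dimension (the date label is a placeholder):

# three equivalent ways to pick a single date out of the displacement cube
one_date = disp_subset[0]                      # first date, by position
one_date = disp_subset.isel(date=0)            # first date, by integer index
one_date = disp_subset.sel(date='2024-01-01')  # by date label (placeholder value)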
Thanks, what is your most current Colab notebook?
All PyGMTSAR public Google Colab notebooks are up-to-date.
Alexey, I examined your notebooks, but I could not find the section where you save the displacement maps the way I do. How can I do that? Or am I getting an error due to an update to pygmtsar? If you deem it appropriate, I will send access to my Colab notebook to your e-mail address.
I cannot debug and support your own code for free. Use the PyGMTSAR export functions as I mentioned above (see https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/pygmtsar/pygmtsar/Stack_export.py), or you need to pay for my work on your special requirements. But what is your actual reason for reinventing the GeoTIFF export function already available in PyGMTSAR?
Thank you Alexey, but I'm stuck like this again :(
@AlexeyPechnikov, I got this error while trying a different notebook; how can I get past it?
It means your wavelength choice does not make sense. The filter size spans thousands of kilometers, even though the full size of a Sentinel-1 scene is much smaller.
@AlexeyPechnikov It is very important for me to get past this problem.
The progress indicator is blue, so it is currently calculating. It's possible that your disp_sbas_finish variable is defined in a way that requires too much RAM for processing. Be aware that exporting to NetCDF produces a single large file, which can be quite huge. As discussed above, exporting to GeoTIFF generates a set of files and can be much more efficient. You don't need to switch between functions; stick with the selected one and adjust your code if problems arise. Also, restarting the Dask scheduler can help resolve issues that occurred before the execution of the selected cell.
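For instance, a minimal sketch of the per-date GeoTIFF route using the same rioxarray pattern that appears later in this thread (this is plain user code, not the built-in PyGMTSAR export function; variable and file names are illustrative):

import rioxarray  # registers the .rio accessor on xarray objects

# write one single-band GeoTIFF per date instead of materializing the whole cube at once
for date in disp_sbas_finish.date.values:
    frame = disp_sbas_finish.sel(date=date)
    frame = frame.rio.write_crs('EPSG:4326', inplace=False)
    frame.rio.set_spatial_dims('lon', 'lat', inplace=True)
    frame.rio.to_raster(f'disp_{str(date)[:10]}.tif', driver='GTiff')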
I tried using the GeoTIFF export you mentioned above to save them one by one as per-date TIFF files, but I still got the same error.
There is no error in your recent screenshot; it is working.
CancelledError Traceback (most recent call last)
17 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
   2231             else:
   2232                 raise exception.with_traceback(traceback)
-> 2233             raise exc
   2234         if errors == "skip":
   2235             bad_keys.add(key)
CancelledError: ('getitem-08f5ccff885124bb3aed4c18f43f0f97', 48, 0, 0)
@AlexeyPechnikov However, my analysis covered a very small area over just one year. I don't understand why the dimensions are a problem, because I had been running this now-problematic code for two months.
There are no errors in the log, and the processing can continue. The 'CancelledError' simply indicates that one of the tasks was cancelled and will be re-executed automatically. The message 'INFO:distributed.core:Event loop was unresponsive in Nanny for 32.10s.' indicates that the processing tasks are large and require substantial RAM. You should check the sizes of your processing grids. Also, subset.rio.to_raster(filename, driver='GTiff', crs='EPSG:4326') is not part of the PyGMTSAR code but your own, and it requires the complete data cube to be materialized, which is impractical for large grids. Use PyGMTSAR functions or your own well-optimized code, because straightforward solutions do not work for large datasets. You could check how the 'Lake Sarez Landslides, Tajikistan' and 'Golden Valley, CA' examples process large data stacks efficiently.
This is the data I want to save.
As I mentioned above, you start memory-intensive computations with disp_sbas_finish. PyGMTSAR utilizes delayed-computation principles, and your variable may not be actual output data but rather a recipe to compute it. Stacking a lot of computation on data without materializing it only works for small datasets. Try materializing the data on disk first and then export it afterward.
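A minimal sketch of that approach with plain xarray (not a PyGMTSAR helper; the file name is illustrative): write the computed cube to disk once, then reopen and export from the materialized copy.

import xarray as xr

# materialize the lazily-computed cube to disk once (this triggers the computation)
disp_sbas_finish.to_netcdf('disp_sbas_finish.nc', engine='h5netcdf')

# reopen the materialized data; later exports read from disk instead of
# re-running the whole processing graph for every output raster
disp_materialized = xr.open_dataarray('disp_sbas_finish.nc', engine='h5netcdf')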
@AlexeyPechnikov Alexey, thank you very much for your help; the notebooks you mentioned worked faster and better. Now I have a different question. I decomposed the LOS data into vertical and east-west components and wrote the results as GeoTIFF. However, it outputs the results per pair. My expectation was that it would be like the LOS outputs on the left in the screenshot, that is, a single .tif file for each date.
To compute the displacements between scenes (dates), you need to apply the least-squares solution:
disp = sbas.lstsq(unwrap.phase - trend - turbo, corr)
It seems you have an inconsistency between your disp_sbas_finish defined for interferograms and the least-squares processed LOS results.
@AlexeyPechnikov Should I apply this to the LOS values? In the image, disp_sbas_finish is the deformation in the LOS direction; ew is east-west and ud is vertical deformation. Should I apply the formula you mentioned to disp_sbas_finish, i.e. the deformation in the LOS direction, or to ew and ud?
You have already done this step in the notebook; do I need to do it again?
Commonly, we apply least-squares processing to phase values and convert them to LOS, east-west, and vertical displacements later. In the notebook, the least-squares processing and the LOS projection calculation are merged into a single command; you need to split them.
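A minimal sketch of that split, based on the lstsq call quoted above (the los_displacement_mm helper name follows the public PyGMTSAR notebooks and is assumed here; turbo is omitted if no turbulence correction was computed):

# step 1: least-squares solution on detrended phase (per-date result)
disp_phase = sbas.lstsq(unwrap.phase - trend, corr)

# step 2: convert the per-date phase solution to LOS displacement afterwards
# (helper name assumed from the public PyGMTSAR notebooks)
disp_los = sbas.los_displacement_mm(disp_phase)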
I didn't fully understand. Also, I couldn't find the turbo variable in the notebook; where do I get it from?
Do not apply the east-west or vertical projection to your disp_sbas variable because it is already a LOS projection. See the 'CENTRAL Türkiye Mw 7.8 & 7.5 Earthquakes Co-Seismic Interferogram, 2023' example for the functions' usage. If you don't calculate the turbulent atmosphere correction turbo, just omit it.
I opened the notebook you mentioned and changed only the initial variables for my own area of interest, but even though I tried twice, I always get stuck here. What should I do?
Change the 300,000-meter (300 km) filter size to a reasonable value.
What does this value represent? If I tune it myself to fit the array size of 79, will it negatively affect the results?
ValueError: The overlapping depth 507 is larger than your array 79.
@AlexeyPechnikov, Is it right for me to do this?
@AlexeyPechnikov, Do you see any abnormalities in the images?
Also, how can I calculate the turbo variable you mentioned above? Can you show me an example line?
Your interferograms look affected by strong atmospheric noise. Try to clean them up (detrend them). I don't know your ground truth, and it could potentially be valid surface deformation, but I'm doubtful.
I think I haven't shared public examples with turbulence correction. You can use Gaussian filtering as in the 'Imperial Valley SBAS Analysis, 2015' notebook.
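As a generic illustration of that idea only (this is not the PyGMTSAR API; the 'Imperial Valley SBAS Analysis, 2015' notebook uses the stack's own Gaussian filter helpers): estimate a long-wavelength, atmosphere-like component from each grid and subtract it, keeping the shorter-wavelength signal.

import numpy as np
from scipy.ndimage import gaussian_filter

def remove_long_wavelength(phase, sigma_pixels=50):
    """Estimate a long-wavelength (atmosphere-like) component by Gaussian smoothing
    and subtract it; sigma_pixels is an illustrative value that should correspond
    to the chosen filter wavelength expressed in pixels."""
    # NaN-aware (normalized) smoothing so masked areas do not spread into valid pixels
    filled = np.where(np.isnan(phase), 0.0, phase)
    weight = np.where(np.isnan(phase), 0.0, 1.0)
    smooth = gaussian_filter(filled, sigma_pixels) / np.maximum(gaussian_filter(weight, sigma_pixels), 1e-6)
    return phase - smooth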
@AlexeyPechnikov, Even though I change the wavelength, the expected array size also changes. How can I choose an appropriate wavelength?
For your area with a side of about 10 km, you selected a filter wavelength that is too long. What is the reason? You should compare this filter wavelength and area size with the example in the 'Imperial Valley SBAS Analysis, 2015' notebook. And, by the way, the ValueError in your screenshot contains the exact information about the maximum possible wavelength for your array size.
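For background, the error comes from Dask's overlapping computation: a wide Gaussian filter needs a halo (overlap depth, in pixels) around each chunk, and that depth cannot exceed the array size. A generic Dask sketch, not PyGMTSAR code, with numbers mirroring the 507 vs. 79 values from the error above:

import numpy as np
import dask.array as da

# a 79x79 array, like the one reported in the ValueError above
grid = da.from_array(np.zeros((79, 79)), chunks=79)

# a filter wide enough to need a 507-pixel halo cannot fit into a 79-pixel array,
# so Dask raises a ValueError like the one quoted above
smoothed = grid.map_overlap(lambda block: block, depth=507, boundary='reflect')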
@AlexeyPechnikov, Hello, I was saving my results as GeoTIFF with the following snippet. While it was working 2-3 weeks ago, now I am getting errors and cannot export the results. How can I solve this?
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.99s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.99s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.91s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 10.00s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 10.00s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 11.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 11.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 11.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 10.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 10.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.81s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 9.81s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.79s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.79s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.79s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 11.17s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 11.18s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.98s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.98s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.91s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 9.91s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.07s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 10.30s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 10.51s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 10.50s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 10.50s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.94s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.94s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.73s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.73s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 20.85s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.80s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.79s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.98s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.99s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 20.15s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.34s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.15s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
CRITICAL:distributed.scheduler:Closed comm while trying to write [{'op': 'task-erred', 'key': ('getitem-2f2a6a160e5cca29371a725244bf31c5', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd4b0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd3f0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:46968> already closed.
CRITICAL:distributed.scheduler:Closed comm while trying to write [{'op': 'task-erred', 'key': ('getitem-2f2a6a160e5cca29371a725244bf31c5', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd6c0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd720>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:46968> already closed.
INFO:distributed.core:Connection to tcp://127.0.0.1:46968 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.scheduler:Receive client connection: Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42562
INFO:distributed.scheduler:Receive client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42564
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.29s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Connection to tcp://127.0.0.1:47004 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Scheduler for 19.16s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.scheduler:Close client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.20s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:47004>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 262, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.scheduler:Receive client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:36614
INFO:distributed.scheduler:Close client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Connection to tcp://127.0.0.1:46982 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c
CancelledError                            Traceback (most recent call last)
in <cell line: 3>()
1 disp_subsett2 = disp_sbas_finish.rio.write_crs("epsg:4326", inplace=False)
2 disp_subsett2.rio.set_spatial_dims('lon', 'lat', inplace=True)
----> 3 disp_subsett2.rio.to_raster(f'disp_sbas.tiff')
12 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
   2231             else:
   2232                 raise exception.with_traceback(traceback)
-> 2233             raise exc
   2234         if errors == "skip":
   2235             bad_keys.add(key)
CancelledError: ('getitem-7bc26de7f9999889faa36778be8593d0', 0, 0)