diegobersanetti opened this issue 3 years ago
Hi @diegobersanetti,
Is this happening with a certain pickle file or with multiple ones? If it's only one file, can you please upload it here so we can try to track down this issue?
@steff456, I think the problem is that a variable is too large, so we can't retrieve its value in order to save it. So I think the fix is to catch that `TimeoutError` and show a message if that's the case.
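The fix described above could be sketched roughly as follows. This is only an illustration, not Spyder's actual API: `get_value` stands in for the kernel round-trip, and the 30-second limit mirrors the timeout mentioned later in this thread.

```python
# Hypothetical sketch: retrieve a variable's value with a timeout and
# report failure instead of crashing. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def get_value(name, namespace):
    # Stand-in for the (potentially slow) kernel round-trip.
    return namespace[name]

def save_variable(name, namespace, timeout=30):
    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(get_value, name, namespace)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            # Instead of letting the error propagate and crash the save,
            # tell the user which variable could not be retrieved.
            print(f"Could not save '{name}': retrieving its value "
                  f"timed out after {timeout} seconds.")
            return None
```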
Hello, I ran some tests and the very same routine works with smaller data sets, so I guess it could be an out-of-memory issue or something similar. If it would help, I can try to provide more data about it.
> so I guess it could be an out-of-memory issue or something similar to that
Thanks @diegobersanetti, that's exactly my guess.
Right now we wait 30 seconds to retrieve a variable from the kernel and then give up; retrieving each variable's value is required to save your current workspace.
So we have two options to solve this problem:
What option would you prefer?
I think we can't do this without a timeout because this operation blocks Spyder, right @impact27?
It does, but it doesn't have to. We could just use a callback in `save_namespace` instead; same thing for `load_data`. `get_value` should probably be left alone.
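The callback idea can be illustrated with a minimal sketch: instead of blocking the caller until the kernel replies (or a timeout fires), the request returns immediately and the file is written from a callback when the reply arrives. Function names here are hypothetical stand-ins for the real kernel calls.

```python
# Sketch of a non-blocking save: the kernel request runs on a worker
# thread and invokes a callback with the namespace when it is ready.
import threading

results = {}

def request_namespace(on_reply):
    """Simulate an asynchronous kernel request on a worker thread."""
    def worker():
        namespace = {"a": 1, "b": 2}  # stand-in for the kernel's reply
        on_reply(namespace)
    t = threading.Thread(target=worker)
    t.start()
    return t

def save_namespace(filename):
    """Request the namespace and save it from a callback, without blocking."""
    def on_reply(namespace):
        # The file would be written here, off the UI thread.
        results[filename] = sorted(namespace)
    return request_namespace(on_reply)  # returns immediately

t = save_namespace("workspace.spydata")
t.join()  # only for this demo; the real UI would not wait
```

The design difference is that no timeout is needed at all: the UI stays responsive however long the retrieval takes.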
Description
What steps will reproduce the problem?
The Spyder kernel crashes and restarts while trying to save the workspace to a .spydata file. This always happens at the same point (given the same workspace).
The .pickle file is empty; the 0000 .npy file is 144 bytes, and the crash happens while finalizing the 0008 file (47.2/48 MB).
Platform: Ubuntu Linux 20.04.2, conda 4.9.2, Spyder 4.2.1, Python 3.7.9
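For context, the numbered `.npy` files and the pickle mentioned above come from the way the workspace is written out: some values get their own numbered file while the rest go into a single pickle. Below is a rough stdlib-only sketch of that layout, not Spyder's actual implementation (Spyder stores NumPy arrays in the `.npy` files; plain `bytes` are used here to keep the example dependency-free).

```python
# Assumed layout: large binary values are written to numbered files,
# everything else is pickled together. Illustrative only.
import os, pickle, tempfile

def save_workspace(namespace, folder):
    plain = {}
    i = 0
    for name, value in sorted(namespace.items()):
        if isinstance(value, (bytes, bytearray)):
            # One numbered file per large value (Spyder: one .npy per array).
            with open(os.path.join(folder, f"{i:04d}.npy"), "wb") as f:
                f.write(value)
            i += 1
        else:
            plain[name] = value
    # Everything else goes into a single pickle.
    with open(os.path.join(folder, "data.pickle"), "wb") as f:
        pickle.dump(plain, f)

folder = tempfile.mkdtemp()
save_workspace({"big": bytes(144), "n": 42}, folder)
print(sorted(os.listdir(folder)))  # ['0000.npy', 'data.pickle']
```

This matches the symptom reported: a crash mid-way leaves the pickle empty and the last numbered file truncated.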
Traceback
Versions
Dependencies