Open eromoe opened 6 years ago
I changed to use put, but the process still gets killed.
I switched to https://hdfscli.readthedocs.io/ and the error is gone, though I have to save the file locally first and then upload it to HDFS.
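For reference, a minimal sketch of that save-locally-then-upload workaround with the hdfs (hdfscli) package could look like the following; the namenode URL, user, and paths are placeholders, not values from this report:

```python
# Sketch of the save-locally-then-upload workaround using the `hdfs` (hdfscli) package.
# The namenode URL, user and paths below are placeholders.
import pandas as pd
from hdfs import InsecureClient

client = InsecureClient('http://namenode:50070', user='hadoop')

# Small stand-in dataframe; the real one has ~32M rows.
df = pd.DataFrame({'a': range(10), 'b': range(10), 'c': range(10)})

# Write the dataframe to a local file first ...
df.to_csv('/tmp/df.csv', index=False)

# ... then upload the finished file to HDFS.
client.upload('/data/df.csv', '/tmp/df.csv', overwrite=True)
```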
It is hard to diagnose what "Killed" might mean - presumably some buffer overrun in the C layer. It would be interesting to know the size of the data you are trying to write. You might want to try arrow's hdfs interface, which seems to be less error-prone.
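If you want to try that, a minimal sketch of writing through pyarrow's hdfs interface looks roughly like the block below; the host, port, and target path are assumptions, not values from your setup:

```python
# Minimal sketch of writing via pyarrow's (legacy) hdfs interface.
# Host, port and path are placeholders.
import pandas as pd
import pyarrow as pa

fs = pa.hdfs.connect(host='namenode', port=8020)

# Small stand-in dataframe for illustration.
df = pd.DataFrame({'a': range(10), 'b': range(10), 'c': range(10)})

with fs.open('/data/df.csv', 'wb') as f:
    f.write(df.to_csv(index=False).encode('utf-8'))
```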
Can't hdfs3 catch such errors?
This message is not being created by hdfs3, but by the OS, when the process does something illegal. The fault will be in the C layer (python produces nicer messages), so there is no opportunity for python to catch it. You could perhaps run the process under gdb to find out where it fails, but such investigations are very hard.
My code is like below.
I didn't use dask for now, it is pandas. Here df is [31909929 rows x 3 columns]. I found that if I write only 1000 rows it works, but it prints

Killed

when I write the whole df.
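Since writing 1000 rows works, one thing I may try is writing the dataframe in row chunks to a single open hdfs3 file. This is only a sketch; the host, port, path, and chunk size are placeholders, not my actual code:

```python
# Sketch: write the dataframe to HDFS in row chunks with hdfs3.
# Host, port, path and chunk size are placeholders.
import pandas as pd
from hdfs3 import HDFileSystem

hdfs = HDFileSystem(host='namenode', port=8020)

# Small stand-in dataframe; the real one has ~32M rows.
df = pd.DataFrame({'a': range(10), 'b': range(10), 'c': range(10)})
chunk_size = 1000

with hdfs.open('/data/df.csv', 'wb') as f:
    for start in range(0, len(df), chunk_size):
        chunk = df.iloc[start:start + chunk_size]
        # Only emit the CSV header for the first chunk.
        f.write(chunk.to_csv(index=False, header=(start == 0)).encode('utf-8'))
```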