So Binder has a strict memory limit of 2 GB. Even after saving the data into Parquet files, the filling method uses a lot more memory, which crashes the kernel. Any suggestions? One possibility: save the filled dataframe to the GitLab repo and load that on Binder.
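A minimal sketch of that workflow, assuming pandas is in use; the file names and the `ffill()` call are placeholders for the actual data and filling method:

```python
import pandas as pd

# Run locally (not on Binder): do the memory-heavy filling step once
# and persist the result, so Binder only has to read the finished file.
# "raw.parquet", "filled.parquet", and ffill() are hypothetical stand-ins.
df = pd.read_parquet("raw.parquet")
df_filled = df.ffill()  # replace with whatever filling method is actually used
df_filled.to_parquet("filled.parquet", index=False)

# On Binder: commit filled.parquet to the GitLab repo and just load it,
# skipping the filling step entirely.
df_filled = pd.read_parquet("filled.parquet")
```

That keeps the Binder session to a single `read_parquet` call, which should stay well under the 2 GB limit as long as the filled file itself fits in memory.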