I have been getting segmentation faults for a directed network of 3.7 million edges and 2.6 million nodes on a machine with 300 cores and 6 TB RAM, using the default parameters (128 dimensions and walk length 80).
After limiting the number of cores available for computation, I noticed I am less likely to get the segmentation fault with a lower core count (e.g. 20) than with a higher one (e.g. 100), even though I leave all other parameters unchanged.
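For reference, this is roughly how I cap the core count before launching the run. It is a sketch: I set the standard threading environment variables before any numeric libraries load, and the `n_cores` value and variable list are my assumptions about what the implementation respects (it may also take its own `workers`-style parameter).

```python
import os

# Cap threads for common backends (OpenMP, OpenBLAS, MKL) before
# importing any numeric libraries, so they pick the limit up at load time.
n_cores = 20  # the "lower number" that avoids the segfault for me
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = str(n_cores)

print(os.environ["OMP_NUM_THREADS"])
```

With this in place only the embedding library's own process-level parallelism (if any) remains uncapped.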
Does anyone know what might be happening?
I actually need to get embeddings for even bigger networks, so if I can't parallelize better it will take forever...