Hi,
I know there are Spark functions such as `map` and `mapPartitions` that execute tasks on different nodes, but I am interested in the basic fundamental concepts: how does dist-keras go about training and predicting to achieve parallelism? In other words, do training and prediction on batches happen on different nodes simultaneously? If so, how are the final results combined?
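To make the question concrete, here is a minimal sketch of the general data-parallel idea I am asking about (this is my own illustration, not dist-keras source code): each "worker" trains on its own data partition, and the driver then combines the per-partition models, e.g. by averaging their weights. The `local_train`/`parallel_train` names and the single-weight linear model are made up for illustration; in Spark the per-partition step would run via something like `rdd.mapPartitions` on different executors.

```python
# Conceptual sketch of data-parallel training (NOT dist-keras code):
# each simulated worker runs SGD on its own partition, then the driver
# averages the resulting weights to combine the results.

def local_train(partition, w, lr=0.01, epochs=200):
    """Plain SGD for the model y = w * x on one partition (one node)."""
    for _ in range(epochs):
        for x, y in partition:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w

def parallel_train(partitions, w0=0.0):
    # In Spark this loop would be mapPartitions running on executors
    # in parallel; here it is sequential just to show the concept.
    local_weights = [local_train(list(p), w0) for p in partitions]
    # Combine step: average the per-partition weights on the driver.
    return sum(local_weights) / len(local_weights)

# Data generated from y = 3 * x, split into two "partitions" (nodes).
partitions = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = parallel_train(partitions)
print(round(w, 2))  # converges near 3.0
```

Is this roughly what dist-keras does under the hood, or does it use a different combination scheme (e.g. a parameter server exchanging gradients during training rather than averaging afterwards)?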