Closed by labarba 4 years ago
As highlighted by the authors in both the manuscript and the Dockerfiles, it is necessary (on the Azure platform, at least) to use the Intel MPI library. This is highly problematic.
The result is a fragile system that is coupled to the quirks of one particular cloud platform.
Similarly, the CUDA containers have compatibility issues. The most obvious is that the container specifies a particular GPU architecture version, which may not map onto the hardware of the host. This is another 'leaky abstraction' issue.
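A minimal hypothetical sketch (not taken from the authors' repository; the image tag, paths, and file names are illustrative) of how a container build pinned to one GPU architecture produces the mismatch described above:

```dockerfile
# Hypothetical Dockerfile fragment: the build bakes in device code
# for a single GPU architecture (compute capability 3.5, Kepler).
FROM nvidia/cuda:10.0-devel

COPY src/ /opt/app/src/
# -gencode emits binary code only for sm_35; on a host with, say, a
# Volta GPU (sm_70) the resulting binary cannot run unless PTX for a
# compatible virtual architecture was also embedded.
RUN nvcc -gencode arch=compute_35,code=sm_35 \
    -o /opt/app/solver /opt/app/src/solver.cu
```

A common mitigation is to compile for several `sm_XX` targets and include a PTX fallback (e.g. `code=compute_35` alongside `code=sm_35`), at the cost of longer builds and a larger image; the abstraction still leaks, but the container runs on a wider range of hosts.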
A note was added to the `docker` sub-folder saying that the user who downloads the Docker image agrees to the Intel license terms. The same note was also added to the description of the Docker images on DockerHub.
…for `ivygpu` nodes, and 128GB of 1866MHz DDR3 ECC Registered DRAM for `short` nodes. However, we do not have such information for Azure. We contacted Microsoft Azure about that, and they told us there is no publicly released information regarding the memory speed and channels. Right now, only the CPU information is publicly available (already included in Table 1 of the manuscript).
Citation
Portions of the review report are quoted here by permission granted on the original under a CC-BY license.