facebookresearch / CrypTen

A framework for Privacy Preserving Machine Learning

Matrix multiplication termination #509

Open imtiyazuddin opened 2 months ago

imtiyazuddin commented 2 months ago

This simple matrix multiplication code is taking forever to run; what could be the issue?

import time
import torch
import crypten

crypten.init()

# pr (shape 50 x 1000) and m (shape 50 x 50176) are defined elsewhere
s = time.time()

sal = torch.Tensor(1000, 50176)
sal = crypten.cryptensor(sal)
i = 1000
j = 50
k = 50176

# accumulate sal[ii][kk] += sum over jj of pr[jj][ii] * m[jj][kk]
for ii in range(i):
    for jj in range(j):
        for kk in range(k):
            sal[ii][kk] += pr[jj][ii] * m[jj][kk]

e = time.time()
print("time taken: ", (e - s))

The dimensions are correct, but it takes too much time and never finishes.

imtiyazuddin commented 2 months ago

Is there any way to parallelize the code to make it run faster?

knottb commented 2 months ago

There are a number of reasons this is slow:

  1. You are doing all of the multiplications sequentially. Each time a value is multiplied here, you have to wait for the communicator to coordinate between processes. Avoid this by doing a single matrix multiplication rather than individual element-wise multiplies (see the sketch after this list):

sal += pr.t().matmul(m)

  2. It's possible that, due to the size of the matrices, iterating through the indices causes cache misses, forcing your processors to repeatedly swap cache lines out to memory.

  3. The computation itself isn't small: 1000 × 50 × 50176 ≈ 2.5 billion multiplies. Doing this sequentially makes roughly 2.5 billion calls to the communicator. Use matmul instead.
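For reference, a minimal sketch of the vectorized version, assuming pr and m are CrypTensors with the shapes implied by the triple loop above (pr is 50 x 1000 and m is 50 x 50176; the random data here is purely illustrative). The transpose makes the shapes line up the same way the loop's indexing does:

import time
import torch
import crypten

crypten.init()

# Hypothetical example data with the shapes implied by the loop above.
pr = crypten.cryptensor(torch.randn(50, 1000))
m = crypten.cryptensor(torch.randn(50, 50176))

s = time.time()
# A single encrypted matmul replaces the ~2.5 billion scalar multiplies,
# so the communicator is invoked a constant number of times instead of
# once per element.
sal = pr.t().matmul(m)   # result shape: (1000, 50176)
e = time.time()
print("time taken: ", (e - s))

This should finish in seconds rather than never terminating, since the parties exchange only a fixed number of messages for the whole matrix product instead of one round per scalar multiply.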