cyclops-community / ctf

Cyclops Tensor Framework: parallel arithmetic on multidimensional arrays

issues using sparse file io to load tensors #130

Closed: rohany closed this issue 2 years ago

rohany commented 2 years ago

I'm trying to use CTF with sparse tensors and running into some problems. The code below attempts to perform an SpTTV operation (a(i) = B(i, j, k) * c(k)), where B is a sparse tensor loaded from a file.

#include <ctf.hpp>
#include <chrono>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
using namespace CTF;

void spttv(int nIter, int warmup, std::string filename, World& dw) {
  // TODO (rohany): I don't know what's the best way to get the
  //  dimensions of the tensor (which is encoded in the file already...).
  int x = 2902330;
  int y = 2143368;
  int z = 25495389;
  int lens[] = {x, y, z};
  Tensor<double> B(3, true /* is_sparse */, lens, dw);
  Vector<double> a(x, dw);
  Vector<double> c(z, dw);

  auto compute = [&]() {
    a["i"] = B["ijk"] * c["k"];
  };

  // Attempt to read in B...
  B.read_sparse_from_file(filename.c_str());
  std::cout << B.nnz_tot << std::endl;

  for (int i = 0; i < warmup; i++) { compute(); }
  auto start = std::chrono::high_resolution_clock::now();
  for (int i = 0; i < nIter; i++) { compute(); }
  auto end = std::chrono::high_resolution_clock::now();
  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
  if (dw.rank == 0) {
    std::cout << "Average execution time: " << (double(ms) / double(nIter)) << "ms." << std::endl;
  }
}

int main(int argc, char** argv) {
  int nIter = -1, warmup = -1;
  std::string filename;
  for (int i = 1; i < argc; i++) {
    if (strcmp(argv[i], "-n") == 0) {
      nIter = atoi(argv[++i]);
      continue;
    }
    if (strcmp(argv[i], "-warmup") == 0) {
      warmup = atoi(argv[++i]);
      continue;
    }
    if (strcmp(argv[i], "-tensor") == 0) {
      filename = std::string(argv[++i]);
      continue;
    }
  }

  if (nIter == -1 || warmup == -1 || filename.empty()) {
    std::cout << "provide all inputs." << std::endl;
    return -1;
  }

  MPI_Init(&argc, &argv);
  int np;
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  {
    World dw;
    spttv(nIter, warmup, filename, dw);
  }
  MPI_Finalize();
  return 0;
}

I'm running this code with the nell-1 tensor from the FROSTT dataset, found here: http://frostt.io/tensors/nell-1/. I run the code with -n 1 -warmup 0 -tensor /path/to/nell-1.tns.

No matter what I do, it doesn't seem like B ever gets any data loaded into it (B.print() shows nothing, and B.nnz_tot is zero). If I run the code with 2 ranks per node, I get a segfault.

To make matters more confusing, when I run on a small order-2 tensor, the file IO seems to work (though it still segfaults with multiple ranks).

raghavendrak commented 2 years ago

CTF encodes a tensor index as a single int64_t (a global linearized offset). The product of the nell-1 tensor's dimensions exceeds INT64_MAX, hence the segmentation fault.
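
For concreteness, the product of those dimensions is 2902330 × 2143368 × 25495389 ≈ 1.6 × 10^20, while INT64_MAX ≈ 9.2 × 10^18. A minimal standalone check, illustrative only, reusing the dimensions hard-coded in the snippet above:

#include <cstdint>
#include <cstdio>

int main() {
  // nell-1 dimensions from the snippet above; double precision suffices
  // here, since the product exceeds INT64_MAX by a factor of roughly 17
  double prod = 2902330.0 * 2143368.0 * 25495389.0;  // ~1.6e20
  std::printf("dims product ~ %e, INT64_MAX ~ %e\n", prod, (double)INT64_MAX);
  return 0;
}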

rohany commented 2 years ago

Thanks for the response @raghavendrak! Is there anything I can do to allow loading these larger tensors? I also have a usability question:

To load tensor data into CTF, it seems I must know the dimensions of the tensor before reading it from disk, which feels a bit backwards since the file already encodes this information. Is there some utility in CTF to postpone declaring the tensor dimensions until after the read, or is this not possible?

Finally, I'm using this slightly edited version of the snippet above (shown below) to load this matrix market file. When I print the total number of non-zeros loaded, it is not even close to the count in the file's description (45204427). Am I using the library incorrectly?

#include <ctf.hpp>
#include <chrono>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
using namespace CTF;

void spmv(int nIter, int warmup, std::string filename, World& dw) {
  int x = 1102824;
  int y = x;
  int lens[] = {x, y};
  Tensor<double> B(2, true /* is_sparse */, lens, dw);
  Vector<double> a(x, dw);
  Vector<double> c(y, dw);

  auto compute = [&]() {
    a["i"] = B["ij"] * c["j"];
  };

  // Attempt to read in B...
  B.read_sparse_from_file(filename.c_str());
  std::cout << B.nnz_tot << std::endl;

  for (int i = 0; i < warmup; i++) { compute(); }
  auto start = std::chrono::high_resolution_clock::now();
  for (int i = 0; i < nIter; i++) { compute(); }
  auto end = std::chrono::high_resolution_clock::now();
  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
  if (dw.rank == 0) {
    std::cout << "Average execution time: " << (double(ms) / double(nIter)) << "ms." << std::endl;
  }
}

int main(int argc, char** argv) {
  int nIter = -1, warmup = -1;
  std::string filename;
  for (int i = 1; i < argc; i++) {
    if (strcmp(argv[i], "-n") == 0) {
      nIter = atoi(argv[++i]);
      continue;
    }
    if (strcmp(argv[i], "-warmup") == 0) {
      warmup = atoi(argv[++i]);
      continue;
    }
    if (strcmp(argv[i], "-tensor") == 0) {
      filename = std::string(argv[++i]);
      continue;
    }
  }

  if (nIter == -1 || warmup == -1 || filename.empty()) {
    std::cout << "provide all inputs." << std::endl;
    return -1;
  }

  MPI_Init(&argc, &argv);
  int np;
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  {
    World dw;
    spmv(nIter, warmup, filename, dw);
  }
  MPI_Finalize();
  return 0;
}

raghavendrak commented 2 years ago

You might want to consider batching the tensor/computation, but this would require adding custom functions to your code (to preprocess the data and handle the batched computation). I am not aware of an out-of-the-box solution in CTF to load a tensor without specifying the dimensions. Maybe a wrapper/preprocessing step can be added in your code? Regarding the matrix market file you pointed to:

rohany commented 2 years ago

I am not aware of an out-of-the-box solution in CTF to load a tensor without specifying the dimensions. Maybe a wrapper/preprocessing step can be added in your code?

I could do this, but it would be great if CTF exposed an interface that takes an std::istream (or similar) rather than a path to a file. Without such an interface, I would have to duplicate relatively large matrix market files just to change the header, remove comments, etc.
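
For what it's worth, the preprocessing wrapper could look roughly like the sketch below. This is only a sketch, not CTF API: load_tns is a hypothetical helper that assumes the FROSTT .tns format (one nonzero per line, whitespace-separated 1-based indices followed by a value, with optional '#' comment lines), and it pays for the convenience by scanning the file once per rank before CTF reads it again.

#include <ctf.hpp>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
using namespace CTF;

// Hypothetical wrapper (not part of CTF): infer each mode's length as the
// maximum 1-based index seen in the file, then construct the sparse tensor
// and let CTF parse the same file.
Tensor<double> load_tns(std::string const& filename, int order, World& dw) {
  std::vector<int> lens(order, 0);
  std::ifstream f(filename);
  std::string line;
  while (std::getline(f, line)) {
    if (line.empty() || line[0] == '#') continue;  // skip comments/blanks
    std::istringstream ss(line);
    long long idx;
    for (int m = 0; m < order && (ss >> idx); m++) {
      if (idx > lens[m]) lens[m] = (int)idx;
    }
  }
  Tensor<double> B(order, true /* is_sparse */, lens.data(), dw);
  B.read_sparse_from_file(filename.c_str());
  return B;
}

The hard-coded dimensions in the first snippet would then become Tensor<double> B = load_tns(filename, 3, dw); (for nell-1 itself, the int64_t index limitation above would of course still apply).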