feiwang3311 / Lantern


Add explicit tensor data malloc via `NewTensor`. #21

Closed · dan-zheng closed this 5 years ago

dan-zheng commented 5 years ago

TODO: Investigate TensorApply and TensorUpdate for GPU.

Addresses feedback from @TiarkRompf.

dan-zheng commented 5 years ago

Merging to unblock progress, happy to address feedback later. I hope this is a step in the right direction.

TiarkRompf commented 5 years ago

I anticipate that we'll need to generate custom CUDA at some point, but for now we should do something simpler:

  1. only access GPU data in bulk, never element-wise -- this removes the need for Apply and Update
  2. if data needs to be shipped to the CPU, make it an explicit transfer operation (array.transferToCPU()).

For this, we don't need IR nodes for TensorNew, TensorApply, TensorUpdate; a simple unchecked("..") should suffice. I'd also prefer to call it something like NewGPUArray, since it returns an Array, not a Tensor object.
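For concreteness, something along these lines should work (a sketch only -- the helper name myGpuMalloc and the exact signatures are placeholders, not actual Lantern code):

// GPU allocation through unchecked(...), with no dedicated IR node.
// `myGpuMalloc` is an assumed C helper wrapping cudaMalloc in the generated code.
def NewGPUArray(size: Rep[Int]): Rep[Array[Float]] =
  unchecked[Array[Float]]("(float*)myGpuMalloc(", size, " * sizeof(float))")

// Bulk transfer back to the host: again a single unchecked call emitting cudaMemcpy.
def transferToCPU(device: Rep[Array[Float]], host: Rep[Array[Float]], size: Rep[Int]): Rep[Unit] =
  unchecked[Unit]("cudaMemcpy(", host, ", ", device, ", ",
    size, " * sizeof(float), cudaMemcpyDeviceToHost)")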

I do expect that we'll evolve this design but I'd like to do the simplest thing that enables us to run benchmarks.

dan-zheng commented 5 years ago

@TiarkRompf Are you suggesting the following design?

Example programs:

def snippet() {
  // backend: BackendCublas

  // Allocate tensor on CPU.
  var x = Tensor.ones(2, 2)
  // Doing `x+x` right now would result in an error because `x` lives on CPU
  // but codegen produces cuBLAS ops.

  // Transfer tensor to GPU.
  x = x.transferToGPU()
  // Now, `x+x` is valid.
  val y = x + x
}

def snippet2() {
  // backend: BackendCublas
  val x = Tensor.ones(2, 2)
  // We can use `withGPU` to eliminate manual transfer operations.
  // However, doing `x+x` here outside `withGPU` still produces an error.
  withGPU(x) {
    val y = x + x
  }
}

I propose an alternative design (nearly implemented):

Example program:

def snippet() {
  // backend: BackendCublas

  // Allocate tensor on GPU. Doing `x+x` is valid right now.
  val x = Tensor.ones(2, 2)
  val y = x + x

  withCPU(y) {
    // Transfer `y` to CPU and perform ops (`+` and `print`).
    val z = y + y
    z.print()
  }
}

TensorFlow's programming model is similar to the second design:

with tf.device('/device:GPU:2'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)

PyTorch's programming model is similar to the first design, but it avoids the potential error in the example because ops are dispatched based on the device placement of their tensor operands. This is actually a more robust model than either of the two designs.

x = Tensor(...) # CPU
x + x # Performed on CPU.
x = x.cuda() # GPU
x + x # Performed on GPU.

I suppose the differences between the two designs don't matter for benchmarks, because most tensor programs start on the CPU (for data loading, etc.) and then compute on the GPU.

The two designs differ only when the backend is not the CPU. A small limitation of the first design is that it's not possible to allocate memory directly on the GPU: a copy from the CPU is always required, even when the data is only ever used on the GPU.
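To make that concrete, in the first design a tensor that is only ever used on the GPU still pays for a host allocation plus a host-to-device copy (sketch, using the same hypothetical API as above):

// First design: data always originates on the CPU.
var a = Tensor.ones(2, 2)     // host allocation + initialization
a = a.transferToGPU()         // extra host-to-device copy, even if `a` is never read on the CPU again

// Second design: the cuBLAS backend allocates directly on the device.
val b = Tensor.ones(2, 2)     // device allocation, no host copy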

I prefer the second design because there are fewer holes in the abstraction. If you want to push for the first design for simplicity/pragmatism, I'm fine with that, and will implement it. Please let me know.

Edit: there's a critical implementation difficulty in the second design, so I'll abandon it and pivot to the first design.

TiarkRompf commented 5 years ago

> @TiarkRompf Are you suggesting the following design?
>
>   • Do not make tensor data allocation backend-dependent. Always allocate tensors on CPU.
>   • Implement transfer operations to copy tensors between backends.

Slightly different -- what I'm suggesting is to think in two layers, Arrays and Tensors: the backend owns raw Array allocation and bulk transfers, and the Tensor layer is built on top of that, so Tensor operations go through whichever backend is active.

Does that make sense?

I do think the design with device scopes (withCPU and withGPU) has merit, but it's more complex, and the additional complexity doesn't seem to be on the critical path for benchmarks right now.

dan-zheng commented 5 years ago

Aha, that makes sense!

Previously, I had a misunderstanding: I thought tensor math operations and allocation operations were somehow distinct. In reality, they're not: both should be backend-defined. Tensor constructors like Tensor.fill and Tensor.fromData should call backend.mallocArray, which allocates memory on either the CPU or the GPU.
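A minimal sketch of that layering (signatures simplified and assumed, not the exact Lantern code):

// Array layer: the backend owns raw allocation, so where data lives is a backend decision.
trait Backend {
  def mallocArray(size: Rep[Int]): Rep[Array[Float]]
}

object BackendCPU extends Backend {
  def mallocArray(size: Rep[Int]): Rep[Array[Float]] = NewArray[Float](size)
}

object BackendCublas extends Backend {
  def mallocArray(size: Rep[Int]): Rep[Array[Float]] =
    unchecked[Array[Float]]("(float*)myGpuMalloc(", size, " * sizeof(float))")
}

// Tensor layer: constructors like Tensor.fill / Tensor.fromData only ever call
// backend.mallocArray, so the same code allocates on the CPU or the GPU transparently.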

The complexity of the second design above is entirely avoidable. Thanks for your clarifications!