llvm / llvm-project

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
http://llvm.org

[mlir][bufferization] OneShotBufferize broken when using `defaultMemorySpaceFn` #91518

Open christopherbate opened 3 months ago

christopherbate commented 3 months ago

A change from February allows callers of OneShotBufferize to derive the default memory space from the TensorType of a value via the `defaultMemorySpaceFn` option. See the PR here: https://github.com/llvm/llvm-project/pull/78484. According to that PR, the intended use is to implement a mapping from a tensor's encoding attribute to a memref memory space.
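For concreteness, here is a minimal IR-level sketch (my own, not taken from the PR) of what such a mapping could look like, assuming a callback that interprets an integer tensor encoding as the memref memory space:

```mlir
// Hypothetical input: the integer encoding attribute on the tensor type
// (here `1 : i64`) is interpreted by a user-provided defaultMemorySpaceFn
// as the target memref memory space.
func.func @encoded(%arg0: tensor<128xf32, 1 : i64>) -> tensor<128xf32, 1 : i64> {
  return %arg0 : tensor<128xf32, 1 : i64>
}

// Expected result of one-shot-bufferize under that mapping: the argument
// bufferizes to a memref in memory space 1, i.e. memref<128xf32, 1>.
```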

This is a nice feature, but it is quite broken outside of limited use cases. In particular, it does not interact correctly with `bufferization.alloc_tensor`, `bufferization.materialize_in_destination`, `bufferization.to_tensor`, and `bufferization.to_memref`, and it produces unexpected or outright wrong results.

I will post a range of example IR where using this feature produces unexpected or nonsensical results when running `one-shot-bufferize`. The crux of the issue is that there is limited support for dealing with situations like the following:

  1. bufferization.materialize_in_destination where the encodings on the source and destination tensors map to different memory spaces
  2. bufferization.alloc_tensor expects the copy and result types to match, which is at odds with using the tensor type encoding to set the memory space. It can also fail if the tensor type encoding and the op's memory_space attribute differ.
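To illustrate case 1, a hypothetical example (mine, not from the PR's tests): the source and destination carry different encodings, so an encoding-based `defaultMemorySpaceFn` implies a copy between two memory spaces:

```mlir
// Source lives in (hypothetical) memory space 1, destination in space 2.
// Bufferizing this op should produce a copy across memory spaces; this is
// the kind of situation that was not handled correctly. (The op's verifier
// may also reject the mismatched types outright.)
func.func @cross_space(%src: tensor<128xf32, 1 : i64>,
                       %dest: tensor<128xf32, 2 : i64>) -> tensor<128xf32, 2 : i64> {
  %0 = bufferization.materialize_in_destination %src in %dest
      : (tensor<128xf32, 1 : i64>, tensor<128xf32, 2 : i64>) -> tensor<128xf32, 2 : i64>
  return %0 : tensor<128xf32, 2 : i64>
}
```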

I took a shot at fixing this and believe I have corrected all of the issues satisfactorily (https://github.com/llvm/llvm-project/pull/91524).

christopherbate commented 3 months ago

Examples (introduced as tests in the PR): https://github.com/llvm/llvm-project/blob/80fe1bbd1d49b0af792fd38d04782ed1601e0222/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-encodings.mlir

The PR adds a one-shot-bufferize pass option that sets `defaultMemorySpaceFn` to use the tensor encoding. You need that option in order to run the examples and reproduce the existing issues at HEAD.
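If I read the PR correctly, the tests are driven with a RUN line along these lines (the exact option name is an assumption on my part; check the linked test file for the authoritative spelling):

```mlir
// RUN: mlir-opt %s -one-shot-bufferize="use-encoding-for-memory-space" \
// RUN:   -split-input-file | FileCheck %s
```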