mingfeima opened 2 years ago
Thanks a lot @mingfeima for this very well-put proposal. The benchmarks look promising!
Looking at the targeted operators, we typically use these in the model training stage on GPUs. Thus I assume that the main use-case for optimizing them on CPU would be for CPU inference? Would you have concrete examples where this is applicable?
As a side note, since we're talking about vectorization: I might start taking a look into making our `Resize()` / `interpolate()` transform faster (on tensors). Comparing ours with Pillow-SIMD, we're observing major improvements from vectorization. If this is something that can be of interest to you, I'm more than happy to chat more!
> As a side note, since we're talking about vectorization: I might start taking a look into making our `Resize()` / `interpolate()` transform faster (on tensors).
@NicolasHug FYI, interpolation is already vectorized for the 2d case by mingfeima: https://github.com/pytorch/pytorch/blob/bd854588fb927371c319d24d31b659731eddc3bc/aten/src/ATen/native/cpu/UpSampleKernel.cpp#L442-L602

However, we can benefit from the vectorization (according to the current implementation) only for inputs with >=4 channels (@mingfeima please correct me if I'm wrong).

IMO, the main need in resize optimization is native support for uint8 without copying data to float and back.
@NicolasHug First of all, yes, our priority is inference. And the most requested model from our customers is `MaskedRCNN` and its variants. So from this point of view, the key bottleneck operator would be the `RoiAlign` forward path.
Anyway, we would certainly like to hear more input from you guys on which other models/operators might be of interest, so as to sort out the priorities among the TODOs. Meanwhile, we would also like to contribute to the backward paths (this is more from our internal KPI pressure than business requirements).
@vfdev-5 Talking about `resize` or `interpolate`, the first factor is the memory format: usually we can only do vectorization on NHWC (NCHW can be vectorized in some specific cases, such as scale=2, but generically NCHW will use scalar logic). Secondly, as you have pointed out, only when C > `Vec::size()` will the code be vectorized, and `Vec::size()` will be 8 for float under avx2, 16 under avx512, and so on. This is because the current implementation handles the vectorization remainder with a memcpy (instead of a masked load), so it's not that efficient. Interpolation on uint8 should be done on the acc type (float32), but this doesn't mean it should be slow; we can do in-place dtype conversion and the whole process can be vectorized.
Anyway, do you have any minimal example/benchmark to reproduce `resize` performance? I can give it a try and see how to improve it.
@mingfeima thanks for your answer about resize. Maybe we can continue the discussion in another issue related to interpolation. There are a few of them, e.g. https://github.com/pytorch/vision/issues/6465 (the image is read in 3d HWC format, but once unsqueezed it is not recognized as 1HWC channels last, so resize goes through the channels first fallback, which is very slow).
As for NCHW, I agree with what you say. In our previous approach we relied on implicit compiler vectorization, which was done on recurrent ops like `out += w * src` and some others.
Anyway, here is a gist to produce a benchmark, pytorch vs PIL: https://gist.github.com/vfdev-5/7885ee41d31789cd159dcd52b2e8fc6a
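For reference, here is a rough sketch of the kind of torch-vs-PIL comparison such a benchmark makes; the sizes, mode and iteration count below are illustrative and not taken from the gist:

```python
import time

import torch
import torch.nn.functional as F
from PIL import Image

img_uint8 = torch.randint(0, 256, (1, 3, 500, 400), dtype=torch.uint8)
pil_img = Image.fromarray(img_uint8[0].permute(1, 2, 0).numpy())

def bench(fn, n=100):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# torch path: uint8 -> float -> interpolate -> uint8 (the round trip this thread wants to avoid)
t_torch = bench(lambda: F.interpolate(
    img_uint8.float(), size=(256, 256), mode="bilinear",
    align_corners=False, antialias=True).to(torch.uint8))

# PIL path: resize directly on uint8 data
t_pil = bench(lambda: pil_img.resize((256, 256), resample=Image.BILINEAR))

print(f"torch: {t_torch * 1e3:.3f} ms/iter, PIL: {t_pil * 1e3:.3f} ms/iter")
```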
We would like to optimize cases like:
@NicolasHug @vfdev-5 Oh sorry for the late response, super busy recently, just got time to take a look at this last weekend ...
I opened https://github.com/pytorch/pytorch/pull/87053 to address `mode=bilinear` for (3, H, W) on float, shall we move the discussion there?
Thanks @mingfeima, I'll take a look
I just want to note that this part has been addressed in https://github.com/pytorch/pytorch/pull/86361, so there's no need to focus on it anymore:

> `mode=nearest` for (1, H, W) uint8, where IMO there is a "bug": the implementation goes to your channels last route and it is slow, but if it went to the channels first implementation it could be faster.
Hopefully there will be support for uint8 type input and an accelerated version of it for `interpolate()`, as mentioned in https://github.com/pytorch/pytorch/pull/86361#issuecomment-1269822386 and https://github.com/pytorch/pytorch/issues/5580.
> Hopefully there will be support for uint8 type input and an accelerated version of it for `interpolate()` as mentioned in pytorch/pytorch#86361 (comment) and pytorch/pytorch#5580.
To sum up the status a little bit:

- `mode=nearest`: supports uint8, and @NicolasHug has fixed a performance bug when C < 4 with #86361 (previously it would go to the channels last kernel, and that kernel does vectorization on C, but C=1 can't be vectorized, so it was rather slow).
- `mode=bilinear/bicubic`: uint8 support will be added.
- `mode=bilinear, antialias=True`: float32 optimization on the NCHW memory format is currently WIP. `mode=bicubic` goes next, followed by uint8 optimization. (From an optimization point of view, `mode=bilinear` and `mode=bicubic` could use the same set of kernels, but uint8 will have a different kernel from float32.)

Just FYI, I started working on support for uint8, mode=bilinear, antialias=True, channels_last, shape=(1,3,H,W) in https://github.com/pytorch/pytorch/pull/87863
Hi, any update on this?
> Hi, any update on this?
NicolasHug and vfdev-5 have done a lot of work optimizing int8/uint8 image scaling/resize in torch.
🚀 The feature
This RFC targets improving the performance of operators from torchvision on CPU.
Motivation, pitch
Generally, performance improvements can be made in 3 ways:

`RoiAlign` pooling could benefit from channels last (NHWC) because: a) first of all, `RoiAlign` can be vectorized on NHWC (on NCHW, the channels first memory format, it can only use scalar logic); b) secondly, `Conv2d` can save memory format reorders between PyTorch's plain format and mkldnn's blocked formats. `BFloat16` takes half of the memory footprint of `float32`.

The plan is to cover both inference and training optimizations at the same time.
Affected Operators
The optimization scope will cover the native kernels from csrc/ops/cpu, including:
These operators will affect models such as `FasterRCNN`, `MaskedRCNN`, etc.

[Discussion Needed]: need to sort out the priorities of these kernels.
API and Behavior Change
Since all the optimizations will be done on the kernel level, no API change will be required.
Users will be able to run models in channels last as recommended in the memory_format_tutorial. To run the model in bfloat16, use explicit data type conversion or AMP.
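The original snippets are not reproduced here; below is a minimal sketch of both usages, assuming a stock torchvision classification model purely for illustration:

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

# channels last: convert both the model and the input to NHWC
model = model.to(memory_format=torch.channels_last)
x = x.to(memory_format=torch.channels_last)

# option 1: explicit bfloat16 data type conversion
# model = model.to(torch.bfloat16); x = x.to(torch.bfloat16)

# option 2: autocast (AMP) on CPU with bfloat16
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
```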
Non-Batch Mode Input
Some models will have the input in non-batch mode, e.g. CHW (N = 1); this cannot be converted to channels last in torch at the moment. `torch.nn.Conv2d` will check the memory format of `input` and `weight`: if either one of them is channels last, the convolution will use the channels last path. Therefore, for non-batch mode input, we can only convert the `model`, and channels last will still be used.

This part requires special attention and validation effort.
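A minimal sketch of the behavior described above (the module and shapes are illustrative):

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)

# A non-batch CHW tensor (3D) cannot itself be converted to channels last;
# the line below would raise an error:
# torch.randn(3, 32, 32).to(memory_format=torch.channels_last)

# Converting only the module moves its weight to channels last, and since
# conv2d takes the channels last path when either input or weight is
# channels last, the optimized path is still used:
conv = conv.to(memory_format=torch.channels_last)
y = conv(x)
```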
Parallelization on Multi Core CPUs
We propose to follow the same parallelization scheme as torch, e.g. using the wrapper `at::parallel_for`. It can be linked to OpenMP or TBB depending on the build option (by default OpenMP is used). This commit is an example of parallelizing `roi_align` on the 1st dimension of the input tensor, e.g. `n_rois`, with the help of `at::parallel_for`.
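From the user side, the effect of this parallelization can be roughly observed by varying the intra-op thread count; a hedged sketch (shapes and ROI counts are illustrative, and the measured speedup depends on whether the parallelized kernel is present in your build):

```python
import time

import torch
from torchvision.ops import roi_align

x = torch.randn(1, 256, 200, 200)
# ROIs as (batch_index, x1, y1, x2, y2)
rois = torch.cat([torch.zeros(1000, 1), torch.rand(1000, 4) * 100], dim=1)
rois[:, 3:] += rois[:, 1:3]  # make sure x2 >= x1 and y2 >= y1

roi_align(x, rois, output_size=(7, 7))  # warm-up

max_threads = torch.get_num_threads()
for threads in (1, max_threads):
    torch.set_num_threads(threads)
    start = time.perf_counter()
    for _ in range(10):
        roi_align(x, rois, output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
    print(f"{threads} thread(s): {time.perf_counter() - start:.3f} s")
```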
Vectorization on x86 CPUs
Vectorization can be done multiple ways, namely:
Auto Vectorization
Let the compiler automatically vectorize with `#pragma omp simd`; this commit adds channels last support for `roi_align` and vectorizes on the last dimension, e.g. `channels`. Note that on NCHW, this kernel can not be vectorized.
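As a user-level illustration of the NHWC vs. NCHW point (shapes are illustrative; the observed difference depends on whether the channels last kernel is available in your build):

```python
import time

import torch
from torchvision.ops import roi_align

rois = torch.tensor([[0.0, 10.0, 10.0, 150.0, 150.0]]).repeat(500, 1)

for fmt in (torch.contiguous_format, torch.channels_last):
    x = torch.randn(1, 256, 200, 200).to(memory_format=fmt)
    start = time.perf_counter()
    for _ in range(10):
        roi_align(x, rois, output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
    print(f"{fmt}: {time.perf_counter() - start:.3f} s")
```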
`BFloat16` can not be vectorized by the compiler properly, which means that if we choose this approach, `RoiAlign` won't have BFloat16 support and will be put into the fallback list of AMP.

Manual Vectorization
Vectorize the code via the `at::vec::Vectorized<>` struct, which will be compiled to different assembly depending on the arch: avx2/avx512 or neon. This approach allows `BFloat16` vectorization and cross platform support.

From a performance point of view, these two approaches would have similar results.
[Discussion Needed]: need to decide which way to go.
Experiment Results
A demo shows the performance improvement with channels last support on the model `fast_rcnn_R_50_FPN_1x` from `detectron2`:

torch: 1.13.0a0, torchvision: 0.14.0a0, detectron2: 0.6, cpu: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Breakdown
Here is the performance breakdown of NCHW (before) vs. NHWC (after):
NCHW (before)
NHWC (after)
We can see that the performance improvement primarily comes from:
- `torchvision::roi_align`: time reduced from 82.6s to 2.3s, due to parallelization and vectorization.
- `aten::conv2d`: time reduced from 88.3s to 63.1s; on channels last, mkldnn reorders on activations will be saved.

Additional
[Discussion Needed]: need to decide details of performance benchmarking, such as: `benchmark.py` from detectron2, or use torch-bench?

[Discussion Needed]: test cases: we will add new test cases in the corresponding modules from vision/test when making pull requests; what else is needed?