Closed: narendasan closed this issue 3 years ago.
Not sure what exactly `ScalarOpt_dim` corresponds to. There are different types of norm operations in PyTorch: https://pytorch.org/docs/stable/linalg.html#torch.linalg.norm

The default L2 norm signature that I see on my end is

```
aten::frobenius_norm.dim(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor)
```
Also, TensorRT doesn't have a native normalization layer. The Normalize plugin in TensorRT is open-sourced. One solution is to integrate the TRT plugin implementation. Another solution is to design a plugin and call PyTorch kernels within it.
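A third option might be to decompose the norm into ops TensorRT already supports natively (elementwise pow, reduce-sum, unary sqrt), avoiding a plugin entirely. A minimal PyTorch sketch of the identity such a converter could rely on (my own illustration, not an existing converter):

```python
import torch

def decomposed_norm(x: torch.Tensor, p: float, dim: int, keepdim: bool = False) -> torch.Tensor:
    # sum(abs(x)**p)**(1/p), the vector-norm definition from the torch.norm docs;
    # each step maps onto a TensorRT elementwise/reduce/unary layer
    return torch.sum(torch.abs(x) ** p, dim=dim, keepdim=keepdim) ** (1.0 / p)

x = torch.randn(2, 3, 4)
print(torch.allclose(decomposed_norm(x, 2.0, dim=1), torch.norm(x, p=2, dim=1)))  # True
```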
In torch/functional.py:1221 of the PyTorch source code:
```python
def norm(input, p="fro", dim=None, keepdim=False, out=None, dtype=None):  # noqa: 749
    r"""Returns the matrix norm or vector norm of a given tensor.

    .. warning::

        torch.norm is deprecated and may be removed in a future PyTorch release.
        Use :func:`torch.linalg.norm` instead, but note that :func:`torch.linalg.norm`
        has a different signature and slightly different behavior that is
        more consistent with NumPy's numpy.linalg.norm.

    Args:
        input (Tensor): the input tensor
        p (int, float, inf, -inf, 'fro', 'nuc', optional): the order of norm. Default: ``'fro'``
            The following norms can be calculated:

            ======  ==============  ==========================
            ord     matrix norm     vector norm
            ======  ==============  ==========================
            'fro'   Frobenius norm  --
            'nuc'   nuclear norm    --
            Number  --              sum(abs(x)**ord)**(1./ord)
            ======  ==============  ==========================

            The vector norm can be calculated across any number of dimensions.
            The corresponding dimensions of :attr:`input` are flattened into
            one dimension, and the norm is calculated on the flattened
            dimension.

            Frobenius norm produces the same result as ``p=2`` in all cases
            except when :attr:`dim` is a list of three or more dims, in which
            case Frobenius norm throws an error.

            Nuclear norm can only be calculated across exactly two dimensions.

        dim (int, tuple of ints, list of ints, optional):
            Specifies which dimension or dimensions of :attr:`input` to
            calculate the norm across. If :attr:`dim` is ``None``, the norm will
            be calculated across all dimensions of :attr:`input`. If the norm
            type indicated by :attr:`p` does not support the specified number of
            dimensions, an error will occur.
        keepdim (bool, optional): whether the output tensors have :attr:`dim`
            retained or not. Ignored if :attr:`dim` = ``None`` and
            :attr:`out` = ``None``. Default: ``False``
        out (Tensor, optional): the output tensor. Ignored if
            :attr:`dim` = ``None`` and :attr:`out` = ``None``.
        dtype (:class:`torch.dtype`, optional): the desired data type of
            returned tensor. If specified, the input tensor is casted to
            :attr:`dtype` while performing the operation. Default: None.

    .. note::
        Even though ``p='fro'`` supports any number of dimensions, the true
        mathematical definition of Frobenius norm only applies to tensors with
        exactly two dimensions. :func:`torch.linalg.norm` with ``ord='fro'`` aligns
        with the mathematical definition, since it can only be applied across
        exactly two dimensions.

    Example::

        >>> import torch
        >>> a = torch.arange(9, dtype=torch.float) - 4
        >>> b = a.reshape((3, 3))
        >>> torch.norm(a)
        tensor(7.7460)
        >>> torch.norm(b)
        tensor(7.7460)
        >>> torch.norm(a, float('inf'))
        tensor(4.)
        >>> torch.norm(b, float('inf'))
        tensor(4.)
        >>> c = torch.tensor([[ 1, 2, 3], [-1, 1, 4]], dtype=torch.float)
        >>> torch.norm(c, dim=0)
        tensor([1.4142, 2.2361, 5.0000])
        >>> torch.norm(c, dim=1)
        tensor([3.7417, 4.2426])
        >>> torch.norm(c, p=1, dim=1)
        tensor([6., 6.])
        >>> d = torch.arange(8, dtype=torch.float).reshape(2, 2, 2)
        >>> torch.norm(d, dim=(1, 2))
        tensor([ 3.7417, 11.2250])
        >>> torch.norm(d[0, :, :]), torch.norm(d[1, :, :])
        (tensor(3.7417), tensor(11.2250))
    """

    if not torch.jit.is_scripting():
        if type(input) is not Tensor and has_torch_function((input,)):
            return handle_torch_function(
                norm, (input,), input, p=p, dim=dim, keepdim=keepdim, out=out, dtype=dtype)

    ndim = input.dim()

    # catch default case
    if dim is None and out is None and dtype is None and p is not None:
        if isinstance(p, str):
            if p == "fro":
                return _VF.frobenius_norm(input, dim=(), keepdim=keepdim)  # type: ignore
        if not isinstance(p, str):
            _dim = [i for i in range(ndim)]  # noqa: C416 TODO: rewrite as list(range(m))
            return _VF.norm(input, p, dim=_dim, keepdim=keepdim)  # type: ignore
```
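To see which overload a given call actually dispatches to, tracing records the concrete aten op. A quick sketch (my own snippet; overload names can vary across PyTorch versions):

```python
import torch

def f(x):
    return torch.norm(x, p=2, dim=1)

traced = torch.jit.trace(f, torch.randn(3, 4))
# With an explicit numeric p and a dim, the recorded op should be aten::norm
# (the ScalarOpt_dim overload); with the default p='fro' it would be
# aten::frobenius_norm.dim instead.
print(traced.graph)
```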
There are three types of norm according to the documentation: 'fro', 'nuc', and Number. Their implementations are listed below (in aten/src/ATen/native/LinearAlgebra.cpp:1303, 1351, and 1431):
```cpp
Tensor &frobenius_norm_out(
    Tensor& result,
    const Tensor& self,
    IntArrayRef dim,
    bool keepdim)

Tensor& nuclear_norm_out(Tensor& result, const Tensor& self, IntArrayRef dim, bool keepdim)

// Performs matrix norm
static Tensor& _linalg_norm_matrix_out(Tensor& result, const Tensor& self, optional<Scalar> opt_ord,
                                       IntArrayRef dim, bool keepdim, optional<ScalarType> opt_dtype)

static Tensor& _linalg_norm_vector_out(Tensor& result, const Tensor& self, optional<Scalar> opt_ord,
                                       std::vector<int64_t> dim, bool keepdim, optional<ScalarType> opt_dtype)

static Tensor& linalg_norm_out_impl(Tensor& result, const Tensor& self, optional<Scalar> opt_num_ord, optional<std::string> opt_str_ord, optional<IntArrayRef> opt_dim, bool keepdim, optional<ScalarType> opt_dtype) {
  // Callers must give the ord argument as either a number, a string, or neither.
  // Since the user-facing API has no direct control over how this function is called, this is an internal assert.
  TORCH_INTERNAL_ASSERT(!(opt_num_ord.has_value() && opt_str_ord.has_value()));
  if (opt_dtype.has_value()) {
    auto dtype = opt_dtype.value();
    TORCH_CHECK(dtype == result.scalar_type(), "provided dtype must match dtype of result, but got",
        "dtype = ", dtype, ", out.dtype = ", result.scalar_type());
  }
  int64_t ndim = self.dim();
  if (opt_str_ord.has_value()) {
    // 'ord' is string
    auto str_ord = opt_str_ord.value();
    check_str_ord_valid(str_ord, opt_dim, ndim);
    Tensor self_ = opt_dtype.has_value() ? self.to(opt_dtype.value()) : self;
    if (str_ord == "fro") {
      at::frobenius_norm_out(result, self_, opt_dim.value_or(IntArrayRef({0, 1})), keepdim);
    } else if (str_ord == "nuc") {
      if (opt_dim.has_value()) {
        at::nuclear_norm_out(result, self_, opt_dim.value(), keepdim);
      } else {
        at::nuclear_norm_out(result, self_, keepdim);
      }
    }
  } else {
    // 'ord' is int or None
    std::vector<int64_t> dim_ = opt_dim.has_value() ? opt_dim.value().vec() : make_dim_list(ndim);
    if (!opt_num_ord.has_value() || dim_.size() == 1) {
      _linalg_norm_vector_out(result, self, opt_num_ord, dim_, keepdim, opt_dtype);
    } else if (dim_.size() == 2) {
      _linalg_norm_matrix_out(result, self, opt_num_ord.value(), dim_, keepdim, opt_dtype);
    } else {
      TORCH_CHECK(false, "'dim' must specify 1 or 2 dimensions when order is numerical and input is "
          "not 1-D or 2-D");
    }
  }
  return result;
}

// Numerical or None norms
Tensor linalg_norm(const Tensor& self, optional<Scalar> opt_ord, optional<IntArrayRef> opt_dim, bool keepdim, optional<ScalarType> opt_dtype) {
  auto options = TensorOptions().dtype(opt_dtype.has_value() ? opt_dtype.value() : self.scalar_type()).device(self.device());
  Tensor result = at::empty({0}, options);
  return at::native::linalg_norm_out(result, self, opt_ord, opt_dim, keepdim, opt_dtype);
}

// Frobenius and nuclear norms
Tensor linalg_norm(const Tensor& self, std::string ord, optional<IntArrayRef> opt_dim, bool keepdim, optional<ScalarType> opt_dtype) {
  auto options = TensorOptions().dtype(opt_dtype.has_value() ? opt_dtype.value() : self.scalar_type()).device(self.device());
  Tensor result = at::empty({0}, options);
  return at::native::linalg_norm_out(result, self, ord, opt_dim, keepdim, opt_dtype);
}

// Numerical or None norms
Tensor& linalg_norm_out(Tensor& result, const Tensor& self, optional<Scalar> opt_ord, optional<IntArrayRef> opt_dim, bool keepdim, optional<ScalarType> opt_dtype) {
  return linalg_norm_out_impl(result, self, opt_ord, c10::nullopt, opt_dim, keepdim, opt_dtype);
}
```
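The dispatch rules above (vector path when `dim` has one entry or `ord` is None, matrix path for two entries, error otherwise) can be checked from Python. A small sketch (my own snippet; exact error wording may differ by version):

```python
import torch

x = torch.randn(2, 3, 4)

# one dim -> _linalg_norm_vector_out path
print(torch.linalg.norm(x, ord=2, dim=1).shape)       # torch.Size([2, 4])

# two dims -> _linalg_norm_matrix_out path
print(torch.linalg.norm(x, ord=2, dim=(1, 2)).shape)  # torch.Size([2])

# three dims with a numeric ord -> the TORCH_CHECK failure above
try:
    torch.linalg.norm(x, ord=2, dim=(0, 1, 2))
except RuntimeError as e:
    print(e)
```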
Calling PyTorch kernels within a plugin seems like the better choice?
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
aten::norm

```
aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> (Tensor)
```

https://pytorch.org/docs/stable/generated/torch.norm.html?highlight=norm#torch.norm
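For reference, all registered overloads of `aten::norm`, including `ScalarOpt_dim`, can be listed from Python via an internal JIT API (unstable, debugging aid only):

```python
import torch

# _jit_get_schemas_for_operator is internal and may change between releases
for schema in torch._C._jit_get_schemas_for_operator("aten::norm"):
    print(schema)
# one of the printed schemas should be:
# aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> Tensor
```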