ivy-llc / ivy

Convert Machine Learning Code Between Frameworks
https://ivy.dev

[Bug]: Which framework's behavior should `rfftn` follow? Currently it causes dtype mismatch errors when used in frontend functions for every framework except NumPy. #21100

Open akshatvishu opened 1 year ago

akshatvishu commented 1 year ago

Bug Explanation

At present the rfftn function returns complex128 regardless of the input dtype. This matches the behavior of numpy.fft.rfftn, but all other frameworks return a complex dtype that depends on the input dtype.

Current Implementation:

My question is: which framework's behavior should be followed? At present there will be dtype mismatches for every framework except NumPy. If we instead keep the NumPy behavior, the frontend functions for the other frameworks will start to throw errors, because their ground truth returns both complex64 and complex128 while we return only complex128.

Also, TensorFlow is not included because rfftn is not available natively there, and its current compositional implementation in the backend seems to be erroneous.

Steps to Reproduce Bug

NumPy

NumPy has no problem with any of the dtypes and returns complex128 as the output dtype.

import numpy as np

dtypes = ['uint16','bool','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']

for dtype in dtypes:
    try:
        x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5], dtype=dtype)
        y1 = np.fft.rfftn(x1)
        print(f"rfftn applied successfully on dtype: {dtype}. Output NP dtype: {y1.dtype}")
    except Exception as e_np:
        print(f"Error when applying rfftn on dtype: {dtype} for numpy. Error message: {e_np}")

"""
rfftn applied successfully on dtype: uint16. Output NP dtype: complex128
rfftn applied successfully on dtype: bool. Output NP dtype: complex128
rfftn applied successfully on dtype: float16. Output NP dtype: complex128
rfftn applied successfully on dtype: float32. Output NP dtype: complex128
rfftn applied successfully on dtype: float64. Output NP dtype: complex128
rfftn applied successfully on dtype: int16. Output NP dtype: complex128
rfftn applied successfully on dtype: int32. Output NP dtype: complex128
rfftn applied successfully on dtype: int64. Output NP dtype: complex128
rfftn applied successfully on dtype: complex64. Output NP dtype: complex128
rfftn applied successfully on dtype: complex128. Output NP dtype: complex128
"""

Jax

import numpy as np
import jax.numpy as jnp

dtypes = ['uint16','bool','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']
for dtype in dtypes:
    try:
        x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5], dtype=dtype)
        y1 = jnp.fft.rfftn(x1)
        print(f"rfftn applied successfully on dtype: {dtype}. Output jax dtype: {y1.dtype}")
    except Exception as e_jnp:
        print(f"Error when applying rfftn on dtype: {dtype} for jax. Error message: {e_jnp}")

"""
rfftn applied successfully on dtype: uint16. Output jax dtype: complex64
rfftn applied successfully on dtype: bool. Output jax dtype: complex64
rfftn applied successfully on dtype: float16. Output jax dtype: complex64
rfftn applied successfully on dtype: float32. Output jax dtype: complex64
rfftn applied successfully on dtype: float64. Output jax dtype: complex128
rfftn applied successfully on dtype: int16. Output jax dtype: complex64
rfftn applied successfully on dtype: int32. Output jax dtype: complex64
rfftn applied successfully on dtype: int64. Output jax dtype: complex128
Error when applying rfftn on dtype: complex64 for jax. Error message: only real valued inputs supported for rfft
Error when applying rfftn on dtype: complex128 for jax. Error message: only real valued inputs supported for rfft
"""

Paddle


import paddle

dtypes = ['uint16','bool','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']

for dtype in dtypes:
    try:
        x = paddle.to_tensor([0.1, 0.2, 0.3, 0.4, 0.5], dtype=dtype)
        y = paddle.fft.rfftn(x)
        print("----------------------------")
        print(f"rfftn applied successfully on dtype: {dtype}. Output dtype: {y.dtype}")
        print("----------------------------")
    except ValueError as e_paddle:
        print(f"Error in paddle with dtype: {dtype}. Error message: {e_paddle}")
    except Exception as e:
        print(f"Error when applying rfftn on dtype: {dtype} for other library. Error message: {e}")

"""
Error when applying rfftn on dtype: uint16 for other library. Error message: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), bfloat16) of kernel `fft_r2c` is not registered. Selected wrong DataType `bfloat16`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)

Error when applying rfftn on dtype: bool for other library. Error message: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), bool) of kernel `fft_r2c` is not registered. Selected wrong DataType `bool`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)

Error when applying rfftn on dtype: float16 for other library. Error message: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), float16) of kernel `fft_r2c` is not registered. Selected wrong DataType `float16`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)

----------------------------
rfftn applied successfully on dtype: float32. Output dtype: paddle.complex64
----------------------------
----------------------------
rfftn applied successfully on dtype: float64. Output dtype: paddle.complex128
----------------------------
----------------------------
rfftn applied successfully on dtype: int16. Output dtype: paddle.complex64
----------------------------
----------------------------
rfftn applied successfully on dtype: int32. Output dtype: paddle.complex64
----------------------------
----------------------------
rfftn applied successfully on dtype: int64. Output dtype: paddle.complex64
----------------------------
Error when applying rfftn on dtype: complex64 for other library. Error message: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), complex64) of kernel `fft_r2c` is not registered. Selected wrong DataType `complex64`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)

Error when applying rfftn on dtype: complex128 for other library. Error message: (NotFound) The kernel with key (CPU, Undefined(AnyLayout), complex128) of kernel `fft_r2c` is not registered. Selected wrong DataType `complex128`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)
"""

Torch

import torch

dtype_map = {
    'uint8': torch.uint8,
    'float16': torch.float16,
    'float32': torch.float32,
    'float64': torch.float64,
    'int16': torch.int16,
    'int32': torch.int32,
    'int64': torch.int64,
    'bool': torch.bool,
    'complex64': torch.complex64,
    'complex128': torch.complex128
}

dtypes = ['uint8','bool','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']
for dtype in dtypes:
    try:
        x2 = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5], dtype=dtype_map[dtype])
        y3 = torch.fft.rfftn(x2)
        print(f"rfftn applied successfully on dtype: {dtype}. Output dtype: {y3.dtype}")
    except Exception as e_torch:
        print(f"Error when applying rfftn on dtype: {dtype} for PyTorch. Error message: {e_torch}")

"""
rfftn applied successfully on dtype: uint8. Output dtype: torch.complex64
rfftn applied successfully on dtype: bool. Output dtype: torch.complex64
Error when applying rfftn on dtype: float16 for PyTorch. Error message: Unsupported dtype Half
rfftn applied successfully on dtype: float32. Output dtype: torch.complex64
rfftn applied successfully on dtype: float64. Output dtype: torch.complex128
rfftn applied successfully on dtype: int16. Output dtype: torch.complex64
rfftn applied successfully on dtype: int32. Output dtype: torch.complex64
rfftn applied successfully on dtype: int64. Output dtype: torch.complex64
Error when applying rfftn on dtype: complex64 for PyTorch. Error message: rfftn expects a real-valued input tensor, but got ComplexFloat
Error when applying rfftn on dtype: complex128 for PyTorch. Error message: rfftn expects a real-valued input tensor, but got ComplexDouble
"""

Environment

linux , vscode + docker

Ivy Version

v1.1.9

Backend

Device

CPU

rajveer43 commented 1 year ago

@akshatvishu I worked on this function, and the PR was merged only after all the local tests were passing. It should return complex128 to match the ground-truth framework for all other backends. If that is an issue, we can explicitly typecast the output for the other backends, but that is not good practice.

The error you pointed out in the TF backend is already fixed, but the fix has not been merged yet as it is still in review.

akshatvishu commented 1 year ago

@akshatvishu I worked on this function, and the PR was merged only after all the local tests were passing. It should return complex128 to match the ground-truth framework for all other backends. If that is an issue, we can explicitly typecast the output for the other backends, but that is not good practice.

The error you pointed out in the TF backend is already fixed, but the fix has not been merged yet as it is still in review.

That will be great! But this issue will still remain whenever we call the backend in its current state from our frontend implementations (other than NumPy).

Also, a thing to note is that for your IVY FUNCTIONAL API PR -> link, the ground_truth_backend was NumPy; hence all the test cases passed, since NumPy natively returns complex128 for all valid dtypes when we call numpy.fft.rfftn, as I've already shown above.

The problem will occur when we try to implement the frontend function for other frameworks. Let me showcase it with an example:

Suppose we are implementing rfftn for the Paddle frontend. The ground truth for it will be paddle.fft.rfftn.

Now, a thing to note is that all backends currently return complex128 as the output_dtype.

import ivy
import ivy.functional.frontends.paddle as ivy_paddle
import paddle

ivy.set_backend("numpy")
dtype = ivy.float32
x = ivy.array([1,2], dtype=dtype)
y = ivy_paddle.fft.rfftn(x)
print(f"NP Input_dtype:{dtype}, output_dtype:{y.dtype} ")
ivy.set_backend("jax")
dtype = ivy.float32
x = ivy.array([1,2], dtype=dtype)
y = ivy_paddle.fft.rfftn(x)
print(f"jaxBackend, Input_dtype:{dtype}, output_dtype:{y.dtype} ")
ivy.set_backend("torch")
dtype = ivy.float32
x = ivy.array([1,2], dtype=dtype)
y = ivy_paddle.fft.rfftn(x)
print(f"TorchBackend, Input_dtype:{dtype}, output_dtype:{y.dtype} ")
ivy.set_backend("paddle")
dtype = ivy.float32
x = ivy.array([1,2], dtype=dtype)
y = ivy_paddle.fft.rfftn(x)
print(f"PaddleBackend, Input_dtype:{dtype}, output_dtype:{y.dtype} ")
print("--------------------NativeFunc------------------------")
dtype = paddle.float32
x = paddle.to_tensor([1,2], dtype=dtype)
y = paddle.fft.rfftn(x)
print(f"PaddleBackend, Input_dtype:{dtype}, output_dtype:{y.dtype} ")

"""
NP Input_dtype:float32, output_dtype:complex128 
[2023-07-31 16:58:15,081] [ WARNING] xla_bridge.py:636 - No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
jaxBackend, Input_dtype:float32, output_dtype:complex128 
TorchBackend, Input_dtype:float32, output_dtype:complex128 
PaddleBackend, Input_dtype:float32, output_dtype:complex128 
--------------------NativeFunc------------------------
PaddleBackend, Input_dtype:paddle.float32, output_dtype:paddle.complex64 
"""

I hope you can now see what I am talking about; even if I change the behavior of rfftn from its present state to one that makes it pass for the Paddle frontend, it will then fail for the NumPy backend instead, hence the reason I raised this bug report.

Additionally, here is a live PR that is getting affected by this : https://github.com/unifyai/ivy/pull/20895/files

ZiadAmerr commented 1 year ago

Hey @akshatvishu, can you please show the output comparison between the groundtruth frameworks and ivy? So for example:

--- NumPy Backend ---
in: uint16 --- out: complex128 --- gt: <<whatever dtype numpy returns>>
...

--- JAX Backend ---
...

We can dive more into this; I can see some dtypes being promoted and others not. If you can show this output, we can deduce the pattern and mimic it.

akshatvishu commented 1 year ago

Hey @akshatvishu, can you please show the output comparison between the groundtruth frameworks and ivy? So for example:

--- NumPy Backend ---
in: uint16 --- out: complex128 --- gt: <<whatever dtype numpy returns>>
...

--- JAX Backend ---
...

We can dive more into this, I can see some dtypes being promoted and others not. If you can show this output we can deduce the pattern and mimic it.

Definitely!!

Glossary

Jax:

import ivy
import jax
import jax.numpy as jnp
import ivy.functional.backends.jax as j_b

j_map = {
    'uint8': jnp.uint8,
    'uint16': jnp.uint16,
    'bfloat16': jnp.bfloat16,
    'float16': jnp.float16,
    'float32': jnp.float32,
    'float64': jnp.float64,
    'int16': jnp.int16,
    'int32': jnp.int32,
    'int64': jnp.int64,
    'bool': jnp.bool_,
    'complex64': jnp.complex64,
    'complex128': jnp.complex128
}
dtypes_b =['uint8','uint16','bool','bfloat16','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']
print(f"{'Input dtype':<15}{'|':<3}{'Native Output dtype':<20}{'|':<3}{'Ivy Backend dtype':<20}{'|':<3}{'Error Message':<20}")
print('-'*80) 
for dtype in dtypes_b:
    try:
        a = jnp.array([1,2], dtype=j_map[dtype])
        b = jnp.fft.rfftn(a)
        b_ = j_b.rfftn(a)
        print(f"{dtype:<15}{'|':<3}{str(b.dtype):<20}{'|':<3}{str(b_.dtype):<20}{'|':<3}{'None':<20}")
    except Exception as e:
        print(f"{dtype:<15}{'|':<3}{'Error':<20}{'|':<3}{'Error':<20}{'|':<3}{str(e):<20}")

"Input dtype    |  Native Output dtype |  Ivy Backend dtype   |  Error Message       
--------------------------------------------------------------------------------
uint8          |  complex64           |  complex128          |  None                
uint16         |  complex64           |  complex128          |  None                
bool           |  complex64           |  complex128          |  None                
bfloat16       |  Error               |  Error               |  data type <class 'ml_dtypes.bfloat16'> not inexact
float16        |  complex64           |  complex128          |  None                
float32        |  complex64           |  complex128          |  None                
float64        |  complex128          |  complex128          |  None                
int16          |  complex64           |  complex128          |  None                
int32          |  complex64           |  complex128          |  None                
int64          |  complex128          |  complex128          |  None                
complex64      |  Error               |  Error               |  only real valued inputs supported for rfft
complex128     |  Error               |  Error               |  only real valued inputs supported for rfft

"""

NumPy

import ivy
import numpy as np
import ivy.functional.backends.numpy as np_b

dtype_map = {
    'uint8':np.uint8,
    'uint16':np.uint16,
    'float16': np.float16,
    'float32': np.float32,
    'float64': np.float64,
    'int16' : np.int16,
    'int32': np.int32,
    'int64': np.int64,
    'bool': np.bool_,
    'complex64' : np.complex64,
    'complex128': np.complex128
}

dtypes_b =['uint8','uint16','bool','bfloat16','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']
print(f"{'Input dtype':<15}{'|':<3}{'Native Output dtype':<20}{'|':<3}{'Ivy Backend dtype':<20}{'|':<3}{'Error Message':<20}")
print('-'*80)
for dtype in dtypes_b:
    try:
        a = np.array([1,2], dtype=dtype_map[dtype])
        b = np.fft.rfftn(a)
        b_ = np_b.rfftn(a)
        print(f"{dtype:<15}{'|':<3}{str(b.dtype):<20}{'|':<3}{str(b_.dtype):<20}{'|':<3}{'None':<20}")
    except Exception as e:
        print(f"{dtype:<15}{'|':<3}{'Error':<20}{'|':<3}{'Error':<20}{'|':<3}{str(e):<20}")

"""
Input dtype    |  Native Output dtype |  Ivy Backend dtype   |  Error Message       
--------------------------------------------------------------------------------
uint8          |  complex128          |  complex128          |  None                
uint16         |  complex128          |  complex128          |  None                
bool           |  complex128          |  complex128          |  None                
bfloat16       |  Error               |  Error               |  'bfloat16'          
float16        |  complex128          |  complex128          |  None                
float32        |  complex128          |  complex128          |  None                
float64        |  complex128          |  complex128          |  None                
int16          |  complex128          |  complex128          |  None                
int32          |  complex128          |  complex128          |  None                
int64          |  complex128          |  complex128          |  None                
complex64      |  complex128          |  complex128          |  None                
complex128     |  complex128          |  complex128          |  None  
"""

Paddle

import ivy
import paddle
import ivy.functional.backends.paddle as pd_b

p_map = {
    'uint8':paddle.uint8,
    'bfloat16':paddle.bfloat16,
    'float16': paddle.float16,
    'float32': paddle.float32,
    'float64': paddle.float64,
    'int16' : paddle.int16,
    'int32': paddle.int32,
    'int64': paddle.int64,
    'bool': paddle.bool,
    'complex64' : paddle.complex64,
    'complex128': paddle.complex128
}

dtypes_b =['uint8','uint16','bool','bfloat16','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']

print(f"{'Input dtype':<15}{'|':<3}{'Native Output dtype':<20}{'|':<3}{'Ivy Backend dtype':<20}{'|':<3}{'Error Message':<20}")
print('-'*80) 
for dtype in dtypes_b:
    try:
        ivy.set_backend("paddle")
        a = paddle.to_tensor([1,2], dtype=p_map[dtype])
        b = paddle.fft.rfftn(a)
        b_ = pd_b.rfftn(a)
        print(f"{dtype:<15}{'|':<3}{str(b.dtype):<20}{'|':<3}{str(b_.dtype):<20}{'|':<3}{'None':<20}")
    except Exception:
        print(f"{dtype:<15}{'|':<3}{'Error':<20}{'|':<3}{'Error':<20}{'|':<3}{'Not working':<20}")

"""
Paddle natively returns a very long exception message, so I replaced it with `Not working` for clarity.
Example of a Paddle exception:
DataType `complex64`. Paddle support following DataTypes: float64, float32.
  [Hint: Expected kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU != true, but received kernel_iter == iter->second.end() && kernel_key.backend() == Backend::CPU:1 == true:1.] (at /paddle/paddle/phi/core/kernel_factory.cc:199)
"""

"""
Input dtype    |  Native Output dtype |  Ivy Backend dtype   |  Error Message       
--------------------------------------------------------------------------------
uint8          |  paddle.complex64    |  paddle.complex128   |  None                
uint16         |  Error               |  Error               |  Not working         
bool           |  Error               |  Error               |  Not working         
bfloat16       |  Error               |  Error               |  Not working         
float16        |  Error               |  Error               |  Not working         
float32        |  paddle.complex64    |  paddle.complex128   |  None                
float64        |  paddle.complex128   |  paddle.complex128   |  None                
int16          |  paddle.complex64    |  paddle.complex128   |  None                
int32          |  paddle.complex64    |  paddle.complex128   |  None                
int64          |  paddle.complex64    |  paddle.complex128   |  None                
complex64      |  Error               |  Error               |  Not working         
complex128     |  Error               |  Error               |  Not working       
"""

Torch

import ivy
import torch
import ivy.functional.backends.torch as t_b

t_map = {
    'uint8':torch.uint8,
    'bfloat16':torch.bfloat16,
    'float16': torch.float16,
    'float32': torch.float32,
    'float64': torch.float64,
    'int16' : torch.int16,
    'int32': torch.int32,
    'int64': torch.int64,
    'bool': torch.bool,
    'complex64' : torch.complex64,
    'complex128': torch.complex128
}

dtypes_b =['uint8','uint16','bool','bfloat16','float16','float32', 'float64', 'int16','int32', 'int64', 'complex64', 'complex128']

print(f"{'Input dtype':<15}{'|':<3}{'Native Output dtype':<20}{'|':<3}{'Ivy Backend dtype':<20}{'|':<3}{'Error Message':<20}")
print('-'*80) 
for dtype in dtypes_b:
    try:
        ivy.set_backend("torch")
        a = torch.tensor([1,2], dtype=t_map[dtype])
        b = torch.fft.rfftn(a)
        b_ = t_b.rfftn(a)
        print(f"{dtype:<15}{'|':<3}{str(b.dtype):<20}{'|':<3}{str(b_.dtype):<20}{'|':<3}{'None':<20}")
    except Exception as e:
        print(f"{dtype:<15}{'|':<3}{'Error':<20}{'|':<3}{'Error':<20}{'|':<3}{str(e):<20}")

"""
Input dtype    |  Native Output dtype |  Ivy Backend dtype   |  Error Message       
--------------------------------------------------------------------------------
uint8          |  torch.complex64     |  torch.complex128    |  None                
uint16         |  Error               |  Error               |  'uint16'            
bool           |  torch.complex64     |  torch.complex128    |  None                
bfloat16       |  Error               |  Error               |  Unsupported dtype BFloat16
float16        |  Error               |  Error               |  Unsupported dtype Half
float32        |  torch.complex64     |  torch.complex128    |  None                
float64        |  torch.complex128    |  torch.complex128    |  None                
int16          |  torch.complex64     |  torch.complex128    |  None                
int32          |  torch.complex64     |  torch.complex128    |  None                
int64          |  torch.complex64     |  torch.complex128    |  None                
complex64      |  Error               |  Error               |  rfftn expects a real-valued input tensor, but got ComplexFloat
complex128     |  Error               |  Error               |  rfftn expects a real-valued input tensor, but got ComplexDouble

"""

TensorFlow

Implementation still under work!

rajveer43 commented 1 year ago


Oh, I can see. Then I think the best solution would be to add a new kwarg to the rfftn function in Ivy that allows the user to specify the output dtype. This would give users the flexibility to choose the output dtype they want, and it would also preserve compatibility with the NumPy backend.

EXAMPLE

import ivy

def my_function():
  x = ivy.array([1, 2], dtype=ivy.float32)
  # Specify the output dtype to be complex64.
  y = ivy.fft.rfftn(x, output_dtype=ivy.complex64)
  return y

result = my_function()
print(result.dtype)
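
On the backend side, a rough sketch of how such a kwarg could be honored, shown here for torch only; note that `output_dtype` is the proposed kwarg and does not exist in Ivy today:

import torch

# Hypothetical sketch only: `output_dtype` is the proposed kwarg, not existing Ivy code.
def rfftn(x, s=None, axes=None, norm=None, *, output_dtype=None):
    ret = torch.fft.rfftn(x, s=s, dim=axes, norm=norm)
    if output_dtype is not None:
        # An explicit request from the caller overrides the backend's native promotion.
        ret = ret.to(output_dtype)
    return ret

The NumPy frontend could then pass output_dtype=complex128, while the Paddle/Torch/JAX frontends could pass (or omit) whatever their native promotion expects.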

What do you think?

ZiadAmerr commented 1 year ago

Oh, I can see. Then I think the best solution would be to add a new kwarg to the rfftn function in Ivy that allows the user to specify the output dtype. This would give users the flexibility to choose the output dtype they want, and it would also preserve compatibility with the NumPy backend.

What do you think?

I think dtype promotion would work best, as it should be the default behavior of the function; if the developer wants a specific output dtype, they can just cast it later on. From the outputs that @akshatvishu showed, I can see that it's either complex128 or complex64: complex128 happens in the other backends when the input dtype is float64 (except for JAX, which outputs complex128 when the input is either int64 or float64); otherwise, the output is always complex64. Let me know what you think!
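
To make that pattern concrete, here is a minimal sketch (a hypothetical helper, not existing Ivy code) of the promotion rule, derived only from the tables earlier in this thread:

# Hypothetical helper; it only encodes the promotion pattern observed in the tables above.
def expected_rfftn_dtype(backend: str, input_dtype: str) -> str:
    if backend == "numpy":
        return "complex128"  # NumPy always returns complex128
    if input_dtype == "float64":
        return "complex128"  # torch, paddle and jax promote float64 inputs to complex128
    if backend == "jax" and input_dtype == "int64":
        return "complex128"  # JAX additionally promotes int64 inputs to complex128
    return "complex64"       # every other supported input dtype maps to complex64

The backends could then return their native complex dtype, and the tests could check against a rule like this instead of hard-casting everything to complex128.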

akshatvishu commented 1 year ago

Oh, I can see. Then I think the best solution would be to add a new kwarg to the rfftn function in Ivy that allows the user to specify the output dtype. This would give users the flexibility to choose the output dtype they want, and it would also preserve compatibility with the NumPy backend. What do you think?

I think dtype promotion would work best, as it should be the default behavior of the function; if the developer wants a specific output dtype, they can just cast it later on. From the outputs that @akshatvishu showed, I can see that it's either complex128 or complex64: complex128 happens in the other backends when the input dtype is float64 (except for JAX, which outputs complex128 when the input is either int64 or float64); otherwise, the output is always complex64. Let me know what you think!

Can you explain with a minimal example how we'll handle this with dtype promotion?

Do you mean we remove the current behavior of casting every backend result to complex128 and let each backend return its native output_dtype? Then, while testing, we handle this dtype mismatch per backend, e.g. if the current backend is torch, we check the input_dtype and, based on it, manually cast the return dtype to match torch behavior, and do the same for every backend?

akshatvishu commented 1 year ago
import ivy

def my_function():
  x = ivy.array([1, 2], dtype=ivy.float32)
  # Specify the output dtype to be complex64.
  y = ivy.fft.rfftn(x, output_dtype=ivy.complex64)
  return y

result = my_function()
print(result.dtype)

What do you think?

Sorry, I am not able to understand how this will help us with the current problem at hand. The thing is, each backend has its own way of handling the output_dtype based on the valid dtypes it accepts, so we have to make our testing and backend code robust enough to handle this change.

For example, if we set the output_dtype to complex128 in one backend to match NumPy behavior, it will fail for the torch backend, etc. And as @ZiadAmerr mentioned, this is the same as using astype("YourDesiredDtype"), which the user can do on their own.

rajveer43 commented 1 year ago

@akshatvishu @ZiadAmerr I could not find any solution to this. Have you guys found any?