Open daniil-lyakhov opened 1 month ago
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
.take
Thanks for being interested in this issue. It looks like this ticket is already assigned to a contributor. Please communicate with the assigned contributor to confirm the status of the issue.
@daniil-lyakhov Hi, Could you please push your branch before merging it into the develop branch? This will allow me to develop based on your branch.
Additionally, I noticed the file tests/torch/fx/test_sanity.py. Should I implement the unit tests in tests/torch/fx/test_model_transformer.py instead of tests/torch_fx/test_model_transformer.py?
Thank you!
Hi @awayzjj,
Thank you for your contribution! PR is on review right now and should be merged soon, I'll keep you updated. Yes, please use tests/torch/fx directory, I forgot to update the issue.
Thanks!
Hi @awayzjj, the base PR #2764 was merged, please rebase your changes
@daniil-lyakhov Hi, I attempted to implement the test_leaf_module_insertion_transformation
as follows, but encountered two problems:
import torch
import torch.nn.functional as F
from torch import nn
from torch._export import capture_pre_autograd_graph

from nncf.common.graph.transformations.commands import TargetType
from nncf.common.graph.transformations.layout import TransformationLayout


def test_leaf_module_insertion_transformation():
    class InsertionPointTestModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 1, 1, 1)
            self.linear_wts = nn.Parameter(torch.FloatTensor(size=(100, 100)))
            self.conv2 = nn.Conv2d(1, 1, 1, 1)
            self.relu = nn.ReLU()

        def forward(self, input_):
            x = self.conv1(input_)
            x = x.flatten()
            x = nn.functional.linear(x, self.linear_wts)
            x = x.reshape((1, 1, 10, 10))
            x = self.conv2(x)
            x = self.relu(x)
            return x

    model = InsertionPointTestModel()
    with torch.no_grad():
        ex_input = torch.ones([1, 1, 10, 10])
        model.eval()
        exported_model = capture_pre_autograd_graph(model, args=(ex_input,))
        print(exported_model.print_readable())

    from nncf.experimental.torch.fx.model_transformer import FXModelTransformer
    from nncf.torch.graph.transformations.commands import PTTargetPoint
    from nncf.experimental.torch.fx.transformations import leaf_module_insertion_transformation_builder
    from nncf.experimental.torch.fx.commands import FXApplyTransformationCommand

    model_transformer = FXModelTransformer(exported_model)
    conv1_node_name = "InsertionPointTestModel/NNCFConv2d[conv1]/conv2d_0"
    target_point = PTTargetPoint(
        target_type=TargetType.OPERATION_WITH_WEIGHTS, target_node_name=conv1_node_name, input_port_id=1
    )
    transformation = leaf_module_insertion_transformation_builder(exported_model, [target_point])
    command = FXApplyTransformationCommand(transformation)
    transformation_layout = TransformationLayout()
    transformation_layout.register(command)
    model_transformer.transform(transformation_layout)
The tests fail with the following exception:
I have to place the following imports after exported_model = capture_pre_autograd_graph(model, args=(ex_input,))
from nncf.experimental.torch.fx.model_transformer import FXModelTransformer
from nncf.torch.graph.transformations.commands import PTTargetPoint
from nncf.experimental.torch.fx.transformations import leaf_module_insertion_transformation_builder
from nncf.experimental.torch.fx.commands import FXApplyTransformationCommand
or it raises an error:
Could you give me some suggestions? Thank you very much!
@awayzjj You defined the variable conv1_node_name incorrectly. If you define it as conv1_node_name = "conv2d" instead, it should work, since that is a valid node name in this graph. The node names in the graph are different for the TorchFX backend.
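For context, a quick way to inspect what node names an FX graph actually uses is to iterate over `graph.nodes`. This is a minimal sketch with plain `torch.fx.symbolic_trace` (whereas `capture_pre_autograd_graph` decomposes module calls into functional ops, giving names such as `conv2d`); the point in both cases is that FX node names are short identifiers, not `Module/SubModule[attr]/op_N` paths as in the PT backend:

```python
import torch
from torch import nn


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 1, 1)

    def forward(self, x):
        return self.conv1(x)


# FX node names are short, op/attribute-derived identifiers.
traced = torch.fx.symbolic_trace(TinyModel())
print([node.name for node in traced.graph.nodes])  # e.g. ['x', 'conv1', 'output']
```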
@rk119 Thank you so much! Your suggestion does work!
Greetings🐱! As a part of https://github.com/openvinotoolkit/nncf/issues/2766 (TorchFX PTQ backend support), we are glad to present the following issue.
Context
The task is to cover FXModelTransformer with simple unit tests, as is done for the other backends: https://github.com/openvinotoolkit/nncf/blob/develop/tests/onnx/test_model_transformer.py
What needs to be done?
Unit tests in the file tests/torch/fx/test_model_transformer.py
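As a rough illustration of the shape such a unit test could take, here is a sketch using plain torch.fx only: build a small model, apply a graph transformation (inserting a relu after a conv, mimicking a node-insertion transform), and assert on the transformed model's behavior. The real tests would drive FXModelTransformer with nncf transformation commands instead; every name below is illustrative.

```python
import torch
from torch import nn


class TwoConvModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 1, 1)
        self.conv2 = nn.Conv2d(1, 1, 1)

    def forward(self, x):
        return self.conv2(self.conv1(x))


def test_relu_after_conv1():
    m = TwoConvModel()
    traced = torch.fx.symbolic_trace(m)
    # Insert a relu node right after the conv1 call and rewire its users.
    for node in list(traced.graph.nodes):
        if node.op == "call_module" and node.target == "conv1":
            with traced.graph.inserting_after(node):
                relu = traced.graph.call_function(torch.relu, (node,))
            node.replace_all_uses_with(relu)
            relu.args = (node,)  # undo the rewiring of relu's own input
    traced.graph.lint()
    traced.recompile()
    # The transformed graph should match conv1 -> relu -> conv2.
    x = torch.ones(1, 1, 4, 4)
    expected = m.conv2(torch.relu(m.conv1(x)))
    assert torch.allclose(traced(x), expected)
```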
Example Pull Requests
No response
Resources
Contact points
@daniil-lyakhov
Ticket
141640