fastmachinelearning / hls4ml-tutorial

Tutorial notebooks for hls4ml
http://fastmachinelearning.org/hls4ml-tutorial/

Pytorch converter doesn't work. #50

Open areeb-agha opened 1 year ago

areeb-agha commented 1 year ago

I trained a VGG16 model on the CIFAR100 dataset in PyTorch. When I run:

import hls4ml
import plotting

config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer')
print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_pytorch_model(
    model, hls_config=config, output_dir='model_3/hls4ml_prj', part='xcu250-figd2104-2L-e'
)

I get this error on the last line: TypeError: cannot unpack non-iterable NoneType object

In contrast, the pre-trained Keras VGG16 model runs through hls4ml smoothly, without any error. The cause of the error appears to be the config generated by config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer'). Printing config gives {'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency'}}, which contains no information about the layers. In the Keras case, config = hls4ml.utils.config_from_keras_model(model, granularity='layer') produces the following output:


Interpreting Model
Topology:
Layer name: input_1, layer type: InputLayer, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 3]
Layer name: block1_conv1, layer type: Conv2D, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 64]
Layer name: block1_conv2, layer type: Conv2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 224, 224, 64]
Layer name: block1_pool, layer type: MaxPooling2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 112, 112, 64]
Layer name: block2_conv1, layer type: Conv2D, input shapes: [[None, 112, 112, 64]], output shape: [None, 112, 112, 128]
Layer name: block2_conv2, layer type: Conv2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 112, 112, 128]
Layer name: block2_pool, layer type: MaxPooling2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 56, 56, 128]
Layer name: block3_conv1, layer type: Conv2D, input shapes: [[None, 56, 56, 128]], output shape: [None, 56, 56, 256]
Layer name: block3_conv2, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_conv3, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_pool, layer type: MaxPooling2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 28, 28, 256]
Layer name: block4_conv1, layer type: Conv2D, input shapes: [[None, 28, 28, 256]], output shape: [None, 28, 28, 512]
Layer name: block4_conv2, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_conv3, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_pool, layer type: MaxPooling2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv1, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv2, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv3, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_pool, layer type: MaxPooling2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 7, 7, 512]
Layer name: flatten, layer type: Reshape, input shapes: [[None, 7, 7, 512]], output shape: [None, 25088]
Layer name: fc1, layer type: Dense, input shapes: [[None, 25088]], output shape: [None, 4096]
Layer name: fc2, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 4096]
Layer name: predictions, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 1000]
{'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency', 'BramFactor': 1000000000, 'TraceOutput': False}}

Please resolve this issue.
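For reference, a likely culprit: unlike Keras models, PyTorch modules carry no input-shape information, so the hls4ml PyTorch converter has to be told the shape explicitly. Below is a minimal sketch of a conversion with an explicit shape, assuming an hls4ml version whose PyTorch converter accepts an input_shape argument and whose per-layer granularity is spelled 'name' (both are assumptions, not confirmed in this thread); CIFAR100 inputs are 3x32x32, channels-first:

import hls4ml

# ASSUMPTION: the installed hls4ml accepts an explicit input shape for
# PyTorch models; (None, 3, 32, 32) is the channels-first CIFAR100 shape
# with a batch placeholder.
input_shape = (None, 3, 32, 32)

# ASSUMPTION: granularity='name' is the spelling that emits per-layer
# config entries ('layer' may silently fall back to model-level only).
config = hls4ml.utils.config_from_pytorch_model(model, granularity='name')

hls_model = hls4ml.converters.convert_from_pytorch_model(
    model,                       # the trained PyTorch VGG16 from above
    input_shape,                 # explicit shape, unlike the Keras converter
    hls_config=config,
    output_dir='model_3/hls4ml_prj',
    part='xcu250-figd2104-2L-e',
)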

zyt1024 commented 1 year ago

Hello, I also encountered this problem. Have you solved this issue?

areeb-agha commented 1 year ago

No, I am still waiting for a reply. Their PyTorch converter seems to have a bug. I temporarily switched to Keras, which works fine.

poulamiM25 commented 10 months ago

> No, I am still waiting for a reply. Their PyTorch converter seems to have a bug. I temporarily switched to Keras, which works fine.

Hi, did you use Vitis HLS or Vivado HLS? Please reply.

areeb-agha commented 10 months ago

I used Vitis HLS.

poulamiM25 commented 10 months ago

Can you please explain how you ran the code on Vitis HLS? The GitHub repo is not working for Vitis HLS.

areeb-agha commented 10 months ago

Are you using Pytorch or Keras?

poulamiM25 commented 10 months ago

I am using Keras only.
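For anyone else trying to target Vitis HLS: recent hls4ml releases select the HLS tool through a backend argument at conversion time. A minimal sketch for a Keras model, assuming an hls4ml release that includes the Vitis backend (roughly v0.8 and later); the output_dir name here is illustrative, not from this thread:

import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# 'Vivado' (Vivado HLS) is the historical default backend;
# 'Vitis' targets Vitis HLS instead.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend='Vitis',
    output_dir='model_keras/hls4ml_prj',
    part='xcu250-figd2104-2L-e',
)

hls_model.compile()             # C simulation of the generated HLS code
# hls_model.build(csim=False)   # full synthesis; needs vitis_hls on PATH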