intel / torch-xpu-ops

Apache License 2.0

[E2E_baseline] Dynamo Benchmark E2E Accuracy test timm_models has some fail_accuracy models #109

Open chuanqi129 opened 2 months ago

chuanqi129 commented 2 months ago

🐛 Describe the bug

Several timm_models models fail the accuracy check; the failing (precision, mode, model) combinations are listed in the table below.

| precision | mode | model |
| --- | --- | --- |
| bfloat16 | inference | eca_botnext26ts_256 |
| bfloat16 | inference | levit_128 |
| bfloat16 | inference | adv_inception_v3 |
| bfloat16 | inference | nfnet_l0 |
| bfloat16 | inference | eca_halonext26ts |
| bfloat16 | inference | mixer_b16_224 |
| bfloat16 | inference | gmixer_24_224 |
| bfloat16 | inference | beit_base_patch16_224 |
| bfloat16 | inference | swin_base_patch4_window7_224 |
| bfloat16 | inference | crossvit_9_240 |
| bfloat16 | inference | ese_vovnet19b_dw |
| bfloat16 | inference | gmlp_s16_224 |
| bfloat16 | inference | resmlp_12_224 |
| bfloat16 | inference | resnest101e |
| bfloat16 | inference | mixnet_l |
| bfloat16 | inference | swsl_resnext101_32x16d |
| bfloat16 | inference | dm_nfnet_f0 |
| bfloat16 | inference | lcnet_050 |
| bfloat16 | inference | regnety_002 |
| bfloat16 | inference | tf_efficientnet_b0 |
| bfloat16 | inference | sebotnet33ts_256 |
| bfloat16 | inference | fbnetv3_b |
| bfloat16 | inference | tf_mixnet_l |
| bfloat16 | inference | mobilenetv3_large_100 |
| bfloat16 | inference | tinynet_a |
| bfloat16 | inference | ghostnet_100 |
| bfloat16 | inference | twins_pcpvt_base |
| bfloat16 | training | levit_128 |
| bfloat16 | training | eca_botnext26ts_256 |
| bfloat16 | training | gluon_inception_v3 |
| bfloat16 | training | adv_inception_v3 |
| bfloat16 | training | mixer_b16_224 |
| bfloat16 | training | nfnet_l0 |
| bfloat16 | training | res2net50_14w_8s |
| bfloat16 | training | eca_halonext26ts |
| bfloat16 | training | beit_base_patch16_224 |
| bfloat16 | training | gmixer_24_224 |
| bfloat16 | training | botnet26t_256 |
| bfloat16 | training | gmlp_s16_224 |
| bfloat16 | training | res2next50 |
| bfloat16 | training | crossvit_9_240 |
| bfloat16 | training | resmlp_12_224 |
| bfloat16 | training | swin_base_patch4_window7_224 |
| bfloat16 | training | hrnet_w18 |
| bfloat16 | training | cspdarknet53 |
| bfloat16 | training | inception_v3 |
| bfloat16 | training | resnest101e |
| bfloat16 | training | dla102 |
| bfloat16 | training | dm_nfnet_f0 |
| bfloat16 | training | regnety_002 |
| bfloat16 | training | lcnet_050 |
| bfloat16 | training | repvgg_a2 |
| bfloat16 | training | sebotnet33ts_256 |
| bfloat16 | training | selecsls42b |
| bfloat16 | training | res2net101_26w_4s |
| bfloat16 | training | mobilenetv3_large_100 |
| bfloat16 | training | gernet_l |
| bfloat16 | training | tf_mixnet_l |
| bfloat16 | training | mobilevit_s |
| bfloat16 | training | ghostnet_100 |
| bfloat16 | training | tinynet_a |
| bfloat16 | training | twins_pcpvt_base |
| bfloat16 | training | visformer_small |
| bfloat16 | training | volo_d1_224 |
| float16 | inference | eca_botnext26ts_256 |
| float16 | inference | levit_128 |
| float16 | inference | res2net50_14w_8s |
| float16 | inference | nfnet_l0 |
| float16 | inference | gmixer_24_224 |
| float16 | inference | eca_halonext26ts |
| float16 | inference | mixer_b16_224 |
| float16 | inference | beit_base_patch16_224 |
| float16 | inference | res2next50 |
| float16 | inference | swin_base_patch4_window7_224 |
| float16 | inference | ese_vovnet19b_dw |
| float16 | inference | crossvit_9_240 |
| float16 | inference | gmlp_s16_224 |
| float16 | inference | resmlp_12_224 |
| float16 | inference | resnest101e |
| float16 | inference | mixnet_l |
| float16 | inference | inception_v3 |
| float16 | inference | dm_nfnet_f0 |
| float16 | inference | lcnet_050 |
| float16 | inference | poolformer_m36 |
| float16 | inference | regnety_002 |
| float16 | inference | sebotnet33ts_256 |
| float16 | inference | tf_efficientnet_b0 |
| float16 | inference | fbnetv3_b |
| float16 | inference | mobilenetv3_large_100 |
| float16 | inference | tf_mixnet_l |
| float16 | inference | ghostnet_100 |
| float16 | inference | tinynet_a |
| float16 | inference | twins_pcpvt_base |
| float16 | training | levit_128 |
| float16 | training | eca_botnext26ts_256 |
| float16 | training | gluon_inception_v3 |
| float16 | training | adv_inception_v3 |
| float16 | training | mixer_b16_224 |
| float16 | training | nfnet_l0 |
| float16 | training | res2net50_14w_8s |
| float16 | training | eca_halonext26ts |
| float16 | training | gmixer_24_224 |
| float16 | training | beit_base_patch16_224 |
| float16 | training | gmlp_s16_224 |
| float16 | training | botnet26t_256 |
| float16 | training | res2next50 |
| float16 | training | ese_vovnet19b_dw |
| float16 | training | crossvit_9_240 |
| float16 | training | resmlp_12_224 |
| float16 | training | swin_base_patch4_window7_224 |
| float16 | training | hrnet_w18 |
| float16 | training | cspdarknet53 |
| float16 | training | resnest101e |
| float16 | training | inception_v3 |
| float16 | training | dla102 |
| float16 | training | poolformer_m36 |
| float16 | training | dm_nfnet_f0 |
| float16 | training | regnety_002 |
| float16 | training | lcnet_050 |
| float16 | training | sebotnet33ts_256 |
| float16 | training | repvgg_a2 |
| float16 | training | selecsls42b |
| float16 | training | res2net101_26w_4s |
| float16 | training | mobilenetv3_large_100 |
| float16 | training | gernet_l |
| float16 | training | tf_mixnet_l |
| float16 | training | mobilevit_s |
| float16 | training | ghostnet_100 |
| float16 | training | tinynet_a |
| float16 | training | twins_pcpvt_base |
| float16 | training | visformer_small |
| float16 | training | volo_d1_224 |
| float32 | training | levit_128 |
| float32 | training | eca_botnext26ts_256 |
| float32 | training | gluon_inception_v3 |
| float32 | training | adv_inception_v3 |
| float32 | training | res2net50_14w_8s |
| float32 | training | eca_halonext26ts |
| float32 | training | botnet26t_256 |
| float32 | training | res2next50 |
| float32 | training | ese_vovnet19b_dw |
| float32 | training | cspdarknet53 |
| float32 | training | hrnet_w18 |
| float32 | training | resnest101e |
| float32 | training | inception_v3 |
| float32 | training | dla102 |
| float32 | training | regnety_002 |
| float32 | training | repvgg_a2 |
| float32 | training | lcnet_050 |
| float32 | training | sebotnet33ts_256 |
| float32 | training | selecsls42b |
| float32 | training | res2net101_26w_4s |
| float32 | training | mobilenetv3_large_100 |
| float32 | training | gernet_l |
| float32 | training | tf_mixnet_l |
| float32 | training | mobilevit_s |
| float32 | training | ghostnet_100 |
| float32 | training | tinynet_a |
| float32 | training | visformer_small |
| float32 | training | volo_d1_224 |
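In principle each failing entry can be reproduced individually with PyTorch's dynamo benchmark runner. A hedged sketch, assuming a PyTorch source checkout of the branch under test with XPU support; the exact flags depend on that branch, and `ghostnet_100` is just one example row from the table:

```shell
# Sketch only: reproduce a single (precision, mode, model) combination.
# Run from the root of the PyTorch checkout.
python benchmarks/dynamo/timm_models.py \
    --accuracy --bfloat16 --inference \
    --backend=inductor --devices=xpu \
    --only=ghostnet_100
```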

Versions

PyTorch: `git clone -b e2e-baseline https://github.com/etaf/pytorch-inductor-xpu pytorch`
Test script: `inductor_xpu_test.sh`
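For context on what `fail_accuracy` means here: the benchmark harness compares the compiled model's outputs against an eager-mode reference under a dtype-dependent tolerance, which is why the reduced-precision rows dominate the table. A minimal sketch of that style of check (illustrative only; `truncate_to_bf16` and `is_accurate` are hypothetical helpers, not the harness's actual code):

```python
import numpy as np

def truncate_to_bf16(x: np.ndarray) -> np.ndarray:
    """Simulate bfloat16 rounding by zeroing the low 16 bits of a float32."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def is_accurate(ref: np.ndarray, res: np.ndarray, rtol: float, atol: float = 1e-5) -> bool:
    """Simplified pass/fail accuracy check in the spirit of torch.allclose."""
    return bool(np.all(np.abs(ref - res) <= atol + rtol * np.abs(ref)))

ref = np.array([1.2345, -0.6789, 3.1416], dtype=np.float32)  # eager "reference"
res = truncate_to_bf16(ref)  # stand-in for a bfloat16 kernel's output

print(is_accurate(ref, res, rtol=0.0))   # strict float32-style tolerance: fails
print(is_accurate(ref, res, rtol=1e-2))  # bf16-appropriate tolerance: passes
```

The point is that a model lands in `fail_accuracy` when its drift exceeds even the relaxed per-dtype tolerance, which usually indicates a kernel bug rather than ordinary rounding.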

riverliuintel commented 2 months ago

@etaf please take a look at whether there is a common root cause, and do the first round of checking. Yunfei will cover the detailed per-model failure triage.