intel / ai-reference-models

Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs
Apache License 2.0

How can I export an optimized wide and deep model? #67

Open lizhen2017 opened 3 years ago

lizhen2017 commented 3 years ago

In https://software.intel.com/content/www/us/en/develop/articles/accelerate-int8-inference-performance-for-recommender-systems-with-intel-deep-learning.html, the graph optimization is described as follows:

> Categorical columns are optimized by removing redundant and unnecessary OPs. The left portion of Figure 2 contains the unoptimized portion of the graph. These are optimized as described below:
>
> - The Expand Dimension, GatherNd, NotEqual, and Where OPs that are used to get a non-empty input string of the required dimension are removed as they are redundant for the current dataset.
> - Error checking and handling OPs (NotEqual, GreaterEqual, SparseFillEmptyRows, Unique, etc.) and unique value calculation and reconstruction OPs (Unique, SparseSegmentSum/Mean, StridedSlice, Pack, Tile, etc.) are removed as they are not necessary for the current dataset.

Would you please share this part of the graph optimization method or code?
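For readers wondering what "removing redundant OPs" looks like mechanically: the idea is to delete the error-checking/reshaping nodes from the graph and rewire their consumers directly to the removed nodes' inputs. Below is a minimal conceptual sketch of that rewiring, using a toy dict-based graph representation. This is not Intel's actual script (see the answer below for the real code); the node names, toy graph, and the assumption that each removed op can be treated as a single-input pass-through are illustrative only.

```python
# Conceptual sketch: bypass "pass-through" ops (e.g. NotEqual, Where,
# GatherNd) by rewiring each consumer to the removed op's first input.
# The graph representation and node names here are hypothetical.

def bypass_ops(nodes, ops_to_remove):
    """Drop nodes whose op type is in ops_to_remove and rewire their
    consumers, assuming each removed op acts as a pass-through of its
    first input for the current dataset (the article's argument)."""
    # Map each removed node's name to the input that replaces it.
    replacement = {}
    for n in nodes:
        if n["op"] in ops_to_remove:
            replacement[n["name"]] = n["inputs"][0]

    # Follow chains of removed nodes (e.g. NotEqual -> Where both gone).
    def resolve(name):
        while name in replacement:
            name = replacement[name]
        return name

    kept = []
    for n in nodes:
        if n["op"] in ops_to_remove:
            continue
        kept.append({
            "name": n["name"],
            "op": n["op"],
            "inputs": [resolve(i) for i in n["inputs"]],
        })
    return kept

# Toy graph mirroring the pattern described in the article:
# input -> NotEqual -> Where -> GatherNd -> table lookup
graph = [
    {"name": "input", "op": "Placeholder", "inputs": []},
    {"name": "not_equal", "op": "NotEqual", "inputs": ["input"]},
    {"name": "where", "op": "Where", "inputs": ["not_equal"]},
    {"name": "gather", "op": "GatherNd", "inputs": ["where"]},
    {"name": "lookup", "op": "HashTableLookup", "inputs": ["gather"]},
]

optimized = bypass_ops(graph, {"NotEqual", "Where", "GatherNd"})
# "lookup" now reads directly from "input".
```

The real optimization in the repository works on a serialized TensorFlow GraphDef and handles multi-input ops and per-feature-column patterns, but the rewiring principle is the same.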

snehak11 commented 3 years ago

The feature column optimization script is used for graph-level optimization. We have different methods to optimize different kinds of feature columns. For a categorical column with hash bucket, you can use the corresponding method def optimize_categorical_embedding_with_hash_bucket(). Here is the link to the code you can look into: https://github.com/IntelAI/models/blob/master/models/recommendation/tensorflow/wide_deep_large_ds/dataset/featurecolumn_graph_optimization.py#L69

sramakintel commented 5 months ago

@lizhen2017: did the workaround fix the issue you were facing?