sun1638650145 / Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon

This repo provides TensorFlow libraries and extended build tutorials for packages that require compilation, as well as pre-compiled wheel files.

Issue with the install of tensorflow_text for keras_nlp #14

Closed ramkumarkoppu closed 1 year ago

ramkumarkoppu commented 1 year ago

I downloaded the latest version of the package and tried to install it, but the install is not successful. It complains:

pip install tensorflow_text-2.12.1-cp39-cp39-macosx_11_0_arm64.whl
ERROR: tensorflow_text-2.12.1-cp39-cp39-macosx_11_0_arm64.whl is not a supported wheel on this platform.

Following is my current environment:

conda list | grep -i tensorflow
tensorflow-deps           2.9.0      0        apple
tensorflow-estimator      2.12.0     pypi_0   pypi
tensorflow-macos          2.12.0     pypi_0   pypi
tensorflow-metal          0.8.0      pypi_0   pypi

python --version
Python 3.10.11
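(For context: pip rejects a wheel when the tags in its filename do not match the running interpreter. A minimal sketch, standard library only, to print the CPython tag the interpreter expects:)

```python
# Print the CPython tag that wheels must carry for this interpreter.
import sys

py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(py_tag)  # "cp310" on Python 3.10.11, so a cp39 wheel is rejected
```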

sun1638650145 commented 1 year ago

This is an obvious mistake: you installed a whl file compiled for Python 3.9 (cp39) into a Python 3.10 environment.
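The mismatch is visible in the wheel filename itself, which encodes {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl. A small sketch comparing those tags against the current interpreter (filename taken from the error above):

```python
import sys

wheel = "tensorflow_text-2.12.1-cp39-cp39-macosx_11_0_arm64.whl"
# Wheel filename fields are separated by "-"; the tags themselves use
# underscores, so splitting on "-" yields exactly five fields here.
name, version, py_tag, abi_tag, plat_tag = wheel[: -len(".whl")].split("-")

current = f"cp{sys.version_info.major}{sys.version_info.minor}"
if py_tag != current:
    print(f"{wheel} targets {py_tag}, but this interpreter is {current}")
```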

ramkumarkoppu commented 1 year ago

Thank you for pointing out the mistake. Managed to do: pip install ~/Downloads/tensorflow_text-2.12.0-cp310-cp310-macosx_11_0_arm64.whl

conda list | grep tensorflow
tensorflow-addons         0.20.0     pypi_0   pypi
tensorflow-datasets       4.9.2      pypi_0   pypi
tensorflow-deps           2.9.0      0        apple
tensorflow-estimator      2.12.0     pypi_0   pypi
tensorflow-hub            0.13.0     pypi_0   pypi
tensorflow-macos          2.12.0     pypi_0   pypi
tensorflow-metadata       1.13.1     pypi_0   pypi
tensorflow-metal          0.8.0      pypi_0   pypi
tensorflow-text           2.12.0     pypi_0   pypi

but now I have a runtime error from keras_nlp with this tensorflow-text package. Do you have a recommendation for a known working keras_nlp package for Apple Silicon?

Metal device set to: Apple M2 Max

systemMemory: 64.00 GB
maxCacheSize: 24.00 GB

WARNING:tensorflow:The following Variables were used in a Lambda layer's call (tf.linalg.matmul), but are not present in its tracked objects: <tf.Variable 'token_embedding/embeddings:0' shape=(50257, 768) dtype=float32>. This is a strong indication that the Lambda layer should be rewritten as a subclassed Layer.
WARNING:absl:At this time, the v2.11+ optimizer tf.keras.optimizers.Adam runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at tf.keras.optimizers.legacy.Adam.
WARNING:absl:There is a known slowdown when using v2.11+ Keras optimizers on M1/M2 Macs. Falling back to the legacy Keras optimizer, i.e., tf.keras.optimizers.legacy.Adam.
2023-05-17 07:18:30.391709: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2023-05-17 07:18:32.098000: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_compile_on_demand_op.cc:178 : NOT_FOUND: could not find registered platform with id: 0x12a7606a0
2023-05-17 07:18:32.099959: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_compile_on_demand_op.cc:178 : NOT_FOUND: could not find registered platform with id: 0x12a7606a0
Traceback (most recent call last):
  File "/Users/ramkumarkoppu/Downloads/GPT2.py", line 41, in <module>
    output = gpt2_lm.generate("My trip to Yosemite was", max_length=200)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 497, in generate
    outputs = [generate(x) for x in inputs]
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 497, in <listcomp>
    outputs = [generate(x) for x in inputs]
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 481, in generate
    return generate_function(x, end_token_id=end_token_id)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'transformer_layer_0/cached_multi_head_attention/XlaDynamicUpdateSlice' defined at (most recent call last):
  File "/Users/ramkumarkoppu/Downloads/GPT2.py", line 41, in <module>
    output = gpt2_lm.generate("My trip to Yosemite was", max_length=200)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 497, in generate
    outputs = [generate(x) for x in inputs]
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 497, in <listcomp>
    outputs = [generate(x) for x in inputs]
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 481, in generate
    return generate_function(x, end_token_id=end_token_id)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 326, in generate_step
    hidden_states, cache = self._build_cache(token_ids)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 267, in _build_cache
    _, hidden_states, cache = self.call_with_cache(token_ids, cache, 0)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 240, in call_with_cache
    for i in range(self.backbone.num_layers):
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/models/gpt2/gpt2_causal_lm.py", line 242, in call_with_cache
    x, next_cache = self.backbone.get_layer(f"transformer_layer_{i}")(
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
    return fn(*args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/engine/base_layer.py", line 1145, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 96, in error_handler
    return fn(*args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/layers/transformer_decoder.py", line 298, in call
    x, cache = self._self_attention_layer(
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
    return fn(*args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/engine/base_layer.py", line 1145, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 96, in error_handler
    return fn(*args, **kwargs)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/layers/cached_multi_head_attention.py", line 80, in call
    if cache is not None:
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/keras_nlp/src/layers/cached_multi_head_attention.py", line 83, in call
    key = dynamic_update_slice(key_cache, key, start)
  File "/Users/ramkumarkoppu/miniconda3/envs/tf_apple/lib/python3.10/site-packages/tensorflow/compiler/tf2xla/ops/gen_xla_ops.py", line 1484, in xla_dynamic_update_slice
    _, _, _op, _outputs = _op_def_library._apply_op_helper(
Node: 'transformer_layer_0/cached_multi_head_attention/XlaDynamicUpdateSlice'
2 root error(s) found.
  (0) NOT_FOUND: could not find registered platform with id: 0x12a7606a0
     [[{{node transformer_layer_0/cached_multi_head_attention/XlaDynamicUpdateSlice}}]]
     [[while/body/_1/while/transformer_layer_8/cached_multi_head_attention/Mul/_764]]
  (1) NOT_FOUND: could not find registered platform with id: 0x12a7606a0
     [[{{node transformer_layer_0/cached_multi_head_attention/XlaDynamicUpdateSlice}}]]
0 successful operations. 0 derived errors ignored. [Op:__inference_generate_step_12971]

sun1638650145 commented 1 year ago

First of all, it should be clarified that this repository only provides whl builds for Apple Silicon. However, I can still offer some troubleshooting ideas. You can uninstall tensorflow-metal to determine whether it is a GPU issue (tensorflow-macos can run on its own, on the CPU). If it is a GPU problem, file an issue on Apple's developer website; if it is not, file an issue on the keras-nlp repository.
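A sketch of that triage in commands (package names as suggested above; GPT2.py is the script from the traceback in this thread):

```shell
# Remove the Metal GPU plugin, so tensorflow-macos falls back to CPU.
pip uninstall -y tensorflow-metal
# Rerun the failing script; if the NOT_FOUND XLA error disappears,
# the GPU plugin is implicated.
python GPT2.py
# Reinstall the plugin afterwards if the GPU turns out not to be the culprit.
pip install tensorflow-metal
```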

ramkumarkoppu commented 1 year ago

Thank you, I appreciate your help :-)