PAIR-code / lit

The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.
https://pair-code.github.io/lit
Apache License 2.0

Defective pip dependencies: numba, numpy and protobuf #1148

Open MhhhxX opened 1 year ago

MhhhxX commented 1 year ago

I installed LIT as described in the "Install from source" section of the docs. Every command succeeded, including building the frontend with yarn.
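For context, the commands I ran were essentially the ones from the docs (reproduced from memory, so the exact invocations may differ slightly; the environment name and Python version match the tracebacks below):

```sh
git clone https://github.com/PAIR-code/lit.git && cd lit
conda create --name lit-nlp python=3.9
conda activate lit-nlp
pip install -r requirements.txt
# Build the frontend
(cd lit_nlp && yarn && yarn build)
```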

Then I tried to run a demo from the examples module within the conda environment with this command: python -m lit_nlp.examples.penguin_demo --port=4321 --quickstart. I received the following error:

Traceback (most recent call last):
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/user/lit/lit_nlp/examples/penguin_demo.py", line 16, in <module>
    from lit_nlp import dev_server
  File "/home/user/lit/lit_nlp/dev_server.py", line 21, in <module>
    from lit_nlp import app as lit_app
  File "/home/user/lit/lit_nlp/app.py", line 33, in <module>
    from lit_nlp.components import core
  File "/home/user/lit/lit_nlp/components/core.py", line 35, in <module>
    from lit_nlp.components import shap_explainer
  File "/home/user/lit/lit_nlp/components/shap_explainer.py", line 28, in <module>
    import shap
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/shap/__init__.py", line 12, in <module>
    from ._explanation import Explanation, Cohorts
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/shap/_explanation.py", line 12, in <module>
    from .utils._general import OpChain
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/shap/utils/__init__.py", line 1, in <module>
    from ._clustering import hclust_ordering, partition_tree, partition_tree_shuffle, delta_minimization_order, hclust
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/shap/utils/_clustering.py", line 4, in <module>
    from numba import jit
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/numba/__init__.py", line 43, in <module>
    from numba.np.ufunc import (vectorize, guvectorize, threading_layer,
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/numba/np/ufunc/__init__.py", line 3, in <module>
    from numba.np.ufunc.decorators import Vectorize, GUVectorize, vectorize, guvectorize
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/numba/np/ufunc/decorators.py", line 3, in <module>
    from numba.np.ufunc import _internal
SystemError: initialization of _internal failed without raising an exception

Do you have any ideas on how to fix this?

MhhhxX commented 1 year ago

I could fix the first problem by downgrading the numpy package.
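The pin I used was along these lines (the exact version bound is an assumption on my part; the point is that numba releases before 0.57 cannot initialize their compiled extensions against numpy 1.24+):

```sh
# Downgrade numpy so the installed numba can load its _internal extension;
# the <1.24 bound is an assumption based on numba's supported numpy range.
pip install "numpy<1.24"
```

With numpy downgraded, I then ran into another problem, this time with the protobuf package: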

Traceback (most recent call last):
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/user/lit/lit_nlp/examples/toxicity_demo.py", line 19, in <module>
    from lit_nlp.examples.datasets import classification
  File "/home/user/lit/lit_nlp/examples/datasets/classification.py", line 8, in <module>
    import tensorflow_datasets as tfds
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/__init__.py", line 43, in <module>
    import tensorflow_datasets.core.logging as _tfds_logging
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/__init__.py", line 22, in <module>
    from tensorflow_datasets.core import community
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/community/__init__.py", line 18, in <module>
    from tensorflow_datasets.core.community.huggingface_wrapper import mock_builtin_to_use_gfile
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/community/huggingface_wrapper.py", line 31, in <module>
    from tensorflow_datasets.core import dataset_builder
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/dataset_builder.py", line 34, in <module>
    from tensorflow_datasets.core import dataset_info
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/dataset_info.py", line 50, in <module>
    from tensorflow_datasets.core import splits as splits_lib
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/splits.py", line 34, in <module>
    from tensorflow_datasets.core import proto as proto_lib
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/proto/__init__.py", line 18, in <module>
    from tensorflow_datasets.core.proto import dataset_info_generated_pb2 as dataset_info_pb2  # pylint: disable=line-too-long
  File "/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/tensorflow_datasets/core/proto/dataset_info_generated_pb2.py", line 22, in <module>
    from google.protobuf.internal import builder as _builder
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/home/user/.conda/envs/lit-nlp/lib/python3.9/site-packages/google/protobuf/internal/__init__.py)

Downgrading or upgrading protobuf didn't help, as either way it breaks other dependencies.

MhhhxX commented 1 year ago

I was also able to solve the second problem by applying the steps suggested in that Stack Overflow post.
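In short, the workaround is to borrow builder.py from a newer protobuf release while keeping the pinned version installed. A sketch of what I did (the <3.20 bound is an assumption; use whatever version the rest of the environment requires):

```sh
# Temporarily install a recent protobuf that ships google/protobuf/internal/builder.py
pip install --upgrade protobuf
# Save builder.py somewhere outside site-packages
cp "$(python -c 'import google.protobuf.internal as i, os; print(os.path.dirname(i.__file__))')/builder.py" /tmp/builder.py
# Reinstall the protobuf version the other dependencies require
pip install "protobuf<3.20"
# Copy builder.py back into the downgraded package
cp /tmp/builder.py "$(python -c 'import google.protobuf.internal as i, os; print(os.path.dirname(i.__file__))')/"
```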

RyanMullins commented 8 months ago

@MhhhxX Are you still running into this issue after our (somewhat) recent release? That release did a lot to pin the versions of our Python dependencies to address breakages like these.