Closed: tuobay closed this issue 4 months ago
To resolve the AttributeError: torch._inductor.config.fx_graph_cache does not exist error, follow these steps:

1. Check PyTorch version compatibility: Ensure that the PyTorch version you are using is compatible with the infinity_emb library. The error might be due to an incompatibility between the PyTorch version and the library.
2. Update the infinity_emb library: Make sure you are using the latest version of the infinity_emb library and update it if necessary.
3. Modify test_infinity_server.py: Update the test cases in /libs/infinity_emb/tests/unit_test/test_infinity_server.py so that they are compatible with the current PyTorch version, for example:
import subprocess
import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
def test_cli_v2_help():
    log = subprocess.run(["infinity_emb", "v2", "--help"])
    assert log.returncode == 0
4. Check for deprecated attributes: Verify whether torch._inductor.config.fx_graph_cache has been deprecated or moved in the latest PyTorch version. If so, update the code to use the new attribute or method.
5. Debugging: Add debugging statements to identify where the attribute is being accessed and confirm that it exists in the installed PyTorch version; see the probe sketched after these steps.

If the issue persists, consider reaching out to the maintainers of the infinity_emb library for further assistance.
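For the debugging step above, here is a minimal standalone probe (a sketch, not part of infinity_emb; the filename is only illustrative) that reports the installed torch version and whether the fx_graph_cache flag exists on that build:

# probe_fx_graph_cache.py - illustrative standalone script
import torch

print("torch version:", torch.__version__)
try:
    # torch._inductor is a private module, so import it explicitly instead of
    # assuming that "import torch" exposes it
    from torch._inductor import config as inductor_config
except ImportError as exc:
    print("torch._inductor.config could not be imported:", exc)
else:
    if hasattr(inductor_config, "fx_graph_cache"):
        print("fx_graph_cache is defined:", inductor_config.fx_graph_cache)
    else:
        # older builds (e.g. 2.1.x) lack this flag, which matches the
        # AttributeError reported in this issue
        print("fx_graph_cache is NOT defined on this torch build")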
Try torch>=2.2 with infinity_emb==0.0.52.
-> pip should install that combination automatically; if it did not, you may have skipped a step in the examples.
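To double-check that pip actually resolved those pins in the environment that launches the server, a quick sketch (the distribution name "infinity_emb" is an assumption; adjust it if your install uses a different name):

import importlib.metadata as md
import torch

print("torch:", torch.__version__)  # expect a 2.2.x or newer build here
try:
    # assumes the package is installed under the distribution name "infinity_emb"
    print("infinity_emb:", md.version("infinity_emb"))
except md.PackageNotFoundError:
    print("infinity_emb is not installed in this environment")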
System Info
OS: Linux
MODEL: no model
HARDWARE:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.82.01    Driver Version: 470.82.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:25:00.0 Off |                    0 |
| N/A   59C    P0   421W / 400W | 80922MiB / 81251MiB  |     93%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM...  On   | 00000000:2B:00.0 Off |                    0 |
| N/A   73C    P0   403W / 400W | 80938MiB / 81251MiB  |    100%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-SXM...  On   | 00000000:65:00.0 Off |                    0 |
| N/A   74C    P0   404W / 400W | 80938MiB / 81251MiB  |     94%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-SXM...  On   | 00000000:6A:00.0 Off |                    0 |
| N/A   61C    P0   388W / 400W | 80242MiB / 81251MiB  |     94%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
SOFTWARE: PyTorch 2.1.2+cu118, infinity_emb 0.0.52
Information
Tasks
Reproduction
Just type
infinity_emb v2 --help
to get the error.

Expected behavior
Just show the help command output.