abacaj / code-eval

Run evaluation on LLMs using human-eval benchmark
MIT License

No GPU Found #16

Closed: qxpBlog closed this issue 3 months ago

qxpBlog commented 3 months ago

@abacaj My environment is an NVIDIA TX2. When I use the codecarbon package to get GPU information, it cannot find the GPU:

[codecarbon INFO @ 21:03:55] [setup] RAM Tracking...
[codecarbon INFO @ 21:03:55] [setup] GPU Tracking...
[codecarbon INFO @ 21:03:55] No GPU found.
[codecarbon INFO @ 21:03:55] [setup] CPU Tracking...
[codecarbon WARNING @ 21:03:55] No CPU tracking mode found. Falling back on CPU constant mode.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using `tokenizers` before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[codecarbon WARNING @ 21:03:55] We saw that you have a ARMv8 Processor rev 1 (v8l) but we don't know it. Please contact us.
[codecarbon INFO @ 21:03:55] CPU Model on constant consumption mode: ARMv8 Processor rev 1 (v8l)
[codecarbon INFO @ 21:03:55] >>> Tracker's metadata:
[codecarbon INFO @ 21:03:55]   Platform system: Linux-5.10.104-tegra-aarch64-with-glibc2.17
[codecarbon INFO @ 21:03:55]   Python version: 3.8.13
[codecarbon INFO @ 21:03:55]   CodeCarbon version: 2.3.4
[codecarbon INFO @ 21:03:55]   Available RAM : 6.329 GB
[codecarbon INFO @ 21:03:55]   CPU count: 6
[codecarbon INFO @ 21:03:55]   CPU model: ARMv8 Processor rev 1 (v8l)
[codecarbon INFO @ 21:03:55]   GPU count: None
[codecarbon INFO @ 21:03:55]   GPU model: None

But the result of `torch.cuda.is_available()` is `True`, so I want to know whether codecarbon can support the TX2 device. Looking forward to your reply.
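
For reference, codecarbon appears to detect NVIDIA GPUs through NVML (via `pynvml`), while PyTorch queries the CUDA runtime directly, and Jetson boards such as the TX2 generally do not expose NVML. A minimal sketch to confirm this on the device (assuming `torch` and `pynvml` are installed; this is a diagnostic guess, not codecarbon's exact code path):

```python
# Compare how PyTorch (CUDA runtime) and NVML (used by codecarbon) see the GPU.
import torch

print("torch.cuda.is_available():", torch.cuda.is_available())

try:
    import pynvml

    pynvml.nvmlInit()
    print("NVML device count:", pynvml.nvmlDeviceGetCount())
    pynvml.nvmlShutdown()
except Exception as e:
    # On a Jetson TX2 this is expected to fail, which would explain
    # why codecarbon logs "No GPU found." even though CUDA works.
    print("NVML not available:", e)
```

If NVML initialization fails here, the "No GPU found." message likely comes from the missing NVML support on Jetson rather than from a misconfiguration of codecarbon itself.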