pytorch / ao

PyTorch native quantization and sparsity for training and inference

Remove lm_eval warning #1347

Closed msaroufim closed 3 days ago

msaroufim commented 3 days ago

I was running torchtitan with fp8 just now and this warning popped up:

6, multiple_of=1024, ffn_dim_multiplier=1.3, norm_eps=1e-05, rope_theta=500000, max_seq_len=8192, depth_init=True, norm_type='rmsnorm')
[rank0]:2024-11-25 17:29:38,310 - root - INFO - Skipping import of cpp extensions
[rank0]:2024-11-25 17:29:38,391 - root - INFO - lm_eval is not installed, GPTQ may not be usable
[rank0]:2024-11-25 17:29:38,393 - root - INFO - Float8 training active
[rank0]:2024-11-25 17:29:38,414 - root - INFO - Swapped to Float8Linear layers with enable_fsdp_float8_all_gather=False
[rank0]:2024-11-25 17:29:38,415 - root - INFO - Model llama3 8B size: 8,030,261,248 total parameters
[rank0]:2024-11-25 17:29:38,416 - root - INFO - Applied selective activation checkpointing to the model
[rank0]:2024-11-25 17:29:38,480 - root - INFO - Applied FSDP to the model

There is no reason for this warning to appear: if someone tries to use GPTQ without the necessary requirements, it should simply fail at that point. lm_eval is also unused in this specific file.
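A minimal sketch of the idea, with illustrative names rather than the actual torchao code: defer the lm_eval import to the GPTQ code path, so a missing dependency raises only when GPTQ is actually used instead of warning at import time.

```python
# Illustrative sketch only; the function name and error message are
# hypothetical, not the actual torchao implementation.

def _require_lm_eval():
    """Import lm_eval lazily, failing loudly only when GPTQ needs it."""
    try:
        import lm_eval
    except ImportError as e:
        raise ImportError(
            "GPTQ calibration requires the optional dependency 'lm_eval'; "
            "install it with `pip install lm_eval`."
        ) from e
    return lm_eval
```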

Also, practically speaking, most of our custom kernels are inference-only, and warning people that they were not imported in a training codebase seems off, so this PR also downgrades that message from info to debug so we don't see it.
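A hedged sketch of the logging downgrade, with an illustrative extension-module name: the message is emitted with logger.debug instead of logger.info, so it only shows up when debug logging is enabled.

```python
import logging

logger = logging.getLogger(__name__)

try:
    # Placeholder import; the real cpp-extension module name differs.
    import torchao_cpp_extensions  # noqa: F401
except ImportError:
    # Downgraded from info to debug: most custom kernels are inference-only,
    # so a training run should not surface this message by default.
    logger.debug("Skipping import of cpp extensions")
```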

pytorch-bot[bot] commented 3 days ago

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1347

Note: Links to docs will display an error until the docs builds have been completed.

:white_check_mark: No Failures

As of commit 4dd60d4f5bc9aba6a86c63bf5be16481df0eeeea with merge base 478d15b6b7d83aaadfafd07bda18d66399e1c2e1: :green_heart: Looks good so far! There are no failures yet. :green_heart:

This comment was automatically generated by Dr. CI and updates every 15 minutes.