Following the README at examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/mx_quant/Readme, I run:

python run_clm_no_trainer.py --model ./Qwen2-1.5B-Instruct --quantize --accuracy --tasks lambada_openai --w_dtype fp4 --woq

But it fails with this error:
2025-02-12 13:21:16 [WARNING][auto_accelerator.py:418] Auto detect accelerator: CPU_Accelerator.
2025-02-12 13:21:16 [INFO][run_clm_no_trainer.py:63] Preparation started.
2025-02-12 13:21:16 [INFO][quantize.py:160] Start to prepare model with mx_quant.
2025-02-12 13:21:16 [INFO][algorithm_entry.py:745] Quantize model with the mx quant algorithm.
2025-02-12 13:21:16 [INFO][run_clm_no_trainer.py:63] Preparation end.
2025-02-12 13:21:16 [INFO][run_clm_no_trainer.py:65] Conversion started.
2025-02-12 13:21:16 [INFO][quantize.py:226] Start to convert model with mx_quant.
2025-02-12 13:21:16 [INFO][algorithm_entry.py:745] Quantize model with the mx quant algorithm.
2025-02-12 13:22:11 [INFO][run_clm_no_trainer.py:65] Conversion end.
Traceback (most recent call last):
  File "/Users/xuhan/Desktop/learning/neural-compressor/examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/mx_quant/run_clm_no_trainer.py", line 68, in <module>
    from neural_compressor.evaluation.lm_eval import evaluate, LMEvalParser
  File "/Users/xuhan/huggingface-env/lib/python3.10/site-packages/neural_compressor/evaluation/lm_eval/__init__.py", line 17, in <module>
    from .accuracy import cli_evaluate as evaluate
  File "/Users/xuhan/huggingface-env/lib/python3.10/site-packages/neural_compressor/evaluation/lm_eval/accuracy.py", line 42, in <module>
    from lm_eval.loggers import WandbLogger
ModuleNotFoundError: No module named 'lm_eval.loggers'
How can I fix it?
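The failing line is `from lm_eval.loggers import WandbLogger`, and `lm_eval.loggers` only exists in newer releases of lm-eval (lm-evaluation-harness), so I suspect the lm-eval in my environment is too old for this neural_compressor build. Here is a minimal check I can run without importing lm_eval itself (a sketch; the `lm-eval>=0.4.2` pin in the message is my assumption, not from the README):

```python
# Sketch: check whether the installed lm_eval package provides the
# 'lm_eval.loggers' submodule that neural_compressor tries to import.
# NOTE: the suggested pin (lm-eval>=0.4.2) is a guess, not from the README.
import importlib.util


def has_module(name: str) -> bool:
    """Return True if `name` is importable, without actually importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. lm_eval itself) is absent.
        return False


if __name__ == "__main__":
    if has_module("lm_eval.loggers"):
        print("lm_eval.loggers found; the import error lies elsewhere")
    else:
        print("lm_eval.loggers missing; try: pip install 'lm-eval>=0.4.2'")
```

If the check reports the module as missing, upgrading lm-eval in the same virtualenv and rerunning the command should at least get past this import.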