Open raj-ritu17 opened 1 month ago
Hi @raj-ritu17, I have reproduced the error during model merging. We will try to fix it and update here once it is solved.
Hi @raj-ritu17. We have fixed this bug. Please install the latest ipex-llm (2.1.0b20240527); there is no need to modify the utils code, just run this script to merge the model.
According to my local experiment, the merging process works, and you can use the merged model for inference by following https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/mistral
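For context, merging a LoRA adapter numerically just folds the scaled low-rank update into the base weight matrix. Below is a toy sketch of that math in NumPy; the shapes, rank, and alpha are hypothetical placeholders, not the values used by the actual merge script.

```python
import numpy as np

# Toy illustration of what LoRA merging does numerically.
# Shapes, rank r, and alpha are hypothetical; real adapters typically
# use r in the range 8-64 and much larger weight matrices.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 6, 4, 2, 16

W = rng.standard_normal((d_out, d_in))  # base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection
scaling = alpha / r

W_merged = W + scaling * (B @ A)        # fold the adapter into the base weight

# After merging, a forward pass through the single merged matrix equals
# the base pass plus the scaled adapter pass.
x = rng.standard_normal((d_in,))
assert np.allclose(W_merged @ x, W @ x + scaling * (B @ (A @ x)))
```

Once merged, the adapter branch is gone entirely, which is why the merged model can be loaded and run like any plain Hugging Face checkpoint.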
Base model: Weyaxi/Dolphin2.1-OpenOrca-7B
Scenario:
What else I tried: added `torch_dtype=torch.bfloat16` in the utils code (in the `merge_adapter` function, e.g. common/utils/util.py +183).
This did not solve the issue and produced an empty error.
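As background on why a `torch_dtype` workaround is a plausible thing to try: PyTorch refuses to matmul tensors of different floating-point dtypes, so a base model loaded in float32 and adapter weights in bfloat16 (or vice versa) will error during the merge. A minimal, self-contained demonstration of that failure mode (unrelated to ipex-llm itself):

```python
import torch

a = torch.ones(2, 2, dtype=torch.float32)
b = torch.ones(2, 2, dtype=torch.bfloat16)

# Mixing float32 and bfloat16 in a matmul raises a RuntimeError.
try:
    a @ b
except RuntimeError as e:
    print("dtype mismatch:", e)

# Casting both operands to a common dtype fixes it.
b = b.to(a.dtype)
print((a @ b).dtype)  # torch.float32
```

Passing a single `torch_dtype` when loading both the base model and the adapter keeps all weights in one dtype, avoiding this class of error.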