-
I have updated my miner from 2.5 to 2.10, and I found some problems.
1. The "low_power_mode" setting in cpu.txt does not seem to work. For example, one of my CPUs is an i7-5930K, which has 6 cores with 15…
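For context, here is a minimal sketch of the kind of cpu.txt entry in question (assuming this is XMR-Stak's cpu.txt format; the thread count and affinity values are illustrative, not my exact file):
```
"cpu_threads_conf" :
[
    { "low_power_mode" : true, "no_prefetch" : true, "affine_to_cpu" : 0 },
    { "low_power_mode" : true, "no_prefetch" : true, "affine_to_cpu" : 1 },
    { "low_power_mode" : true, "no_prefetch" : true, "affine_to_cpu" : 2 },
],
```
As I understand the XMR-Stak docs, "low_power_mode" can also be a number from 2 to 5 to compute that many hashes per thread, at the cost of more cache per thread.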
-
I am trying to increase the crawl rate of pyspider, but I just can't get it above 80/sec.
My setup is:
1 server with scheduler, result_worker, webui, and processor (low CPU load, 70% memory usage)
3 servers w…
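In case it matters, pyspider's deployment docs split components across machines with a shared config.json, and only one scheduler instance is allowed, so the scheduler tends to be the ceiling. A rough sketch of that layout (connection strings are placeholders, not my real setup):
```
# config.json: shared databases and message queue (placeholder URLs)
# {
#   "taskdb":        "mysql+taskdb://user:pass@dbhost:3306/taskdb",
#   "projectdb":     "mysql+projectdb://user:pass@dbhost:3306/projectdb",
#   "resultdb":      "mysql+resultdb://user:pass@dbhost:3306/resultdb",
#   "message_queue": "amqp://user:pass@mqhost:5672/%2F"
# }

# server 1: the single scheduler plus webui and result_worker
pyspider -c config.json scheduler
pyspider -c config.json result_worker
pyspider -c config.json webui

# servers 2-4: scale out fetchers and processors
pyspider -c config.json fetcher
pyspider -c config.json processor
```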
-
hey there,
I found this fork on Google while looking for a possible patch to improve support for the IT8665E chip used on many AMD B350 chipsets. Are you interested in trying to improve the d…
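In case it helps: the usual stopgap I have seen is to force the chip ID at module load, assuming this fork keeps the mainline it87 driver's force_id parameter:
```
# probe the Super I/O chip as an IT8665E (0x8665 is its chip ID);
# only force this if you are sure that is the actual chip
sudo modprobe it87 force_id=0x8665
```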
-
I use AWQ to quantize Llama-2-70B-Chat by running:
```
CUDA_VISIBLE_DEVICES="1,2,3,4,5,6,7" python quantize_llama.py
```
the code of quantize_llama.py:
```
from awq import AutoAWQForCausalLM
from tr…
```
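For reference, a complete quantize_llama.py along the lines of the AutoAWQ README would look roughly like this (the model and output paths are assumptions, not taken from the report):
```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-70b-chat-hf"  # assumed source model
quant_path = "llama-2-70b-chat-awq"            # assumed output dir
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# load the fp16 model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# run AWQ calibration/quantization, then save the 4-bit weights
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```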
-
I am getting
```
Traceback (most recent call last):
  File "predict.py", line 219, in <module>
    predictor.setup(model_base=None, model_name="nextgpt-v1.5-7b", model_path="./checkpoints/nextgpt-v1.5-7…
```
-
Hi,
I saved the LLaVA model in 4-bit using generate.py from:
https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/PyTorch-Models/Model/llava
model = optimize_model(model) …
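For anyone comparing, the save/load pattern that example uses is roughly this (a sketch per the ipex-llm save/load docs; the path is a placeholder):
```
from ipex_llm import optimize_model
from ipex_llm.optimize import load_low_bit

# save: optimize to 4-bit (sym_int4 by default), then persist the low-bit weights
model = optimize_model(model)
model.save_low_bit("./llava-4bit")   # placeholder path

# load: rebuild the model skeleton first, then attach the saved low-bit weights
model = load_low_bit(model, "./llava-4bit")
```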
-
### Qubes OS release
4.1
### Brief summary
CPU frequencies do not seem to scale properly, at least for the lower frequencies.
### Steps to reproduce
Tested on:
Lenovo ThinkPad L14 Gen 3 …
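Since Qubes dom0 delegates frequency scaling to Xen, xenpm is the tool to inspect it; these are standard Xen commands rather than anything Qubes-specific:
```
# in dom0: show the current cpufreq governor and available frequencies
xenpm get-cpufreq-para

# try a different governor to see whether the lower frequencies engage
xenpm set-scaling-governor powersave
```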
-
I have a ThinkPad T14s Gen 3 where I installed zcfan. The fan is cycling like crazy. I am monitoring the CPU and GPU temperatures and they never reach 70 °C or 61 °C. I also feel that the fan runs at high …
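For what it's worth, the trip points can be tuned; a sketch of /etc/zcfan.conf based on my reading of the zcfan README (the values are illustrative, and the option names are my assumption from that README):
```
# temperature thresholds (°C) for switching fan levels
max_temp 90
med_temp 80
low_temp 70
```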
-
I have tested the inference speed and memory usage of Qwen1.5-14B on my machine using the example in ipex-llm. The peak CPU memory usage while loading Qwen1.5-14B in 4-bit is about 24GB. The peak GPU usage is abou…
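The loading code in that example is roughly as follows (a sketch following the ipex-llm docs; the exact checkpoint ID is an assumption, and an XPU-enabled PyTorch/IPEX install is assumed):
```
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# load the checkpoint with 4-bit (sym_int4) weight quantization
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-14B-Chat",   # assumed checkpoint
    load_in_4bit=True,
    trust_remote_code=True,
)
model = model.to("xpu")        # move the quantized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat", trust_remote_code=True)
```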
-
```
2024-03-16 11:17:44,339 - INFO - Converting the current model to bf16 format......
2024-03-16 11:17:44,339 - INFO - BIGDL_OPT_IPEX: True
Traceback (most recent call last):
  File "/home/llm/BigDL/p…
```