-
**Description:**
There is an issue with OneHotEncoder: it no longer accepts the argument 'sparse'.
**Explanation:**
Recent versions of the OneHotEncoder object renamed the 'sparse' argument to 'sparse_ou…
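A minimal sketch of the fix, assuming scikit-learn >= 1.2 (where the parameter was renamed to sparse_output and the old sparse name was later removed):
```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# scikit-learn < 1.2:  OneHotEncoder(sparse=False)
# scikit-learn >= 1.2: the parameter is named sparse_output
encoder = OneHotEncoder(sparse_output=False)

X = np.array([["red"], ["green"], ["red"]])
print(encoder.fit_transform(X))  # dense array instead of a sparse matrix
```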
-
**Describe the bug**
When training with nightly PyTorch, the logs are full of deprecation warnings like this:
```
/home/alyssavance/miniforge3/envs/brr/lib/python3.10/site-packages/deepspeed/runt…
```
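Until the warning is addressed upstream, one common workaround (a sketch using Python's standard warnings filter, not a DeepSpeed-specific switch) is to silence deprecation warnings coming from the deepspeed package only:
```python
import warnings

# Ignore DeprecationWarnings raised from modules under deepspeed.*;
# warnings from other packages remain visible. Adjust the category if
# the messages turn out to be FutureWarnings instead.
warnings.filterwarnings(
    "ignore",
    category=DeprecationWarning,
    module=r"deepspeed\..*",
)
```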
-
### Feature request
Hi! I’ve been researching LLM quantization recently ([this paper](https://arxiv.org/abs/2405.14852)), and noticed a potentially important issue that arises when using LLMs with 1-…
-
![image](https://github.com/user-attachments/assets/dcac863e-0062-4f2d-86f0-52415810dbcc)
## Summary
If a transformer architecture trained with the DINO method is used well, multi-class anomaly detection can be performed very simply. 1) Noi…
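As a rough illustration of the idea (a sketch, not the author's exact recipe: it assumes frozen DINO ViT features scored by nearest-neighbor distance against a memory bank of normal samples):
```python
import torch

# Load a DINO-pretrained ViT-S/16 as a frozen feature extractor.
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

@torch.no_grad()
def embed(images):
    # images: (N, 3, 224, 224), ImageNet-normalized; returns unit-norm CLS features
    return torch.nn.functional.normalize(model(images), dim=-1)

def anomaly_scores(bank, test_feats):
    # bank: (M, D) features of normal training images; test_feats: (N, D).
    # Score = cosine distance to the nearest normal sample; a larger
    # distance means more anomalous, with no per-class training needed.
    sims = test_feats @ bank.T
    return 1.0 - sims.max(dim=1).values
```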
-
Hi! I tried to finetune llama-2-13b with a bottleneck Adapter, but I got a ValueError saying the model cannot be finetuned when it is loaded with load_in_8bit. What is the problem? How can I solve it?
**ValueE…
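For comparison, a sketch of the PEFT workflow that typically clears this error (the issue uses a bottleneck adapter; this example swaps in a LoRA adapter via peft purely to illustrate the key step, prepare_model_for_kbit_training, which makes an 8-bit base model trainable):
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    load_in_8bit=True,   # bitsandbytes int8 load
    device_map="auto",
)

# Without this call, attaching trainable adapters to an int8-quantized
# base model raises a ValueError like the one above.
model = prepare_model_for_kbit_training(model)

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```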
-
### A thorough review of the PyTorch interfacing class
- [ ] NeuralForecast
NeuralForecast interfaces with PyTorch indirectly, through its neural network dependency.
* [**Adapter**](https…
-
Thank you for making this repo; it's very educational. This minimal implementation is brilliant; the bigger SD repos are very hard to understand.
Do you have a script to convert them for official mo…
-
The second-token latency of llama3-8b-instruct with int4 and bs=1 is larger than with bs=2 (ipex-llm=2.5.0b20240504).
![image](https://github.com/intel-analytics/ipex-llm/assets/99886928/6666bd33-4aa9-491b-ab60-b392…
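For reference, a minimal sketch of how first- vs. second-token latency can be measured with a Hugging Face-style generate API (model/tokenizer setup omitted; the second-token latency is approximated as the difference between two-token and one-token generation times):
```python
import time

def token_latencies(model, input_ids, n_warmup=2):
    # Warm-up runs exclude one-time graph/allocation costs from the timing.
    for _ in range(n_warmup):
        model.generate(input_ids, max_new_tokens=2)

    def timed(max_new_tokens):
        start = time.perf_counter()
        model.generate(input_ids, max_new_tokens=max_new_tokens)
        return time.perf_counter() - start

    first = timed(1)
    second = timed(2) - first  # approx. latency of the 2nd generated token
    return first, second
```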
-
If you are submitting a bug report, please fill in the following details and use the tag [bug].
**Describe the bug**
The generations from the Hugging Face model (LlamaForCausalLM) and HookedTransformer…
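One way to narrow this down (a sketch assuming the TransformerLens loading path; the model name is a placeholder, and comparing logits on a fixed prompt separates numerical drift from sampling differences):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformer_lens import HookedTransformer

name = "meta-llama/Llama-2-7b-hf"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
hf_model = AutoModelForCausalLM.from_pretrained(name)
tl_model = HookedTransformer.from_pretrained(name, hf_model=hf_model, tokenizer=tokenizer)

tokens = tokenizer("The quick brown fox", return_tensors="pt").input_ids
with torch.no_grad():
    hf_logits = hf_model(tokens).logits
    tl_logits = tl_model(tokens)  # HookedTransformer returns logits by default

print(torch.allclose(hf_logits, tl_logits, atol=1e-3))
print((hf_logits.argmax(-1) == tl_logits.argmax(-1)).all())
```
Note that HookedTransformer.from_pretrained applies weight processing (e.g., layer-norm folding) by default, which is a common source of small numerical differences; from_pretrained_no_processing loads the weights unmodified.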
-
File "main.py", line 9, in
from transformers import AdamW, WarmUp, get_linear_schedule_with_warmup
ImportError: cannot import name 'WarmUp' from 'transformers' (/home/user/.local/lib/python3.8/…
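For context, WarmUp in transformers is part of the TensorFlow/Keras optimization API, so the import fails in a PyTorch-only environment; newer transformers versions also deprecate importing AdamW. A sketch of the usual PyTorch-side replacement (warmup step counts are hypothetical):
```python
import torch
from torch.optim import AdamW  # instead of transformers.AdamW
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # placeholder model for illustration
optimizer = AdamW(model.parameters(), lr=5e-5)
# get_linear_schedule_with_warmup covers the warmup behavior that
# WarmUp provides on the TensorFlow side.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)
```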