arianyambao opened 2 weeks ago
A quick fix I used is to install unsloth from a previously working branch version:

pip --no-cache-dir install "unsloth[cu118-ampere] @ git+https://github.com/unslothai/unsloth.git@a2f8db3e7341f983af5814a2c56f54fa29ee548d"

It worked; however, you need to set

os.environ["UNSLOTH_IS_PRESENT"] = "1"

which was a related issue from !1252
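The workaround above can be sketched as a minimal Python snippet. Note the environment variable must be set before unsloth is imported; the import itself is left commented out, since it assumes the pinned install above and a CUDA-capable machine:

```python
import os

# Per the workaround above: mark unsloth as present before importing it.
# Setting this after `import unsloth` would be too late for the flag to matter.
os.environ["UNSLOTH_IS_PRESENT"] = "1"

# import unsloth  # assumes the pinned cu118-ampere install and a CUDA GPU

print(os.environ["UNSLOTH_IS_PRESENT"])
```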
Oh maybe this is an old triton version - I will add a flag to turn the casting off!
I am facing the same issue as well. Please help me with a resolution.
Yes, getting the same error. I tried the quick fix above (installing unsloth[cu118-ampere] from the previously working commit and setting os.environ["UNSLOTH_IS_PRESENT"] = "1", per !1252) and ended up with more issues.
Same issue here
@sureshmol @arianyambao @Ar9av @jonwolds
Apologies everyone - I added a temporary fix in the nightly branch. Would it be possible for you to test whether it works? Thanks a lot, and apologies again for the issue!
pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@nightly"
Solved, thank you!
I am having the same issue
Hi @danielhanchen, it works by default now, without the flags I had to set explicitly last time. However, I built it with cu118-ampere instead of colab-new:

pip --no-cache-dir install "unsloth[cu118-ampere] @ git+https://github.com/unslothai/unsloth.git@nightly"

Here's a short output:
2024-11-11T02:56:35.050957027-08:00
Map: 0%| | 0/854 [00:00<?, ? examples/s]
Map: 100%|██████████| 854/854 [00:00<00:00, 59543.12 examples/s]
2024-11-11T02:56:35.823763879-08:00 ==((====))== Unsloth 2024.11.5: Fast Llama patching. Transformers = 4.46.2.
2024-11-11T02:56:35.823787625-08:00 \\ /| GPU: NVIDIA RTX A6000. Max memory: 47.536 GB. Platform = Linux.
2024-11-11T02:56:35.823794190-08:00 O^O/ \_/ \ Pytorch: 2.1.0+cu118. CUDA = 8.6. CUDA Toolkit = 11.8.
2024-11-11T02:56:35.823799358-08:00 \ / Bfloat16 = TRUE. FA [Xformers = 0.0.22.post7+cu118. FA2 = True]
2024-11-11T02:56:35.823804177-08:00 "-____-" Free Apache license: http://github.com/unslothai/unsloth
2024-11-11T02:56:35.823809275-08:00 Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
2024-11-11T02:59:02.909143520-08:00 Unsloth 2024.11.5 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.
2024-11-11T02:59:05.118042752-08:00
Map (num_proc=2): 0%| | 0/854 [00:00<?, ? examples/s]
Map (num_proc=2): 50%|█████ | 427/854 [00:00<00:00, 556.73 examples/s]
Map (num_proc=2): 100%|██████████| 854/854 [00:00<00:00, 1070.49 examples/s]
Map (num_proc=2): 100%|██████████| 854/854 [00:01<00:00, 836.49 examples/s]
2024-11-11T02:59:05.209715063-08:00 Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
2024-11-11T02:59:06.223910137-08:00 ==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
2024-11-11T02:59:06.223946594-08:00 \\ /| Num examples = 854 | Num Epochs = 6
2024-11-11T02:59:06.223954626-08:00 O^O/ \_/ \ Batch size per device = 4 | Gradient Accumulation steps = 4
2024-11-11T02:59:06.223959305-08:00 \ / Total batch size = 16 | Total steps = 318
2024-11-11T02:59:06.223962727-08:00 "-____-" Number of trainable parameters = 41,943,040
2024-11-11T02:59:11.991167966-08:00
0%| | 0/318 [00:00<?, ?it/s]
0%| | 1/318 [00:05<27:22, 5.18s/it]
Thank you!
@arianyambao Oh ok glad it works!
@opertifelipe Did you try updating Unsloth? pip install --upgrade --no-cache-dir --no-deps unsloth
Yes, now it works! 😀
I've been using unsloth for my training runs; however, updating the version resulted in an error.
Tried fine-tuning and got this error: