pytorch / torchchat

Run PyTorch LLMs locally on servers, desktop and mobile
BSD 3-Clause "New" or "Revised" License

Torchchat on Android crashes on second prompt with Llama-3.2-3b-instruct #1395

Open infil00p opened 4 hours ago

🐛 Describe the bug

Device info: Google Pixel 9, Android 15 (API level 35)

Steps to reproduce the bug:

1. Install and launch the torchchat Android app on the device.
2. Load Llama-3.2-3b-instruct and send a first prompt; it completes normally.
3. Send a second prompt.

Expected: the model should produce output for the second prompt.

What happened: the app aborts with the following logcat output:

2024-11-25 14:52:50.659 19932-20110 ExecuTorch              org.pytorch.torchchat                I  RSS after loading model: 2391.855469 MiB (0 if unsupported)
2024-11-25 14:52:50.660 19932-20110 ExecuTorch              org.pytorch.torchchat                A  In function generate(), assert failed (num_prompt_tokens < metadata_.at(kMaxSeqLen)): num_prompt_tokens 140 >= max_seq_len_ 128, Max seq length exceeded - please increase max seq len value in .../llama2/model.py
2024-11-25 14:52:50.661 19932-20110 libc                    org.pytorch.torchchat                A  Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 20110 (pool-3-thread-1), pid 19932 (torch.torchchat)
2024-11-25 14:52:50.782 19932-20004 HWUI                    org.pytorch.torchchat                I  Davey! duration=3084ms; Flags=0, FrameTimelineVsyncId=8776715, IntendedVsync=648189722387971, Vsync=648192674316622, InputEventId=276502475, HandleInputStart=648192688975798, AnimationStart=648192689011239, PerformTraversalsStart=648192689012013, DrawStart=648192795535247, FrameDeadline=648189738987971, FrameInterval=648192688396331, FrameStartTime=16677563, SyncQueued=648192798785491, SyncStart=648192799176116, IssueDrawCommandsStart=648192799894540, SwapBuffers=648192802982390, FrameCompleted=648192807580860, DequeueBufferDuration=332927, QueueBufferDuration=514974, GpuCompleted=648192807580860, SwapBuffersCompleted=648192803706715, DisplayPresentTime=648178625308026, CommandSubmissionCompleted=648192802982390, 
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A  Cmdline: org.pytorch.torchchat
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A  pid: 19932, tid: 20110, name: pool-3-thread-1  >>> org.pytorch.torchchat <<<
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #01 pc 00000000015fdf54  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (et_pal_abort+8) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #02 pc 00000000015fdd80  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (executorch::runtime::runtime_abort()+8) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #03 pc 0000000001583d9c  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (example::Runner::generate(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char>> const&, int, std::__ndk1::function<void (std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char>> const&)>, std::__ndk1::function<void (executorch::extension::llm::Stats const&)>, bool, bool)+3748) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #04 pc 00000000001e8b18  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char)+408) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #05 pc 00000000001e9438  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::MethodWrapper<int (executorch_jni::ExecuTorchLlamaJni::*)(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), &executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), executorch_jni::ExecuTorchLlamaJni, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::dispatch(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&)+156) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #06 pc 00000000001e9304  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::FunctionWrapper<int (*)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&), facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::call(_JNIEnv*, _jobject*, _jintArray*, int, int, int, _jstring*, int, facebook::jni::detail::JTypeFor<executorch_jni::ExecuTorchLlamaCallbackJni, facebook::jni::JObject, void>::_javaobject*, unsigned char, int (*)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&))+164) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #07 pc 00000000001e794c  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::MethodWrapper<int (executorch_jni::ExecuTorchLlamaJni::*)(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), &executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), executorch_jni::ExecuTorchLlamaJni, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::call(_JNIEnv*, _jobject*, _jintArray*, int, int, int, _jstring*, int, facebook::jni::detail::JTypeFor<executorch_jni::ExecuTorchLlamaCallbackJni, facebook::jni::JObject, void>::_javaobject*, unsigned char)+40) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #14 pc 0000000000357504  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/base.apk (org.pytorch.executorch.LlamaModule.generate+0)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #19 pc 0000000000005d08  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/base.apk (org.pytorch.torchchat.MainActivity$4.run+0)

It seems that the tokens are miscounted on the second call: the assert reports 140 prompt tokens against a max_seq_len of 128, which suggests the full conversation history is being fed back in as the second prompt. This doesn't happen in the iOS application. I haven't looked at the Android demo in the executorch repo, and I haven't tested other models yet, but I can start moving other Llama models onto the Android device to see whether I can reproduce this tokenizer bug.
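For context, here is a minimal sketch (in Python, with a stand-in tokenizer and a hypothetical `build_prompt` helper — neither is torchchat API) of the kind of guard the app side could apply before calling into the native runner: drop the oldest conversation turns until the tokenized prompt fits under the exported max sequence length, instead of letting the runtime assert abort the whole process.

```python
# Hypothetical sketch: keep the tokenized prompt below the model's exported
# max sequence length (128 in this crash report) by trimming old turns,
# rather than tripping the native assert and getting SIGABRT.

MAX_SEQ_LEN = 128  # value baked into the exported model in this report

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: real code would use the model's own tokenizer.
    return len(text.split())

def build_prompt(turns: list[str], max_seq_len: int = MAX_SEQ_LEN) -> str:
    """Join chat turns, dropping the oldest ones until the prompt fits."""
    kept = list(turns)
    while kept and count_tokens("\n".join(kept)) >= max_seq_len:
        kept.pop(0)  # drop the oldest turn first
    return "\n".join(kept)

# First turn (100 tokens) plus second turn (60 tokens) would exceed 128,
# mirroring the 140 >= 128 failure in the log; the guard trims the history.
turns = [("word " * 100).strip(), ("word " * 60).strip()]
prompt = build_prompt(turns)
assert count_tokens(prompt) < MAX_SEQ_LEN
```

This only masks the symptom on the app side; the other fix hinted at by the assert message is re-exporting the model with a larger max seq len.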

Versions

Here's the environment info from my MacBook Pro.

Collecting environment information...
PyTorch version: 2.6.0.dev20241002
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.30.4
Libc version: N/A

Python version: 3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:48:38) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+72b3bb3
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241002
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0.dev20241007
[pip3] torchsr==1.0.4
[pip3] torchtune==0.4.0.dev20241010+cpu
[pip3] torchvision==0.20.0.dev20241002
[conda] numpy                     1.26.4          py312h7f4fdc5_0  
[conda] numpy-base                1.26.4          py312he047099_0  
[conda] numpydoc                  1.7.0           py312hca03da5_0