PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Question]: When using PPDiffusers, calling FastDeploy produced the following error. How can I resolve it? #6437

Open Siri-2001 opened 1 year ago

Siri-2001 commented 1 year ago

Please describe your question

```
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['']
Traceback (most recent call last):
  File "main.py", line 34, in <module>
    image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
  File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_mega.py", line 91, in text2img
    output = temp_pipeline(
  File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 368, in __call__
    text_embeddings = self._encode_prompt(
  File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 160, in _encode_prompt
    text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
  File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/fastdeploy_utils.py", line 102, in __call__
    return self.model.infer(inputs)
  File "/root/miniconda3/lib/python3.8/site-packages/fastdeploy/runtime.py", line 64, in infer
    return self._runtime.infer(data)
OSError:
```
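The first line of the output is a warning, not the crash itself: CLIP's text encoder accepts at most 77 tokens, so anything beyond that is silently dropped. A minimal sketch of pre-truncating a token-id sequence to that limit (this is not the PPDiffusers API; the function name and the example ids are illustrative, though 49406/49407 are CLIP's conventional BOS/EOS ids):

```python
# Illustrative sketch: clip a token-id list to CLIP's 77-token limit,
# keeping the BOS token at the front and forcing EOS into the last slot.
def truncate_to_clip_limit(token_ids, eos_id, max_len=77):
    """Return token_ids shortened to max_len with a trailing EOS."""
    if len(token_ids) <= max_len:
        return list(token_ids)
    truncated = list(token_ids[:max_len])
    truncated[-1] = eos_id  # CLIP expects the sequence to end with EOS
    return truncated

# 1 BOS + 100 prompt tokens + 1 EOS = 102 tokens, over the limit
ids = [49406] + list(range(100, 200)) + [49407]
short = truncate_to_clip_limit(ids, eos_id=49407)
print(len(short), short[-1])  # 77 49407
```

In practice the same effect is usually achieved by passing `max_length=77, truncation=True` to the tokenizer, so the pipeline never sees an over-long sequence.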


```
C++ Traceback (most recent call last):
0   paddle::AnalysisPredictor::ZeroCopyRun()
1   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
2   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
3   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
4   paddle::framework::StructKernelImpl<paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
5   paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
6   void phi::funcs::Blas<phi::GPUContext>::MatMul(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const
7   void phi::funcs::Blas<phi::GPUContext>::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
8   phi::GPUContext::CublasCall(std::function<void (cublasContext*)> const&) const
9   phi::GPUContext::Impl::CublasCall(std::function<void (cublasContext*)> const&)::{lambda()#1}::operator()() const
10  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
11  phi::enforce::GetCurrentTraceBackString[abi:cxx11]
```


Error Message Summary:

```
ExternalError: CUBLAS error(7). [Hint: Please search for the error code(7) on website (https://docs.nvidia.com/cuda/cublas/index.html#cublasstatus_t) to get Nvidia's official solution and advice about CUBLAS Error.] (at /home/fastdeploy/develop/paddle_build/v0.0.0/Paddle/paddle/fluid/inference/api/resource_manager.cc:282)
```
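For reference, code 7 in the `cublasStatus_t` enum is `CUBLAS_STATUS_INVALID_VALUE`, which typically means an invalid parameter (e.g. a mismatched shape or dtype) reached a GEMM call. A small lookup table of the enum's numeric values, taken from the cuBLAS documentation, makes the hint's error code readable without opening the docs:

```python
# Numeric values of cublasStatus_t, per the cuBLAS documentation.
CUBLAS_STATUS = {
    0: "CUBLAS_STATUS_SUCCESS",
    1: "CUBLAS_STATUS_NOT_INITIALIZED",
    3: "CUBLAS_STATUS_ALLOC_FAILED",
    7: "CUBLAS_STATUS_INVALID_VALUE",
    8: "CUBLAS_STATUS_ARCH_MISMATCH",
    11: "CUBLAS_STATUS_MAPPING_ERROR",
    13: "CUBLAS_STATUS_EXECUTION_FAILED",
    14: "CUBLAS_STATUS_INTERNAL_ERROR",
    15: "CUBLAS_STATUS_NOT_SUPPORTED",
    16: "CUBLAS_STATUS_LICENSE_ERROR",
}

print(CUBLAS_STATUS[7])  # CUBLAS_STATUS_INVALID_VALUE
```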

w5688414 commented 6 months ago

For PPDiffusers-related questions, please move over to PaddleMIX.

https://github.com/PaddlePaddle/PaddleMIX