nongze opened this issue 4 months ago
Try going through a proxy.
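The conversion script pulls the CLIP tokenizer and text-encoder weights from Hugging Face (see the download progress bars in the log below), so a stalled download can be routed through a local proxy before launching Python. A minimal sketch, assuming a hypothetical local proxy on port 7890 (substitute your own address):

import os

# Hypothetical local proxy address; huggingface_hub and requests
# both honor these environment variables when downloading.
os.environ["http_proxy"] = "http://127.0.0.1:7890"
os.environ["https_proxy"] = "http://127.0.0.1:7890"

Exporting http_proxy/https_proxy in the shell before running the script works just as well.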
Not bad, looks like it's working now.
orangepi@orangepi5:~/RK3588-stable-diffusion-GPU$ python ./convert_model_from_pth_safetensors.py --checkpoint_path ./1.safetensors --dump_path ./1/ --from_safetensors --original_config_file ./v1-inference.yaml
tokenizer.json: 100%|██████████████████████████████████████████████████████████████| 2.22M/2.22M [00:01<00:00, 1.62MB/s]
config.json: 100%|█████████████████████████████████████████████████████████████████| 4.55k/4.55k [00:00<00:00, 21.5MB/s]
pytorch_model.bin: 100%|███████████████████████████████████████████████████████████| 1.22G/1.22G [03:24<00:00, 5.95MB/s]
preprocessor_config.json: 100%|████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 1.53MB/s]
/home/orangepi/.local/lib/python3.12/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
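One quick way to confirm the dump succeeded is to load the converted directory back with diffusers before handing it to build.py. A minimal sketch, assuming diffusers is installed (it already is, since the conversion script depends on it):

import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers-format directory written by the conversion script
# and list the pipeline components it found on disk.
pipe = StableDiffusionPipeline.from_pretrained("./1", torch_dtype=torch.float32)
print(list(pipe.components.keys()))  # expect vae, unet, text_encoder, tokenizer, ...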
So is that all it takes?
Does running `python ./deploy.py --device-name opencl` start the webui?
(sd1) orangepi@orangepi5:~/RK3588-stable-diffusion-GPU$ python ./build.py
Automatically configuring target: opencl -keys=mali,opencl,gpu -device=mali -max_function_args=128 -max_num_threads=1024 -max_shared_memory_per_block=32768 -max_threads_per_block=1024 -texture_spatial_limit=16384 -thread_warp_size=16
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]An error occurred while trying to fetch ./1/vae: Error no file named diffusion_pytorch_model.safetensors found in directory ./1/vae.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 29%|██████████████████████████████████▌ | 2/7 [00:01<00:03, 1.29it/s]An error occurred while trying to fetch ./1/unet: Error no file named diffusion_pytorch_model.safetensors found in directory ./1/unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 43%|███████████████████████████████████████████████████▊ | 3/7 [00:10<00:18, 4.59s/it]/home/orangepi/anaconda3/envs/sd1/lib/python3.10/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:11<00:00, 1.61s/it]
Entry functions: ['clip', 'unet', 'vae', 'dpm_solver_multistep_scheduler_convert_model_output', 'dpm_solver_multistep_scheduler_step', 'pndm_scheduler_step_0', 'pndm_scheduler_step_1', 'pndm_scheduler_step_2', 'pndm_scheduler_step_3', 'pndm_scheduler_step_4', 'image_to_rgba', 'concat_embeddings']
Traceback (most recent call last):
File "/home/orangepi/RK3588-stable-diffusion-GPU/./build.py", line 239, in <module>
mod = legalize_and_lift_params(mod, params, ARGS)
File "/home/orangepi/RK3588-stable-diffusion-GPU/./build.py", line 129, in legalize_and_lift_params
new_params = utils.transform_params(mod_transform, model_params)
File "/home/orangepi/RK3588-stable-diffusion-GPU/web_stable_diffusion/utils.py", line 95, in transform_params
new_params[name] = vm[name + "_transform_params"](params)
File "/home/orangepi/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in __call__
raise_last_ffi_error()
File "/home/orangepi/tvm/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
raise py_err
ValueError: Traceback (most recent call last):
8: _ZN3tvm7runtime13PackedFuncObj9ExtractorINS0_16PackedFuncSubObjIZNS0_8relax_vm18VirtualMachineImpl15_LookupFunctionERKNS0_6StringEEUlNS0_7TVMArgsEPNS0_11TVM
7: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
6: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::relax_vm::VirtualMachineImpl::GetClosureInternal(tvm::runtime::String const&, bool)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
5: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)
4: tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
3: tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)
2: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
1: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (tvm::runtime::ObjectRef, long, tvm::runtime::Optional<tvm::runtime::String>)>::AssignTypedLambda<void (*)(tvm::runtime::ObjectRef, long, tvm::runtime::Optional<tvm::runtime::String>)>(void (*)(tvm::runtime::ObjectRef, long, tvm::runtime::Optional<tvm::runtime::String>), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
0: tvm::runtime::relax_vm::CheckTupleInfo(tvm::runtime::ObjectRef, long, tvm::runtime::Optional<tvm::runtime::String>)
File "/home/orangepi/tvm/src/runtime/relax_vm/builtin.cc", line 310
ValueError: Check failed: (static_cast<int64_t>(ptr->size()) == size) is false: ErrorContext(fn=clip_transform_params, loc=param[0], param=model_params, annotation=R.Tuple(R.Tuple(R.Tensor((77, 768), dtype="float32"), R.Tensor((49408, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), 
dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((3072,), dtype="float32"), R.Tensor((3072, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 3072), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), 
R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768, 768), dtype="float32"), R.Tensor((768,), dtype="float32"), R.Tensor((768,), dtype="float32")))) expect a Tuple with 1 elements, but get a Tuple with 196 elements.
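The final check is the giveaway: the compiled clip_transform_params is annotated to take a single argument that is a tuple wrapping all 196 CLIP weight tensors, but it was handed the 196 tensors as a flat tuple instead. A plain-Python sketch of the same unpacking failure (illustrative only, not the repo's actual code):

# clip_transform_params is annotated to take ONE element: a nested
# tuple holding all 196 CLIP weight tensors.
def clip_transform_params(model_params):
    (clip_tensors,) = model_params  # fails if model_params is already flat
    return clip_tensors

nested = ((1, 2, 3),)  # what the annotation expects: a 1-element tuple
flat = (1, 2, 3)       # what was actually passed: N elements directly

clip_transform_params(nested)  # ok
clip_transform_params(flat)    # ValueError: too many values to unpack

In a build like this, that kind of nesting mismatch usually means the converted checkpoint's parameter layout is not what build.py traced, which is consistent with the advice at the end of the thread to re-check the README's conversion steps.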
(sd1) orangepi@orangepi5:~/RK3588-stable-diffusion-GPU$ python ./deploy.py --device-name opencl
Traceback (most recent call last):
File "/home/orangepi/RK3588-stable-diffusion-GPU/./deploy.py", line 169, in <module>
deploy_to_pipeline(ARGS)
File "/home/orangepi/RK3588-stable-diffusion-GPU/./deploy.py", line 139, in deploy_to_pipeline
const_params_dict = utils.load_params(args.artifact_path, device)
File "/home/orangepi/RK3588-stable-diffusion-GPU/web_stable_diffusion/utils.py", line 115, in load_params
params, meta = tvmjs.load_ndarray_cache(f"{artifact_path}/params", device)
File "/home/orangepi/tvm/python/tvm/contrib/tvmjs.py", line 344, in load_ndarray_cache
json_info = json.loads(open(cachepath, "r").read())
FileNotFoundError: [Errno 2] No such file or directory: 'dist/params/ndarray-cache.json'
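This second failure follows directly from the first: deploy.py only reads build artifacts, and dist/params/ndarray-cache.json is written by a successful build.py run, which crashed above before reaching that step. A minimal pre-flight check (the "dist" artifact path matches the default visible in the traceback):

import os

# deploy.py expects the parameter cache that build.py emits; if build.py
# died early, this file will not exist yet.
artifact_path = "dist"
cache = os.path.join(artifact_path, "params", "ndarray-cache.json")
print("build artifacts present:", os.path.exists(cache))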
Can anyone tell me what's going on here?
expect a Tuple with 1 elements, but get a Tuple with 196 elements.
Read through the README.