Closed sbe-arg closed 4 months ago
Which version of assistant are you using?
latest

Which version of Nextcloud are you using?
latest stable

Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.
chromium
Describe the Bug
Relates to this issue: https://github.com/mudler/LocalAI/discussions/975

First set of errors:
8:15PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:stablediffusion_assets ContextSize:0 Seed:0 NBatch:0 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:0 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/stablediffusion_assets Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0}
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout ----------------[start generation upscaled image]------------------
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout positive_prompt: self-constructed dirndl 3d craft sheet
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout output_png_path: /tmp/b643903930003.png
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout negative_prompt:
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout step: 15
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout seed: 0
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout ----------------[prompt]------------------
[127.0.0.1]:57394 200 - GET /readyz
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stderr fopen /models/stablediffusion_assets/UNetModel-fp16.param failed
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stderr fopen /models/stablediffusion_assets/UNetModel-fp16.bin failed
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stdout ----------------[diffusion]---------------
8:15PM DBG GRPC(stablediffusion_assets-127.0.0.1:41875): stderr find_blob_index_by_name in0 failed
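The `fopen ... failed` lines above indicate that the ncnn weight pair for the UNet could not be opened from the mounted models volume. As a quick way to confirm which files are absent, here is a minimal illustrative sketch (the helper name is mine; the file list is taken directly from the failing `fopen` calls, not from LocalAI's source):

```python
# Illustrative helper (not part of LocalAI): report which of the ncnn asset
# files named in the error output are missing from a given assets directory.
import os

# File names taken directly from the failing fopen calls in the log above.
REQUIRED_ASSETS = ["UNetModel-fp16.param", "UNetModel-fp16.bin"]

def missing_assets(assets_dir):
    """Return the required ncnn files that cannot be found in assets_dir."""
    return [name for name in REQUIRED_ASSETS
            if not os.path.isfile(os.path.join(assets_dir, name))]
```

Running this against the directory mounted at `/models/stablediffusion_assets` (from the host side, the `localai_models` volume) would show whether the assets were ever downloaded.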
Second set of errors:
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout ----------------[start generation upscaled image]------------------
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout positive_prompt: a dog
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout output_png_path: /tmp/generated/images/b642847695003.png
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout negative_prompt:
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout step: 15
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout seed: 1855247344
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout ----------------[prompt]------------------
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stdout ----------------[diffusion]---------------
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stderr ModelBin read weight_data failed 135
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stderr layer load_model 9 pnnx_fold_v_8.1 failed
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stderr SIGSEGV: segmentation violation
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stderr PC=0xea7320 m=4 sigcode=1
9:55AM DBG GRPC(stablediffusion_assets-127.0.0.1:45997): stderr signal arrived during cgo execution
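Here the files are found but `ModelBin read weight_data failed` is raised while loading them, which in ncnn commonly points at a weight file that is corrupt or only partially downloaded, after which the load crashes. A very rough illustrative heuristic for spotting a truncated download (the helper and the 1 MiB threshold are assumptions of mine, not anything LocalAI or ncnn documents):

```python
# Illustrative heuristic (not LocalAI/ncnn code): a weight blob that is
# absent or far smaller than any plausible model is likely a failed or
# partial download. The 1 MiB default threshold is an assumption.
import os

def looks_truncated(path, min_bytes=1024 * 1024):
    """True if the file is absent or suspiciously small for a weight blob."""
    return (not os.path.isfile(path)) or os.path.getsize(path) < min_bytes
```

If the `.bin` looks truncated, deleting it from the models volume and letting LocalAI re-download the assets is the obvious next step to try.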
Expected Behavior
Image generation from text to work.
To Reproduce
Install LocalAI using compose, with something like this as a baseline:
version: '3.9'
name: 'artificial-intelligence'
services:
  localai:
    container_name: local-ai
    hostname: local-ai
    image: docker.io/localai/localai:v2.10.1
    ports:
      - 8080:8080
    environment:
      - MODELS_PATH=/models
      - IMAGE_PATH=/tmp/generated/images/
      - REBUILD=true
      - BUILD_API_ONLY=false # rebuild everything
      - BUILD_GRPC_FOR_BACKEND_LLAMA=true
      - BUILD_TYPE=openblas
      - GO_TAGS=stablediffusion,tts
      - GALLERIES='[{"name":"model-gallery","url":"github:go-skynet/model-gallery/index.yaml"},{"url":"github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]'
      - THREADS=2
      - DEBUG=true
      - TZ=Pacific/Auckland
    volumes:
      - localai_models:/models:cached
      - localai_images:/tmp/generated/images/
    command:
      - https://raw.githubusercontent.com/mudler/LocalAI/3cf64d1e7e835224da0ad5a3df5dcf8f675722f4/aio/cpu/text-to-text.yaml
      - https://raw.githubusercontent.com/mudler/LocalAI/3cf64d1e7e835224da0ad5a3df5dcf8f675722f4/aio/cpu/speech-to-text.yaml
      - https://raw.githubusercontent.com/mudler/LocalAI/b4386be369130338d4087291da75c5c6cb2d9c58/aio/cpu/image-gen.yaml
      #- https://raw.githubusercontent.com/mudler/LocalAI/3cf64d1e7e835224da0ad5a3df5dcf8f675722f4/aio/cpu/image-gen.yaml
      # https://localai.io/basics/getting_started/index.html#running-models
    restart: unless-stopped
volumes:
  localai_models: {}
  localai_images: {}
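One thing worth double-checking in this compose file: the GALLERIES value is inline JSON, which is easy to break with YAML/shell quoting. A quick illustrative sanity check that the value (with the surrounding single quotes stripped) still parses as the two expected gallery entries:

```python
# Sanity check (illustrative, not part of LocalAI): the GALLERIES
# environment value is a JSON array; confirm it parses and contains
# the expected gallery names.
import json

galleries = ('[{"name":"model-gallery","url":"github:go-skynet/model-gallery/index.yaml"},'
             '{"url":"github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]')
parsed = json.loads(galleries)
names = sorted(entry["name"] for entry in parsed)
print(names)  # ['huggingface', 'model-gallery']
```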
Run the assistant to generate an image from text.
I see it works with the Stable Diffusion local image app install, closing.