soulteary / docker-prompt-generator

Using a Model to generate prompts for Model applications. / A shortcut tool that uses a model to generate image-generation prompts, supporting MidJourney, Stable Diffusion, and more.
https://soulteary.com/2023/04/05/eighty-lines-of-code-to-implement-the-open-source-midjourney-and-stable-diffusion-spell-drawing-tool.html
MIT License
1.16k stars 111 forks

Error when building the image #11

Open dwow100 opened 1 year ago

dwow100 commented 1 year ago

docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base

Error response from daemon: Dockerfile parse error line 11: FROM requires either one or three arguments
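This error is most likely caused by the heredoc (`<<EOF`) syntax in the `RUN` instruction, which is only understood by BuildKit's Dockerfile frontend (1.4 and later). The legacy builder parses each heredoc body line as its own instruction, so `from transformers import ...` is read as a `FROM` instruction with too many arguments, which is exactly the error shown. A sketch of a fix that keeps the heredoc (assuming your Docker version ships BuildKit) is to add a syntax directive as the first line of the Dockerfile and build with BuildKit enabled:

```dockerfile
# syntax=docker/dockerfile:1
# The directive above pins the BuildKit Dockerfile frontend,
# which supports heredocs in RUN instructions.
FROM python:3.10-slim
RUN cat > /get-models.py <<EOF
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
EOF
```

Build with `DOCKER_BUILDKIT=1 docker build ...` (BuildKit is the default in recent Docker releases). If your Docker is too old for BuildKit, the `echo`-based rewrites below avoid heredocs entirely.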

cnhuz commented 1 year ago

I hit this on Windows as well. Change this block in Dockerfile.base:


RUN cat > /get-models.py <<EOF
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
pipeline('text-generation', model='succinctly/text2image-prompt-generator')
EOF

to:


RUN echo "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline" > /get-models.py && \
    echo "AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /get-models.py && \
    echo "AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /get-models.py && \
    echo "pipeline('text-generation', model='succinctly/text2image-prompt-generator')" >> /get-models.py

and the build succeeds.
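The chained `echo` commands above can also be collapsed into a single `printf`, which writes one line per argument and is easier to maintain. A minimal sketch (writing to `/tmp/get-models.py` here for illustration; the Dockerfile writes to `/get-models.py`):

```shell
# Write the model-download script in one command:
# printf '%s\n' emits each argument on its own line.
printf '%s\n' \
  "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline" \
  "AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" \
  "AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" \
  "pipeline('text-generation', model='succinctly/text2image-prompt-generator')" \
  > /tmp/get-models.py
```

In the Dockerfile this becomes a single `RUN printf '%s\n' ... > /get-models.py` instruction with no heredoc for the legacy parser to trip over.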

baymax55 commented 1 year ago

If Dockerfile.gpu fails to build, replace the following code:


RUN cat > /get-models.py <<EOF
from clip_interrogator import Config, Interrogator
import torch
config = Config()
config.device = 'cuda' if torch.cuda.is_available() else 'cpu'
config.blip_offload = False if torch.cuda.is_available() else True
config.chunk_size = 2048
config.flavor_intermediate_count = 512
config.blip_num_beams = 64
config.clip_model_name = "ViT-H-14/laion2b_s32b_b79k"
ci = Interrogator(config)
EOF

with:

RUN echo "from clip_interrogator import Config, Interrogator" > /get-models.py && \
    echo "import torch" >> /get-models.py && \
    echo "config = Config()" >> /get-models.py && \
    echo "config.device = 'cuda' if torch.cuda.is_available() else 'cpu'" >> /get-models.py && \
    echo "config.blip_offload = False if torch.cuda.is_available() else True" >> /get-models.py && \
    echo "config.chunk_size = 2048" >> /get-models.py && \
    echo "config.flavor_intermediate_count = 512" >> /get-models.py && \
    echo "config.blip_num_beams = 64" >> /get-models.py && \
    echo "config.clip_model_name = \"ViT-H-14/laion2b_s32b_b79k\"" >> /get-models.py && \
    echo "ci = Interrogator(config)" >> /get-models.py

(Note: the original comment had two bugs, fixed above — the first redirect should use `>` so the file starts fresh, and the `blip_num_beams` line was missing the leading `/` in `/get-models.py`.)