microsoft / semantic-kernel

Integrate cutting-edge LLM technology quickly and easily into your apps
https://aka.ms/semantic-kernel
MIT License

Containerize Copilot-chat-sample using docker-compose #1505

Closed N-E-W-T-O-N closed 1 year ago

N-E-W-T-O-N commented 1 year ago

I am working to containerize the Copilot-chat-sample using docker-compose. Running Copilot Chat requires two images: 1) webapi (backend), written in .NET, which uses Semantic Kernel to call the LLM and generate answers for the user's query, and 2) webapp (frontend), written in TypeScript.
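
For reference, the directory layout I am working with (the two app folders as in the sample repo; placing the compose file at the root is my choice):

copilot-chat-app/
├── docker-compose.yaml
├── webapi/   # .NET backend
└── webapp/   # TypeScript frontend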

I was able to successfully create the webapp image:


FROM node:17-alpine
WORKDIR /app

COPY package.json ./
COPY yarn.lock ./

RUN yarn install
COPY . .
RUN yarn build
EXPOSE 3030
CMD ["yarn", "start"]

which runs on localhost:3000.
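
For reference, I build and run this image with something like the following (copilot-webapp is just a local tag I chose; adjust the port mapping if your dev server listens elsewhere):

docker build -t copilot-webapp .
docker run --rm -p 3000:3000 copilot-webapp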

Now I am facing an issue running the webapi image. The first thing I observed is that the endpoint localhost:40443 is passed into the build through the appsettings.json file. I don't know what I am doing wrong. Can anyone please tell me where I am going wrong?


FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
EXPOSE 40443
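# note: EXPOSE in this intermediate build stage is not inherited by the final image below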
COPY ["CopilotChatWebApi.csproj", "."]
RUN dotnet restore "./CopilotChatWebApi.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "CopilotChatWebApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CopilotChatWebApi.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CopilotChatWebApi.dll"]

Note: currently, I am passing all credentials directly in appsettings.json.
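
Side note: ASP.NET Core also reads configuration from environment variables, using __ as the section separator, so an alternative would be to inject the credentials at run time instead of baking them into appsettings.json. A sketch, assuming the API actually listens on 40443, with an illustrative image tag and key names:

docker build -t copilot-webapi .
docker run -p 40443:40443 \
  -e AIService__Key=REPLACE_YOUR_KEY \
  -e AIService__Endpoint=REPLACE_YOUR_ENDPOINT \
  copilot-webapi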

N-E-W-T-O-N commented 1 year ago

If it works, I will open a PR to the main project.

JadynWong commented 1 year ago

I am currently deploying this example using Docker. However, there is a caveat: you need to override the Kestrel__Endpoints__Https__Url setting. Serving HTTPS from inside a container is problematic (a certificate would have to be provisioned into the image), so I use the environment variable Kestrel__Endpoints__Https__Url=http://+:80 to downgrade that endpoint to plain HTTP.
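
For clarity: in ASP.NET Core configuration, __ in an environment variable name maps to JSON nesting, so that variable overrides this section of appsettings.json:

{
  "Kestrel": {
    "Endpoints": {
      "Https": {
        "Url": "http://+:80"
      }
    }
  }
}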

webapp

FROM node:lts-alpine AS build
WORKDIR /build

# Build the static site (yarn downloads persisted via a BuildKit cache mount)
COPY . ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
    yarn && yarn build

# Serve the build output with Apache httpd
FROM httpd:alpine
WORKDIR /usr/local/apache2/htdocs/
COPY --from=build /build/build/ .

webapi

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src

# Publish
COPY . ./
RUN --mount=type=cache,id=nuget,target=/root/.nuget/packages \
    dotnet build -c Release && dotnet publish -c Release --no-restore -o /out

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "CopilotChatWebApi.dll"]
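
Note that both Dockerfiles above use RUN --mount, which needs BuildKit; if your Docker/Compose version does not enable it by default, you may need to switch it on when building:

DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build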

docker-compose.yaml

version: '3'
services:

  backend:
    container_name: webapi
    build: ./webapi
    ports:
      # set 'REACT_APP_BACKEND_URI' in the webapp's .env to this address (http://localhost:8090/)
      - 8090:80
    environment:
      - Kestrel__Endpoints__Https__Url=http://+:80
      - AIService__Endpoint=REPLACE_YOUR_ENDPOINT
      - AIService__Key=REPLACE_YOUR_KEY
      - AIService__Models__Completion=gpt-35-turbo-0301
      - AIService__Models__Embedding=text-embedding-ada-002
      - AIService__Models__Planner=gpt-35-turbo-0301
      - AIService__Type=AzureOpenAI
      - AllowedOrigins__0=http://localhost:3000
      - ChatStore__Filesystem__FilePath=./data/chatstore.json
      - ChatStore__Type=filesystem
      - MemoriesStore__Qdrant__Host=http://qdrant
      - MemoriesStore__Type=qdrant
    volumes:
      - chat_store_data:/app/data
    depends_on:
      - qdrant

  frontend:
    container_name: webapp
    build: ./webapp
    ports:
      - 3000:80

  qdrant:
    container_name: qdrant
    image: qdrant/qdrant:latest
    ports:
      - 6333:6333

volumes:
  chat_store_data:
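
With those files in place (the compose file at the root, next to the webapi and webapp folders), everything should come up with:

docker-compose up -d --build

Remember to set REACT_APP_BACKEND_URI in the webapp's .env to http://localhost:8090/ before building the frontend image, as noted in the compose file.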

N-E-W-T-O-N commented 1 year ago

Thanks @JadynWong for the Dockerfiles.