microsoft / azure-pipelines-tasks

Tasks for Azure Pipelines
https://aka.ms/tfbuild
MIT License

[BUG]: No data was written into the file (Docker@2) #17893

Open JtMotoX opened 1 year ago

JtMotoX commented 1 year ago

Task name

Docker@2

Task version

2.214.0

Environment type (Please select at least one environment where you face this issue)

Azure DevOps Server type

dev.azure.com (formerly visualstudio.com)

Azure DevOps Server Version (if applicable)

No response

Operating system

Debian 11

Issue Description

I am getting this warning:

##[warning]No data was written into the file /home/vsts/work/_temp/task_outputs/build_****.txt

This was originally brought up in https://github.com/microsoft/azure-pipelines-tasks/issues/12470 but the issue was closed without it being resolved. Many users (including myself) are still facing this issue.

Task log

. . .
#8 exporting to image
#8 sha256:***
#8 exporting layers done
#8 writing image sha256:*** done
#8 naming to *** done
#8 DONE 0.0s
##[warning]No data was written into the file /__w/_temp/task_outputs/build_***.txt
Finishing: Docker build

Additional info

- task: Docker@2
  inputs:
    command: 'build'
slayoffer commented 1 year ago

Having the same issue on same self-hosted Linux agent with the same task.

JtMotoX commented 1 year ago

I have done some troubleshooting and may have stumbled upon something. I noticed that one of our agents does not get this error, so I started comparing it with the other agents. It was running an older version of Docker. I upgraded Docker to match the version on the other agents and all of a sudden it also started to get the error. I downgraded Docker back to the original version and the error went away.

Here are my findings:

| Version | Status |
| --- | --- |
| docker-ce-3:20.10.21-3.el8.x86_64 | fine |
| docker-ce-3:23.0.1-1.el8.x86_64 | broken |
| docker-ce-3:23.0.3-1.el8.x86_64 | broken |

So far I have not found any issues with downgrading; BuildKit still works fine. Hopefully Microsoft will provide an update to the Docker@2 task for the newer Docker versions.
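The break between 20.10.x and 23.0.x in the table tracks the builder change: starting with Docker 23, BuildKit is the default builder. A small shell sketch (the version strings below are examples, and on a real agent you would feed it the output of `docker version --format '{{.Server.Version}}'`) to flag agents in the affected range:

```shell
# Heuristic check: Docker >= 23 defaults to BuildKit, which is the
# version range where this warning starts appearing in the thread.
is_buildkit_default() {
  major="${1%%.*}"   # take the major version, e.g. "23" from "23.0.1"
  [ "$major" -ge 23 ]
}

is_buildkit_default "20.10.21" || echo "20.10.21: classic builder is the default"
is_buildkit_default "23.0.1"   && echo "23.0.1: BuildKit is the default"
```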

Here is what I did to "fix" the other agents.

Workaround # 1:

sudo yum remove \
    docker-buildx-plugin \
    docker-ce \
    docker-ce-cli \
    docker-ce-rootless-extras

sudo yum install \
    docker-ce-3:20.10.9-3.el7.x86_64 \
    docker-ce-cli-1:20.10.21-3.el7.x86_64 \
    docker-ce-rootless-extras-20.10.21-3.el7.x86_64
JtMotoX commented 1 year ago

Just found another workaround. If you are using a newer version of Docker, make sure you define the DOCKER_BUILDKIT environment variable in the task. The value doesn't seem to matter, so you can use 0 or 1 to get rid of the error. It will be a pain to set this env var in every task across every project, so if anybody has a solution to do this at a global level, please let me know. I have tried setting the BuildKit feature in daemon.json on the host and restarting the service, but the pipeline still throws the warning.

Workaround # 2:

  - task: Docker@2
    inputs:
      command: 'build'
    env:
      DOCKER_BUILDKIT: 1


EDIT:

This method does not seem to work consistently.

JtMotoX commented 1 year ago

So far this is the best workaround I have found. This sets the environment variable globally which gets rid of the warning.

Workaround # 3:

Edit the runsvc.sh file in the agent directory and add the following export, then restart the agent service.

# insert anything to setup env when running as a service
export DOCKER_BUILDKIT=1


EDIT:

This method does not seem to work consistently.
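For anyone who still wants to try Workaround #3 from an automation script, the edit can be applied idempotently. This is only a sketch: `AGENT_DIR` is an assumption (point it at your real agent install directory), and the `mktemp` fallback exists solely to keep the example runnable outside an agent machine.

```shell
# Sketch of applying Workaround #3 non-interactively.
# AGENT_DIR is an assumption -- set it to your agent install directory.
AGENT_DIR="${AGENT_DIR:-$(mktemp -d)}"
[ -f "$AGENT_DIR/runsvc.sh" ] || touch "$AGENT_DIR/runsvc.sh"   # real file ships with the agent
# Append the export only once, even if this script is re-run.
grep -q '^export DOCKER_BUILDKIT=' "$AGENT_DIR/runsvc.sh" \
  || printf '\nexport DOCKER_BUILDKIT=1\n' >> "$AGENT_DIR/runsvc.sh"
```

After the edit, restart the agent service (e.g. with the agent's `svc.sh` helper) so the new environment takes effect.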

KLuuKer commented 1 year ago

Can confirm having the same issue on a VMSS pool using the Ubuntu Minimal LTS 22.04 image canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts-gen2:latest

using this docker version:

Client:
 Version:           23.0.4
 API version:       1.42
 Go version:        go1.19.8
 Git commit:        f480fb1
 Built:             Fri Apr 14 10:32:03 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          23.0.4
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.19.8
  Git commit:       cbce331
  Built:            Fri Apr 14 10:32:03 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.20
  GitCommit:        2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc:
  Version:          1.1.5
  GitCommit:        v1.1.5-0-gf19387a
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
filetrail-dgrossman commented 1 year ago

Just to add to the chorus here: my on-prem agent on a Windows VM throws this warning while building a Linux container. Surprised to see this has been a problem for something like 3 years now...

jimAtLoyal commented 1 year ago

Workaround # 4:

Similar to the ones above, but this sets it for the entire stage/job in the YAML.

variables:
- name: DOCKER_BUILDKIT
  value: 1
gvtek0 commented 1 year ago

Another echoing of the chorus: we just started getting this on Azure Pipelines hosted agents in the past 24-48 hours on Docker@2 tasks. Luckily it's not breaking anything for us; we'll look into some of the workarounds if we get time.

swisman commented 1 year ago

> Another echoing of the chorus, we just started getting this on Azure Pipelines hosted agents in the past 24-48 hours on Docker@2 tasks. [...]

Same here, but fortunately Workaround #4 seems to work: after setting DOCKER_BUILDKIT as suggested above, the warning no longer showed up in two consecutive runs. Hopefully Azure behaves deterministically here and the workaround keeps working.

aperona-hai commented 1 year ago

Same here. One strange thing I've noticed: agents seem to have started using BuildKit without any apparent change on our side (ubuntu-latest hosted), and even though all agents in the pool run the same agent version, one of them still appears to use the classic builder instead of BuildKit.

twinter-amosfivesix commented 1 year ago

In case anyone doesn't realize it, BuildKit became the default builder starting in Docker 23. BuildKit behaves differently from the old builder, sending no output to stdout; that's what the original ticket was about. The fix for the original ticket was to detect when BuildKit was enabled by looking at the DOCKER_BUILDKIT environment variable and acting accordingly when it was set. But with Docker 23, BuildKit is enabled by default even if you don't have DOCKER_BUILDKIT defined, and in that case the task doesn't know you are using BuildKit, so it expects the old builder's behavior. This is why the workaround mentioned above of defining DOCKER_BUILDKIT works.

We use a self-hosted agent and when I started using Docker 23 on it, I defined DOCKER_BUILDKIT at the agent level, so it's always defined for all jobs. So I have not run into this error again.
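The stdout/stderr difference described above is easy to reproduce without Docker at all. This sketch captures only stdout, the way the task's output file does, from a process that writes solely to stderr, then emits the same style of warning when the capture file stays empty (the warning text is copied from the logs earlier in this thread):

```shell
# Reproduce the mechanism: BuildKit writes build progress to stderr,
# so a capture of stdout alone ends up as an empty file.
out_file=$(mktemp)
sh -c 'echo "#8 exporting to image" >&2' > "$out_file"

if [ ! -s "$out_file" ]; then   # -s: true only if the file is non-empty
  echo "##[warning]No data was written into the file $out_file"
fi
```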

swisman commented 1 year ago

> In case anyone doesn't realize it, BuildKit became the default builder starting in Docker 23. [...] This is why the workaround mentioned above of defining DOCKER_BUILDKIT works.
>
> We use a self-hosted agent and when I started using Docker 23 on it, I defined DOCKER_BUILDKIT at the agent level, so it's always defined for all jobs. [...]

Thanks for sharing this info, I didn't realize it. Great that you highlighted what's going on behind the scenes.

sayandaw commented 1 year ago

Same issue; I am constantly getting this warning now with the Docker@2 task.

matt1munich commented 1 year ago

+1 (screenshot)

i-Coderr commented 11 months ago

(screenshot)

celluj34 commented 9 months ago

I am still getting this, even on a hosted agent AND with DOCKER_BUILDKIT set to 1.

eli-fin commented 8 months ago

Same here

GKhelio commented 7 months ago

Same problem; Workaround #4 is no longer working.

kimmymonassar commented 6 months ago

Any news on this?

klose4711 commented 5 months ago

Any known workaround that is still working to get rid of these warnings?

matt-lethargic commented 5 months ago

All of our pipelines now have this, and it gets rid of the warnings:

variables:
  DOCKER_BUILDKIT: 1
AvinashPamula commented 5 months ago

#9 naming to //avinashpamulanodejs:17 done
#9 DONE 0.4s

##[warning]No data was written into the file /home/vsts/work/_temp/task_outputs/build_1716980577110.txt

I am still getting this warning. The build shows successful and my image is pushed to ACR, so can someone help me see where the problem is? Here's my pipeline:

# Docker
# Build and push an image to Azure Container Registry
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:

resources:

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '484f7284-2d0d-4a5a-9c89-465005269850'
  imageRepository: 'avinash123'
  containerRegistry: 'azure12.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/tg-redirect-master/Dockerfile'
  tag: '$(Build.BuildId)'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:

AvinashPamula commented 5 months ago

Any solutions for the above issue?

pbertie commented 5 months ago

Adding the following, as suggested above, fixed it for my builds. However, even with this I still get the warning for the Export and Tag steps.

    variables:
      - name: DOCKER_BUILDKIT
        value: 1
TLA020 commented 4 months ago

same issue here. recommended fix does nothing

jimAtLoyal commented 2 months ago

The issue seems to occur for containers that write nothing to stdout, which is perfectly valid. I get this when running trivy in a container in a pipeline, since it writes nothing to stdout when -o is used.

Below is a test program and pipeline. The step that only writes to stderr gets the warning.

if (args.Length == 0 || args[0] == "stdout" || args[0] == "both")
    Console.WriteLine("Hello from stdout!");

if (args.Length > 0 && (args[0] == "stderr" || args[0] == "both"))
    Console.Error.WriteLine("Hello from stderr!");
trigger: none
pr: none

jobs:
  - ${{ each buildKit in split('0,1',',')}}:
    - job: test_${{ buildKit }}
      displayName: 'Test with BUILDKIT=${{ buildKit }}'

      variables:
        - ${{ if eq(buildKit, '1') }}:  # compare explicitly; the string '0' is truthy in template expressions
            - name: DOCKER_BUILDKIT
              value: 1

      steps:
        - task: Docker@2
          displayName: Build stdouttest Image
          inputs:
            repository: stdouttest
            command: build
            Dockerfile: 'tools/stdoutTest/Dockerfile'
            buildContext: 'tools/stdoutTest'
            tags: latest

        - task: Docker@2
          displayName: 'Stdout only '
          inputs:
            command: 'run'
            arguments: --rm stdouttest stdout

        - task: Docker@2 
          displayName: 'Stderr only '
          inputs:
            command: 'run'
            arguments: --rm stdouttest stderr # 👈 produces the warning

        - task: Docker@2
          displayName: 'Both stdout and stderr'
          inputs:
            command: 'run'
            arguments: --rm stdouttest both
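The Dockerfile for the test image isn't included in the comment above. A hypothetical minimal version that would fit the pipeline (the base images, project layout, and published assembly name `stdoutTest.dll` are all assumptions, not taken from the original comment) might look like:

```dockerfile
# Hypothetical Dockerfile for tools/stdoutTest (a sketch, not the author's)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/runtime:8.0
WORKDIR /app
COPY --from=build /app .
# Arguments ("stdout", "stderr", "both") are passed through to the test program.
ENTRYPOINT ["dotnet", "stdoutTest.dll"]
```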
jimAtLoyal commented 2 months ago

This workaround works for me.

  1. Use Docker@2 only for 'login':

       - task: Docker@2
         inputs:
           containerRegistry: ${{ parameters.registry }}
           command: 'login'

  2. Use a script task to run the docker command by hand:

       - pwsh: |
           Set-StrictMode -Version Latest

           "##[command]docker run --rm -q ..."

           docker run --rm -q ...
I do not get any errant warnings running docker run that way.
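For teams that prefer bash over PowerShell, the same pattern can be sketched with a bash step (the `docker run` arguments are placeholders carried over from the comment above, and `${{ parameters.registry }}` is assumed to be defined elsewhere in the pipeline):

```yaml
  - task: Docker@2
    inputs:
      containerRegistry: ${{ parameters.registry }}
      command: 'login'

  - bash: |
      set -euo pipefail
      echo "##[command]docker run --rm -q ..."
      docker run --rm -q ...
    displayName: 'docker run without Docker@2 output capture'
```

Since the script step doesn't pipe stdout into a task output file, there is no empty file for the agent to warn about.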