nitinkansal1984 opened 3 years ago
Below are the two steps I am using in my buildspec.yaml:

```yaml
- skaffold config set default-repo $REPOSITORY_URI
- skaffold run -f skaffold.yaml
```
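For context, a fuller buildspec sketch including ECR authentication; the `pre_build` login step and the `$AWS_REGION`/`$ECR_REGISTRY` variables are assumptions, not taken from this thread:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Assumed: log in to the ECR registry so docker/skaffold can push.
      # $ECR_REGISTRY would look like <account-id>.dkr.ecr.<region>.amazonaws.com
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
  build:
    commands:
      - skaffold config set default-repo $REPOSITORY_URI
      - skaffold run -f skaffold.yaml
```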
Below is the skaffold.yaml I am using:
```yaml
apiVersion: skaffold/v2beta17
kind: Config
build:
  artifacts:
    - image: emailservice
      context: src/emailservice
    - image: productcatalogservice
      context: src/productcatalogservice
    - image: recommendationservice
      context: src/recommendationservice
    - image: shippingservice
      context: src/shippingservice
    - image: checkoutservice
      context: src/checkoutservice
    - image: paymentservice
      context: src/paymentservice
    - image: currencyservice
      context: src/currencyservice
    - image: cartservice
      context: src/cartservice/src
    - image: frontend
      context: src/frontend
    - image: loadgenerator
      context: src/loadgenerator
    - image: adservice
      context: src/adservice
  tagPolicy:
    gitCommit: {}
```
I am using AWS ECR to pull/push images and AWS CodeCommit to store the source code. When I make any change in the code, the tagger fails, which causes all images to be rebuilt and pushed to ECR.
@nitinkansal1984 you are using a git tagging policy, but skaffold is unable to run the following command:

```
git describe --tags --always
```

which fails with:

```
Not a valid object name HEAD
```

I see some Stack Overflow threads that address this. Can you try these? https://stackoverflow.com/questions/41582949/how-to-resolve-stderr-fatal-not-a-valid-object-name-head-in-jenkins
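A common cause in CodePipeline is that the source artifact handed to CodeBuild contains no .git metadata, so HEAD does not resolve. Assuming that is the cause here, one workaround sketch is to create a throwaway commit in `pre_build` so `git describe --tags --always` has something to describe:

```yaml
phases:
  pre_build:
    commands:
      # Sketch only: re-initialize a repo over the exported source so HEAD
      # exists. Real history is lost, so the generated tags won't match
      # CodeCommit commit IDs; this merely unblocks the gitCommit tagger.
      - git init
      - git config user.email "ci@example.com" && git config user.name "ci"
      - git add -A
      - git commit -m "ci snapshot"
```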
I don't think this issue is related to the git version. I am using the latest version of git in the build runtime.
@tejal29 @jakehow @schmurfy Can someone please direct me to some other reference link? I am testing this with AWS ECR. Has someone already tested it with AWS ECR?
I can see images being pushed to ECR untagged. Each time it creates a new image. This is a small microservice setup I am running. Instead of building only the affected image, it keeps rebuilding all the images on each commit. Please suggest!
You can try changing the tagPolicy to inputDigest and setting tryImportMissing in the build-env definition:
```diff
 ...
 build:
+  local:
+    tryImportMissing: true
+  tagPolicy:
+    inputDigest: {}
   artifacts:
 ...
```
This way, the tag generated for each artifact won't change unless there is a source code change for that specific artifact (due to the inputDigest tagger), and the build will be skipped if skaffold is able to pull an image with the same tag (due to the tryImportMissing setting).
Is this fine?

```yaml
tagPolicy:
  inputDigest: {}
local:
  tryImportMissing: true
  useBuildkit: false
  concurrency: 5
```
@gsquared94
I am getting one more error this time: `unable to stream build output: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit. Please fix the Dockerfile and try again.`
That's the Docker rate-limiting error: https://www.docker.com/increase-rate-limits. Are you sure you're not using DockerHub images anywhere? You want to set default-repo to your AWS ECR. You can do it the way you're doing it, in the global config, or pass the flag --default-repo=<your container registry>.
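For reference, both approaches sketched as buildspec commands; `$REPOSITORY_URI` is assumed to hold the ECR repository URI:

```yaml
commands:
  # Option 1: persist the default repo in skaffold's global config
  # (this is what the buildspec at the top of the thread already does)
  - skaffold config set default-repo $REPOSITORY_URI
  # Option 2: pass it per invocation instead
  - skaffold run -f skaffold.yaml --default-repo=$REPOSITORY_URI
```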
What about the case where I want images to be pulled from both DockerHub and ECR?
I moved all the images to AWS ECR to avoid any issue with rate limiting. Now I am back to the original issue. When I ran skaffold run, it built all the images and this time generated a tag like "2f5688692f20a8e21953f2cc08b44f89266d527904d4ad937af5762c90e52505". But my Kubernetes manifests are still pointing to the latest image. Below are my current settings:
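The settings referenced above were not captured in the thread. One thing worth checking, sketched below with illustrative names: skaffold only rewrites an image reference in a manifest when it matches an artifact's `image` name from skaffold.yaml; a hardcoded `:latest` reference is never rewritten.

```yaml
# Hypothetical Deployment fragment: "emailservice" matches artifacts[].image
# in skaffold.yaml, so skaffold replaces it with the freshly built,
# digest-tagged ECR reference at deploy time.
spec:
  containers:
    - name: server
      image: emailservice
```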
I could make it work. Now it creates an image only for the service whose code was committed. The only thing left: on every commit, it redeploys all the Kubernetes manifests instead of rolling out only the one whose image was built. Please suggest!
I think Kubernetes should handle unchanged manifests. Even then, you could try adding the flag --add-skaffold-labels=false?
Thanks @gsquared94. Sometimes my Kubernetes manifests fail to run at the first attempt, then the container restarts as part of the deployment and restart policy. It takes longer than expected to come up; Skaffold fails in the meantime and reports the pipeline execution as failed. I added the parameter "statusCheckDeadlineSeconds: 900" in the deploy section to let it wait for 900 seconds, but it is not honored. Any clue?
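For reference, a sketch of where that field sits in a skaffold/v2beta17 config; the manifest path is a hypothetical placeholder:

```yaml
deploy:
  # Total time skaffold waits for deployments to stabilize before failing
  statusCheckDeadlineSeconds: 900
  kubectl:
    manifests:
      - kubernetes-manifests/*.yaml  # assumed path, adjust to your repo
```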
@nitinkansal1984 can you paste your current skaffold.yaml along with the output of running with the -v DEBUG flag?
This issue comes up only when my deployment on Kubernetes takes time or a container restarts within a pod (as part of the restart policy). When everything comes up fine on the first attempt, I don't see this issue. But statusCheckDeadlineSeconds is not being honored, that is for sure.
FYI, I am using the latest version of skaffold.
`statusCheckDeadlineSeconds` can be ignored for non-recoverable errors or deployment errors. Is it possible to share the logs?
```
- deployment/cartservice failed. Error: container server terminated with exit code 139.
- deployment/shippingservice is ready. [1/12 deployment(s) still pending]
- deployment/checkoutservice is ready.
6/12 deployment(s) failed

[Container] 2021/06/17 12:59:42 Command did not exit successfully skaffold run -f skaffold.yaml exit status 1
[Container] 2021/06/17 12:59:42 Phase complete: BUILD State: FAILED
[Container] 2021/06/17 12:59:42 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: skaffold run -f skaffold.yaml. Reason: exit status 1
[Container] 2021/06/17 12:59:42 Entering phase POST_BUILD
[Container] 2021/06/17 12:59:42 Running command skaffold build --file-output ./build.json
```
I am running skaffold in AWS CodePipeline to build a microservice.