reimgun opened 4 years ago
Hi @reimgun -
The docker-compose file that Anchore supplies by default is really meant as a quick-start, simple, single-host, single-analyzer config. For a more scaled-up deployment we'd recommend using the Helm chart and multiple hosts, so that there is enough host resource to support parallel load; the Helm chart also handles the configuration for running multiple services for you.
That being said, if you have a lot of resource on a single server and still want to run multiple analyzers on one host, then in order to add more analyzers successfully, each analyzer needs to register with the system using a unique hostname and 'host id'. In a nutshell, the quickest way to achieve this is to make the service name (in docker-compose) and the ANCHORE_ENDPOINT_HOSTNAME and ANCHORE_HOST_ID environment variables the same - e.g.
analyzer0:
  ...
  environment:
    - ANCHORE_ENDPOINT_HOSTNAME=analyzer0
    - ANCHORE_HOST_ID=analyzer0
  ...
...
analyzer1:
  ...
  environment:
    - ANCHORE_ENDPOINT_HOSTNAME=analyzer1
    - ANCHORE_HOST_ID=analyzer1
  ...
...
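Filled out a little more, a compose fragment for two analyzers might look like the following. This is a sketch only: the image tag, depends_on, and volume mount are assumptions modeled on the quickstart compose file, not details taken from this thread.

```yaml
# Hypothetical docker-compose fragment: two analyzer services, each
# registering with a unique hostname and host id. Only the service name
# and the two ANCHORE_* variables differ between the analyzers; the
# rest follows the quickstart compose conventions (assumed here).
  analyzer0:
    image: docker.io/anchore/anchore-engine:v0.7.3
    depends_on:
      - catalog
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer0
      - ANCHORE_HOST_ID=analyzer0
      - ANCHORE_DB_HOST=db
    volumes:
      - ./config.yaml:/config/config.yaml:z
  analyzer1:
    image: docker.io/anchore/anchore-engine:v0.7.3
    depends_on:
      - catalog
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer1
      - ANCHORE_HOST_ID=analyzer1
      - ANCHORE_DB_HOST=db
    volumes:
      - ./config.yaml:/config/config.yaml:z
```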
With a successful configuration, you will then see multiple analyzer services running in both the output of docker-compose and the anchore system status output:
% docker-compose ps | grep analyzer
aevolumedev_analyzer0_1 /docker-entrypoint.sh anch ... Up (health: starting) 8228/tcp
aevolumedev_analyzer1_1 /docker-entrypoint.sh anch ... Up (health: starting) 8228/tcp
% anchore-cli system status | grep analyzer
Service analyzer (analyzer0, http://analyzer0:8228): up
Service analyzer (analyzer1, http://analyzer1:8228): up
%
The telltale sign that this is working is to queue up a few images for analysis and confirm, after a few minutes, that more than one is in the 'analyzing' state in parallel:
% anchore-cli image add centos:7
...
% anchore-cli image add centos:8
...
% sleep 15
% anchore-cli image list | grep analyzing
docker.io/centos:7 sha256:c2f1d5a9c0a81350fa0ad7e1eee99e379d75fe53823d44b5469eb2eb6092c941 analyzing
docker.io/centos:8 sha256:fd84102fc72960dd1b8da0ee3b4c13e3b0c1d2a085de118bc4c97821cd986e02 analyzing
%
If everything is set up correctly, then you will be able to see as many unique digest images in 'analyzing' state as you have correctly configured analyzers.
Note again, that even with multiple analyzers, you will not see 'faster' analysis times by adding concurrency if your underlying server resources are constrained, as the analyzers will be competing for CPU, IO and memory resources - using the Helm chart and allocating multiple servers/nodes to your deployment is the recommended path.
Best -Dan
Docker compose with --scale should work; I've done that before with 8 analyzers and it worked just fine. Are you seeing the requests time out in Jenkins? Can you confirm that the jobs are all talking to the correct endpoint and that it has the images in the queue? What would help is the output of anchore-cli image list
on the engine install during the job execution, to show the status of the images in analysis. You should see an image in the 'analyzing' state for each analyzer you've scaled out.
Can you provide a bit more info from the Jenkins job logs and the anchore image listing outputs?
When this 404 error occurs, I find this error message in the events:
docker-compose exec api anchore-cli event get b36dfa5680d04c2dbecae4f450f42994
details:
  msg: 'Failed to pull image (central-registry.srv.allianz:10000/at_ubi8_mssql@sha256:e87c55009981387714cc60b30833d60c55f51fc58cb20d8f3745244cc0163a6c)
Before I start scanning, I always check that anchore is available with these two commands:
docker-compose exec -T api anchore-cli system status
docker-compose exec -T api anchore-cli system wait
The problem only occurs with --scale=4:
Three docker build jobs trigger one anchore job, and the last one wins. How can we avoid that?
We tried the configuration from nurmi and configured analyzer0 and analyzer1 in the docker-compose.yaml and in the config.yaml, but the analyzers always exit on start with this error message in the logs:
analyzer0_1 | [MainThread] [anchore_manager.cli.service/start()] [WARN] specified service analyzer0 not found in list of available services ['analyzer', 'simplequeue', 'apiext', 'catalog', 'policy_engine'] - removing from list of services to start
analyzer0_1 | [MainThread] [anchore_manager.cli.service/start()] [ERROR] No services found in ANCHORE_ENGINE_SERVICES or as enabled in config.yaml to start - exiting
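The warning above suggests that 'analyzer0' was passed to anchore-manager as the name of the service to start, but the service types are fixed (the log itself lists them: analyzer, simplequeue, apiext, catalog, policy_engine). Only the compose service name and the two environment variables should change; the service started must remain 'analyzer'. A hedged sketch, assuming the quickstart compose layout and that the service name was also used as the start argument:

```yaml
# Hypothetical fragment: the compose service is renamed analyzer0, but
# anchore-manager is still told to start the 'analyzer' service type.
# Passing "analyzer0" here instead is what produces the
# "specified service analyzer0 not found" warning.
  analyzer0:
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer0
      - ANCHORE_HOST_ID=analyzer0
    command: ["anchore-manager", "service", "start", "analyzer"]
```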
Can you also provide the details of how you are invoking the Anchore plugin in the Jenkins build? Jenkins logs for the job would be helpful to that end. The Anchore plugin expects to be invoked only once per build. You can scan multiple images by adding them to the file passed to the plugin. So build all the images (serially or in parallel) first, and then analyze them all using a single invocation of the Anchore plugin. Invoking the plugin multiple times will overwrite the output space, and the final report will only contain the results for the last image scanned.
OK, I have reconfigured it to remove the anchore scan from the separate image builds and use one job which scans all the images:
node ('anchore') {
  stage ('scan all images for CVE`s') {
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_admin:latest\" > /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_calc:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_dpo:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_mgmt:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_preinstallupgrade:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_schedulemanager:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_schedulemanager:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_settingsmanager:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_settingsmanager:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_smf:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_web:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_cmon:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_nginx:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_infinispan_10.0.1:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_prometheus_cloudwatch_exporter:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_tomcat_8.5:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_tomcat_9:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk8:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk8_infinispan_9.4.16:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk8_sso:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk8_tomcat_8.5:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_prometheus_blackbox_exporter:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    sh "echo \"central-registry.srv.allianz:10001/at_ubi8_minimal_radarlive_smfservermgmt:latest\" >> /var/lib/jenkins/workspace/anchore_scanner/anchore_images"
    build job: 'anchore_scanner'
  }
}
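As an aside, the long run of repeated sh "echo ..." steps above can be collapsed into a single step that writes the whole list at once. A sketch in plain shell (the workspace path is a stand-in for the Jenkins workspace used above, and only a small sample of the image list is shown):

```shell
# Write the image list for the Anchore plugin in one step instead of
# one echo per image. WORKSPACE is a stand-in for the Jenkins
# workspace path (an assumption for this sketch).
WORKSPACE="${WORKSPACE:-/tmp/anchore_scanner}"
mkdir -p "$WORKSPACE"

# A representative sample of the list, including a duplicate, since
# the job above lists schedulemanager and settingsmanager twice.
cat > "$WORKSPACE/anchore_images" <<'EOF'
central-registry.srv.allianz:10001/at_ubi8_admin:latest
central-registry.srv.allianz:10001/at_ubi8_minimal_nginx:latest
central-registry.srv.allianz:10001/at_ubi8_admin:latest
EOF

# Drop duplicate entries before handing the file to the plugin.
sort -u "$WORKSPACE/anchore_images" -o "$WORKSPACE/anchore_images"

# Print the deduplicated count of images to scan.
echo "images to scan: $(wc -l < "$WORKSPACE/anchore_images")"
```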
The output of the anchore plugin is:
2020-07-31T09:17:41.356 INFO AnchoreWorker Analysis request accepted, received image digest sha256:a12c552ec2cb016856265dfff341e79c28caa269c7dccb0e889e79b7389b880e
2020-07-31T09:17:41.356 INFO AnchoreWorker Submitting central-registry.srv.allianz:10001/at_ubi8_minimal_prometheus_blackbox_exporter:latest for analysis
2020-07-31T09:17:41.858 INFO AnchoreWorker Analysis request accepted, received image digest sha256:59eb72e0deaf432ff4bc0ed8c2b68a0892b09d350acb1b9268ee71b2640def91
2020-07-31T09:17:41.858 INFO AnchoreWorker Submitting central-registry.srv.allianz:10001/at_ubi8_minimal_radarlive_smfservermgmt:latest for analysis
2020-07-31T09:17:42.395 INFO AnchoreWorker Analysis request accepted, received image digest sha256:c2a5accd66550f1dcc88d78dd4f9b80bc8c22e9cfb782307c9e825c63037ed9a
2020-07-31T09:17:42.396 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_admin:latest, polling status periodically
2020-07-31T09:21:38.777 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:21:38.777 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_calc:latest, polling status periodically
2020-07-31T09:24:55.413 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:24:55.413 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_dpo:latest, polling status periodically
2020-07-31T09:27:34.818 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:27:34.818 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_mgmt:latest, polling status periodically
2020-07-31T09:30:57.294 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:30:57.294 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_preinstallupgrade:latest, polling status periodically
2020-07-31T09:47:51.284 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:47:51.284 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_schedulemanager:latest, polling status periodically
2020-07-31T09:47:51.881 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:47:51.881 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_settingsmanager:latest, polling status periodically
2020-07-31T09:47:52.628 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:47:52.629 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_smf:latest, polling status periodically
2020-07-31T09:50:55.921 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T09:50:55.921 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_dotnet_radarlive_web:latest, polling status periodically
2020-07-31T10:16:02.373 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T10:16:02.374 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_minimal_cmon:latest, polling status periodically
2020-07-31T10:16:02.957 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T10:16:02.957 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_minimal_nginx:latest, polling status periodically
2020-07-31T10:16:04.009 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T10:16:04.010 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11:latest, polling status periodically
2020-07-31T10:16:04.778 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T10:16:04.778 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_infinispan_10.0.1:latest, polling status periodically
2020-07-31T10:38:25.713 INFO AnchoreWorker Completed analysis and processed policy evaluation result
2020-07-31T10:38:25.714 INFO AnchoreWorker Waiting for analysis of central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7:latest, polling status periodically
2020-07-31T11:30:09.415 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://lx-rhel76.aeat.allianz.at:8228/v1/images/sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e/check?tag=central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7:latest&detail=true, status: 404, error: {
"detail": {
"error_codes": []
},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: analysis_failed"
}
2020-07-31T11:30:09.415 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e
And the anchore event shows this:
docker-compose exec api anchore-cli event get 708a115191f14574a517971cfa9adbfe
details:
msg: 'Failed to pull image (central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7@sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e)
- exception: Error encountered in skopeo operation. cmd=/bin/sh -c skopeo copy
--src-tls-verify=false docker://central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7@sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e
oci:/analysis_scratch/683d430c-2f62-4faf-a07e-ad0d65b170a6/raw:image, rc=1, stdout=b''Getting
image source signatures\nCopying blob sha256:600f7e2abab1c2bca3fd900b08daa16ed87f1b938affba76c446fd21da39e3d1\nCopying
blob sha256:6cd4d9d86398cbcec5a33b9af09b8d82bccd56162a7558346b7cd1cf65363f65\nCopying
blob sha256:5835cbeb85c26ed7e41a443f78b15101eec949c8fa52378e80401a7cbd3a2acb\nCopying
blob sha256:2b1f10a6e703445a46dc8e31b39edb4d152629fe4d65aecec7dac5029c8eaeb3\nCopying
blob sha256:a53f03c74205434876a80f03cef43e2bccae42aa6b77291ff47b0bdd52175181\nCopying
blob sha256:b96679cb9c2b1717be0648a1bb4038fbcb99b68108ddfa49f2a2ad5db41f26a8\nCopying
blob sha256:16f007c8cbedfe1048263ff34f138f1e19effb9e72d67691dd711ce34418b968\nCopying
blob sha256:b45b784eab6ac9b18147e927f54d172a1771c24008df241b0281a6b8fa6b32f1\nCopying
blob sha256:4b529b30301e8987acad0b7008760547275fd8231166ee910a55402171de40b3\nCopying
blob sha256:c27e6d7cb282483c920df4bb35d00d01e8b322e1146c3dde697b10359b096242\nCopying
blob sha256:e76577cd03c8474422a54d7f76ab3e6b40e2d04900e7c60bdcfacbecefa27334\nCopying
blob sha256:21f9408fb978f533373be6d2eee64e26dcb0e1eee75ac4cb81a2686dd9750c53\n'',
stderr=b''time="2020-07-31T08:31:47Z" level=fatal msg="Error writing blob: unexpected
EOF"\n'', error_code=SKOPEO_UNKNOWN_ERROR'
level: error
message: Failed to analyze image
resource:
id: central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7:latest
type: image_tag
user_id: admin
source:
base_url: http://analyzer:8228
hostid: anchore-quickstart
request_id: null
servicename: analyzer
timestamp: '2020-07-31T08:31:47.280276Z'
type: user.image.analysis.failed
Consolidating scanning into a single invocation of the Anchore plugin fixed the initial issue. The error in the logs above indicates anchore-engine couldn't even pull central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7:latest,
due to msg="Error writing blob: unexpected EOF".
That seems like something is off with the image - can you verify the image is valid?
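One way to check, assuming skopeo is available where the engine runs, is to inspect the image and then repeat the copy the analyzer performs. The flags mirror the failing command in the event above; this is a sketch for diagnosis, not a verified fix.

```shell
# Inspect the manifest first - if this fails, the registry side is broken.
skopeo inspect --tls-verify=false \
  docker://central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7@sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e

# Then repeat the copy the analyzer performs; an "unexpected EOF" here
# too would point at the registry or network rather than at anchore.
skopeo copy --src-tls-verify=false \
  docker://central-registry.srv.allianz:10001/at_ubi8_minimal_openjdk11_keycloak7@sha256:eb832d67834e0d18ece24d31eeb35d55cf0a71d55c57fbfcc7ef3b0779bc006e \
  oci:/tmp/keycloak7_check:image
```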
Is this a request for help?: yes
Is this a BUG REPORT or a FEATURE REQUEST? (choose one): BUG REPORT
Version of Anchore Engine and Anchore CLI if applicable:
docker-compose exec api anchore-cli system status
Service catalog (anchore-quickstart, http://catalog:8228): up
Service analyzer (anchore-quickstart, http://analyzer:8228): up
Service policy_engine (anchore-quickstart, http://policy-engine:8228): up
Service simplequeue (anchore-quickstart, http://queue:8228): up
Service apiext (anchore-quickstart, http://api:8228): up
Engine DB Version: 0.0.13
Engine Code Version: 0.7.3
anchore-cli, version 0.7.2
We have the following infrastructure: Anchore Engine v0.7.3 installed (docker-compose) on a separate Jenkins slave server, Red Hat 7.8, with docker-ce-19.03.5-3.el7.x86_64 installed.
docker-compose ps
Name                      Command                        State  Ports
aevolume_analyzer_1       /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_analyzer_2       /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_analyzer_3       /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_analyzer_4       /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_api_1            /docker-entrypoint.sh anch ... Up     0.0.0.0:8228->8228/tcp
aevolume_catalog_1        /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_db_1             docker-entrypoint.sh postgres  Up     5432/tcp
aevolume_policy-engine_1  /docker-entrypoint.sh anch ... Up     8228/tcp
aevolume_queue_1          /docker-entrypoint.sh anch ... Up     8228/tcp
What happened: I started 3 Jenkins build jobs to build docker images in parallel and scan them with anchore (only OS feeds, no nvdv2 feeds). This was the result: 3 jobs started 1 anchore job, and the last one wins.
The fail report shows that only at_ubi8_dotnet_radarlive_calc:latest was scanned.
What did you expect to happen: I expected that builds/anchore scans would work in parallel.
Any relevant log output from /var/log/anchore: there are no warnings or errors in the log files.
What docker images are you using: ubi8 -> https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image
How to reproduce the issue: run 3 or 4 Jenkins builds in parallel.
Anything else we need to know: