Elasticsearch storage backend for Grafeas.
An externally running Elasticsearch cluster must already be available. This repository contains a `docker-compose.yaml` file that can be used to run a single-node Elasticsearch cluster locally:

```sh
docker-compose up -d elasticsearch
```
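For reference, a single-node Elasticsearch service in a compose file looks roughly like the sketch below. The image version and settings shown here are illustrative and may not match this repository's actual `docker-compose.yaml`:

```yaml
# Illustrative single-node Elasticsearch service; see docker-compose.yaml in
# this repository for the authoritative definition.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      # Run a single node without trying to form or join a cluster.
      - discovery.type=single-node
    ports:
      - "9200:9200"
```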
You can run the Grafeas server by using one of our prebuilt Docker images:

```sh
docker run \
  -p 8080:8080 \
  -v "$(pwd)/local/docker-config.yaml":/etc/grafeas/config.yaml \
  ghcr.io/rode/grafeas-elasticsearch --config /etc/grafeas/config.yaml
```
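Once the container is running, a quick smoke test is to list projects. This assumes the server also exposes Grafeas's v1beta1 REST gateway on the same port; if it only serves gRPC, use a gRPC client instead. The endpoint below comes from the upstream Grafeas API, not from this repository:

```sh
# Hypothetical smoke test: an empty project list should come back as JSON.
curl http://localhost:8080/v1beta1/projects
```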
A configuration file must be provided, with its path specified via the `--config` flag:
```yaml
grafeas:
  api:
    address: "0.0.0.0:8080"
    cafile:
    keyfile:
    certfile:
    cors_allowed_origins:
  # Must be `elasticsearch`
  storage_type: elasticsearch
  elasticsearch:
    # URL to the external Elasticsearch cluster
    url: "http://elasticsearch:9200"
    # Basic auth credentials for the external Elasticsearch cluster
    username: "grafeas"
    password: "grafeas"
    # How Grafeas should interact with Elasticsearch index refreshes.
    # Recommend using `true`, unless unique circumstances require otherwise.
    # Options are `true`, `wait_for`, `false`.
    refresh: "true"
```
This backend is still a work in progress, so not all functionality has been implemented yet. The list below tracks the Grafeas API methods and filtering features targeted by this backend; some are already implemented and others are still outstanding:
- `CreateProject`
- `GetProject`
- `ListProjects`
- `DeleteProject`
- `CreateOccurrence`
- `BatchCreateOccurrences`
- `GetOccurrence`
- `ListOccurrences`
- `UpdateOccurrence`
- `DeleteOccurrence`
- `CreateNote`
- `BatchCreateNotes`
- `GetNote`
- `ListNotes`
- `UpdateNote`
- `DeleteNote`
- `GetOccurrenceNote`
- `ListNoteOccurrences`
- `GetVulnerabilityOccurrencesSummary`
- Filtering (for `List` methods)
  - `==` operator
  - `!=` operator
  - `&&` operator
  - `||` operator
  - `<` operator
  - `>` operator
  - `<=` operator
  - `>=` operator
  - Array access by index (ex: `vulnerability.details[0].cpeUri`)
  - Array access across all elements (ex: `vulnerability.details[*].cpeUri`), via the `nestedFilter` function
  - `.startsWith` function (ex: `"resource.uri".startsWith("gcr.io")`)
  - `.contains` function (ex: `"resource.uri".contains("alpine")`)
  - `.endsWith` function
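As an illustration of the filter syntax listed above, here is a rough sketch of calling `ListOccurrences` with a filter through the upstream Grafeas v1beta1 Go client. The server address and project name are assumptions carried over from the examples earlier in this README, not fixed values:

```go
package main

import (
	"context"
	"fmt"
	"log"

	grafeas "github.com/grafeas/grafeas/proto/v1beta1/grafeas_go_proto"
	"google.golang.org/grpc"
)

func main() {
	// Address taken from the docker run example above.
	conn, err := grpc.Dial("localhost:8080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := grafeas.NewGrafeasV1Beta1Client(conn)

	// List occurrences in a hypothetical "rode" project whose resource URI
	// starts with "gcr.io", using the filter syntax from the checklist above.
	resp, err := client.ListOccurrences(context.Background(), &grafeas.ListOccurrencesRequest{
		Parent: "projects/rode",
		Filter: `"resource.uri".startsWith("gcr.io")`,
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, occurrence := range resp.Occurrences {
		fmt.Println(occurrence.Name)
	}
}
```

The same kind of filter string can be passed to the other `List` methods as well.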
Shared run configurations for JetBrains IDEs are kept in the default `.run/` directory. These are automatically read and added to your local run configurations.
Unit tests use Ginkgo, and integration tests use the standard testing library. All tests use Gomega for assertions and matching, for consistency.

Unit tests live alongside production code in the `go/` directory. `make test` will run the unit tests, along with `vet` and `fmt`. A `go test unit` IDE run configuration is also available.
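For orientation, a unit test in this style is structured roughly as follows. The package, suite, and spec names are hypothetical and only illustrate the Ginkgo/Gomega layout, not actual code from `go/`:

```go
package storage_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// Suite bootstrap: hands control of the standard test runner to Ginkgo.
func TestStorage(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Storage Suite")
}

var _ = Describe("filtering", func() {
	It("matches values by prefix", func() {
		// Placeholder assertion -- real specs exercise the Elasticsearch storage code.
		Expect("gcr.io/rode/example").To(HavePrefix("gcr.io"))
	})
})
```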
`make mocks` will regenerate the test mocks in the `go/mocks` directory.
Integration tests are in the `test/` directory. These require Elasticsearch and a build of this project to be running. This is handled through `docker-compose`:
1. `docker-compose up -d --build elasticsearch server`
   - Omit `-d` if you want to watch logs.
   - Omit `--build` if you have already built the local images against the latest code. Skipping the build will significantly improve startup time.
2. `make integration`, or the `go test integration` IDE run configuration (a rough sketch of what these tests look like follows this list).
3. `docker-compose down`
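As referenced in step 2, an integration test combines the standard `testing` package with Gomega matchers. The sketch below only checks that the local Elasticsearch container is reachable; it assumes port 9200 is published and that security is disabled, which may not match the repository's actual compose file. The real tests in `test/` exercise the Grafeas API itself:

```go
package test

import (
	"net/http"
	"testing"

	"github.com/onsi/gomega"
)

// Checks that the Elasticsearch container started via docker-compose is
// reachable. Assumes port 9200 is published locally and security is disabled.
func TestElasticsearchIsReachable(t *testing.T) {
	g := gomega.NewWithT(t)

	res, err := http.Get("http://localhost:9200")
	g.Expect(err).NotTo(gomega.HaveOccurred())
	defer res.Body.Close()

	g.Expect(res.StatusCode).To(gomega.Equal(http.StatusOK))
}
```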