DependencyTrack / hyades

Incubating project for decoupling responsibilities from Dependency-Track's monolithic API server into separate, scalable services.
https://dependencytrack.github.io/hyades/latest
Apache License 2.0

Update CycloneDX Protobuf schema to v1.6 #1333

Closed nscuro closed 1 month ago

nscuro commented 2 months ago

Current Behavior

We currently use the CycloneDX Protobuf schema v1.4.

Proposed Behavior

Update the schema to v1.6: https://github.com/CycloneDX/specification/blob/master/schema/bom-1.6.proto

Note that v1.6 may not be backward-compatible with v1.4: up until v1.6, the schema was not checked for backward compatibility. We need to verify how this affects us. If Kafka records were produced with CycloneDX schema v1.4, can we still consume them with v1.6?
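One way to answer that empirically is a small round-trip test: serialize a BOM with the v1.4 bindings and parse the raw bytes with the v1.6 bindings. A minimal sketch, assuming the generated classes live in `org.cyclonedx.proto.v1_4` and `org.cyclonedx.proto.v1_6` (the actual packages depend on how the bindings are generated):

```java
import com.google.protobuf.InvalidProtocolBufferException;

class SchemaRoundTripCheck {

    public static void main(String[] args) throws InvalidProtocolBufferException {
        // Serialize with the old (v1.4) bindings.
        final byte[] v14Bytes = org.cyclonedx.proto.v1_4.Bom.newBuilder()
                .setSpecVersion("1.4")
                .build()
                .toByteArray();

        // Parse the same bytes with the new (v1.6) bindings. Incompatible
        // field number / type changes would surface here, either as an
        // InvalidProtocolBufferException or as silently wrong field values,
        // so a real test should also assert on individual fields.
        final org.cyclonedx.proto.v1_6.Bom bom =
                org.cyclonedx.proto.v1_6.Bom.parseFrom(v14Bytes);
        System.out.println(bom.getSpecVersion());
    }
}
```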

As of v1.6, the Protobuf schema is tested with buf: https://github.com/CycloneDX/specification/blob/master/.github/workflows/test_proto.yml

That should make future releases more stable and backward-compatible.

Checklist

nscuro commented 2 months ago

We will likely need a deserializer that:

- first attempts deserialization using the new (v1.6) schema,
- falls back to the old (v1.4) schema when that fails, and
- maps objects deserialized via the fallback to the new schema's types.

Here's a quick draft of how that might look:

FallbackKafkaDeserializer.java

```java
/*
 * This file is part of Dependency-Track.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * Copyright (c) OWASP Foundation. All Rights Reserved.
 */
package org.dependencytrack.event.kafka.serialization;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

import java.util.function.Function;

/**
 * A {@link Deserializer} that first tries a delegate deserializer and, if that
 * fails, falls back to a secondary deserializer whose result is mapped to the
 * delegate's target type.
 *
 * @param <T> Type produced by the delegate deserializer
 * @param <R> Type produced by the fallback deserializer
 */
public class FallbackKafkaDeserializer<T, R> implements Deserializer<T> {

    private final Deserializer<T> delegateDeserializer;
    private final Deserializer<R> fallbackDeserializer;
    private final Function<R, T> fallbackMapper;

    public FallbackKafkaDeserializer(
            final Deserializer<T> delegateDeserializer,
            final Deserializer<R> fallbackDeserializer,
            final Function<R, T> fallbackMapper
    ) {
        this.delegateDeserializer = delegateDeserializer;
        this.fallbackDeserializer = fallbackDeserializer;
        this.fallbackMapper = fallbackMapper;
    }

    @Override
    public T deserialize(final String topic, final byte[] data) {
        if (data == null) {
            return null;
        }

        // Try the delegate (new schema) first.
        final SerializationException originalException;
        try {
            return delegateDeserializer.deserialize(topic, data);
        } catch (SerializationException e) {
            originalException = e;
        }

        // Delegate failed; try the fallback (old schema).
        final R fallback;
        try {
            fallback = fallbackDeserializer.deserialize(topic, data);
        } catch (SerializationException e) {
            // Keep the delegate's failure around for debugging.
            e.addSuppressed(originalException);
            throw e;
        }

        // Map the fallback result to the delegate's target type.
        return fallbackMapper.apply(fallback);
    }
}
```

That enables consumers to start working with the new schema, even though they may still receive records serialized with the old one.
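For illustration, a consumer of BOM records could then be wired up roughly like the fragment below. The generated `Bom` class packages and the `mapV14ToV16` helper are assumptions for the sketch, not existing project code:

```java
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

// Deserializer has a single abstract method, so lambdas work here.
final Deserializer<org.cyclonedx.proto.v1_6.Bom> v16Deserializer = (topic, data) -> {
    try {
        return org.cyclonedx.proto.v1_6.Bom.parseFrom(data);
    } catch (com.google.protobuf.InvalidProtocolBufferException e) {
        throw new SerializationException(e);
    }
};
final Deserializer<org.cyclonedx.proto.v1_4.Bom> v14Deserializer = (topic, data) -> {
    try {
        return org.cyclonedx.proto.v1_4.Bom.parseFrom(data);
    } catch (com.google.protobuf.InvalidProtocolBufferException e) {
        throw new SerializationException(e);
    }
};

// New schema first, old schema as fallback; mapV14ToV16 is a hypothetical
// function converting v1.4 messages to their v1.6 representation.
final var bomDeserializer = new FallbackKafkaDeserializer<>(
        v16Deserializer, v14Deserializer, bomV14 -> mapV14ToV16(bomV14));
```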