Closed (jb-arpia closed this issue 2 weeks ago)
You're totally right. Having multiple named clients is currently not supported. If you want to tackle this one, that would be awesome.
Hmm, it sounds like a cool thing to contribute, but I am not sure where to even start, to be honest, as I have never contributed to Quarkus or any of its extensions. Any suggestions on how to approach this?
Side note for the security team: you may not need access keys in the first place to consume a queue; you could use an IAM identity instead. Unfortunately, I don't know exactly how to do it, but it should be possible to tweak the credential provider with an AwsCredentialsProvider instance exposed as a bean:
```properties
quarkus.sqs.aws.credentials.type=custom
quarkus.sqs.aws.credentials.custom-provider.name=mybean
```

```java
@Named("mybean")
@Produces
@ApplicationScoped
public AwsCredentialsProvider credentialProvider() {
    // whatever is required to authenticate with the IAM identity, for example:
    return DefaultCredentialsProvider.create();
}
```
That being said, I first thought it should be easy to produce a client for an injection point annotated with a name attribute like @AwsClient("myclient"). But it is not that easy. I'll try to give you some context on the internals of the extension. You may read these two tutorials first:
After reading them, make sure you understand the difference between a runtime module on the one hand and a deployment module, with its build step processors and recorders, on the other. If you are at ease with CDI mechanisms, it will then be a piece of cake to navigate the repository (at least, what follows should make more sense).
The very basis of the project is to produce a bean for each kind of AWS SDK client: one extension per AWS client. Each extension duplicates the same pattern, where we strongly type the client type to be used in abstract classes and processors of the common module. Concretely, a default bean is produced by a producer in the runtime module of each client extension. This producer is injected with an AWS client builder instance (both sync and async builders) of the type of the client it should produce. The AWS client builder instance beans are generated programmatically (aka synthetic beans) when an injection point for a client is discovered. The builder instance is fully set up with its transport layer and endpoint/credentials config.
To summarize: we scan the CDI context at build time and discover the injection points of a client. If one is found, we produce a bean for the client builder. At runtime, the producer in the runtime module can then produce the client bean to be injected into the injection point.
To implement the requested feature, I think the general idea would be to generate named beans based on discovered injection points decorated with a new named-like qualifier AwsClient, much like DataSource.
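To make the idea concrete, here is a plain-Java sketch of what such a qualifier could look like, modeled on agroal's @DataSource. The annotation name and everything else here are assumptions about the eventual design; the real version would also be meta-annotated with @jakarta.inject.Qualifier, which is omitted so the sketch compiles with the JDK alone.

```java
import java.lang.annotation.*;

// Hypothetical @AwsClient qualifier, modeled on agroal's @DataSource.
// The real one would additionally carry @jakarta.inject.Qualifier.
@Target({ ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@interface AwsClient {
    String value();
}

class NamedClientHolder {
    // with CDI present this would read: @Inject @AwsClient("myclient") SqsClient client;
    @AwsClient("myclient")
    Object myClient;
}
```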
Let's take Sqs as an example.
In the runtime module, you will find Config classes. The one of interest for this feature is SqsConfig. It contains two properties, sdk and aws:
https://github.com/quarkiverse/quarkus-amazon-services/blob/4ecc71767857d476767ffd36f0fac98c1b2b7de1/sqs/runtime/src/main/java/io/quarkus/amazon/sqs/runtime/SqsConfig.java#L16-L27
Both allow configuring the endpoint and credentials, which I think is enough for our needs. The other properties configure how the underlying transport layer (netty/apache/url/crt) behaves, and I don't think it is desirable to have different settings there per client.
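For reference, today's single-client configuration looks roughly like this (property names taken from the extension's docs; the values are placeholders):

```properties
# endpoint override (from the sdk part of the config)
quarkus.sqs.endpoint-override=http://localhost:4566
# region and credentials (from the aws part of the config)
quarkus.sqs.aws.region=us-east-1
quarkus.sqs.aws.credentials.type=static
quarkus.sqs.aws.credentials.static-provider.access-key-id=test-key
quarkus.sqs.aws.credentials.static-provider.secret-access-key=test-secret
```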
Looking at how Quarkus DataSource implements the config part https://github.com/quarkusio/quarkus/blob/32b2b08f4a5b16bb4df7333fd3137334c33091e8/extensions/datasource/runtime/src/main/java/io/quarkus/datasource/runtime/DataSourcesRuntimeConfig.java#L25
we can move these two properties into a containing interface and replace them in SqsConfig with a Map of String to this new interface.
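With such a map, the configuration could end up looking like the datasource-style named entries below. This is purely a guess at the final shape; the quoted-key syntax mirrors quarkus.datasource."name".* and the "orders" client is a made-up example:

```properties
# default (unnamed) client
quarkus.sqs.aws.credentials.type=default
# hypothetical named client, one per queue with its own access key
quarkus.sqs."orders".aws.credentials.type=static
quarkus.sqs."orders".aws.credentials.static-provider.access-key-id=orders-key
quarkus.sqs."orders".aws.credentials.static-provider.secret-access-key=orders-secret
```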
Now the harder part.
Client injection points are discovered in a build step https://github.com/quarkiverse/quarkus-amazon-services/blob/4ecc71767857d476767ffd36f0fac98c1b2b7de1/common/deployment/src/main/java/io/quarkus/amazon/common/deployment/AbstractAmazonServiceProcessor.java#L60
This step produces a RequireAmazonClientBuildItem for each discovered injection point that matches the extension. This item is a signal for the rest of the common module that a particular client is required and that we should start building and registering the client builder bean.
We may use it to require both default and named clients if we add a name property to it.
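A minimal plain-Java sketch of that idea, with String stand-ins for the real Jandex/Quarkus types (the field and constant names are assumptions, not the extension's actual API):

```java
// Stand-in for RequireAmazonClientBuildItem with an added client name.
// The real item holds a Jandex DotName and extends MultiBuildItem;
// plain Strings are used so the sketch runs with the JDK alone.
record RequireAmazonClient(String clientClass, String clientName) {
    static final String DEFAULT_NAME = "<default>";

    boolean isDefault() {
        return DEFAULT_NAME.equals(clientName);
    }
}
```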
To find out whether an injection point requires a named client, we can check for a qualifier, like what is done in the processing of the S3Crt alternative.
Note the pattern of matching syncClientName() or asyncClientName(). You will find it a lot. This is because all extensions inherit from this base class, so if you include two extensions (say sqs and s3), the logic will run twice: once for sqs, once for s3.
Now, we will track the use of RequireAmazonClientBuildItem. In setupClient, the RequireAmazonClientBuildItem is exchanged for an AmazonClientBuildItem without much logic.
https://github.com/quarkiverse/quarkus-amazon-services/blob/4ecc71767857d476767ffd36f0fac98c1b2b7de1/common/deployment/src/main/java/io/quarkus/amazon/common/deployment/AbstractAmazonServiceProcessor.java#L109
The AmazonClientBuildItem is then consumed by a bunch of createXXXTransportBuilder steps, which build the transport builder instance for the matching configured transport (apache/netty/crt/url-connection). Only one of them will produce an AmazonClientSyncTransportBuildItem that contains a wrapped instance of the builder for a client type.
Then, finally, this AmazonClientSyncTransportBuildItem is consumed in createClientBuilders, and the wrapped instance of the builder serves to create the synthetic bean instance.
https://github.com/quarkiverse/quarkus-amazon-services/blob/4ecc71767857d476767ffd36f0fac98c1b2b7de1/common/deployment/src/main/java/io/quarkus/amazon/common/deployment/AbstractAmazonServiceProcessor.java#L354-L368
Note that this whole path handles both sync and async injection points.
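The whole exchange described above can be summarized as follows (pseudocode; step names are the ones from the links above):

```
RequireAmazonClientBuildItem                      -- produced per discovered injection point
  -> AmazonClientBuildItem                        -- setupClient, no real logic
  -> AmazonClient(Sync|Async)TransportBuildItem   -- one matching createXXXTransportBuilder
  -> synthetic client builder bean                -- createClientBuilders
```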
As we saw, we need the transport builder instance not only for the default client bean but now for all the beans. So we can no longer assume that a single bean should be produced from AmazonClientSyncTransportBuildItem.
If possible, we can inject the RequireAmazonClientBuildItem items into createClientBuilders; we will then have to iterate over them and produce a synthetic bean for each, with the appropriate qualifiers and the appropriate configuration.
You can take inspiration from agroal for applying the qualifiers.
For the configuration, it is actually retrieved with a recorder. The recorder can now be passed the client name and look the configuration up in the map.
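A rough model of that recorder-side lookup, using placeholder types (none of these names are the extension's real API; the reserved default key is an assumption):

```java
import java.util.Map;

// Sketch: the recorder receives the client name and resolves its entry
// from the map-shaped config; the unnamed/default client sits under a
// reserved key. All names here are illustrative placeholders.
class ClientConfigResolver {
    static final String DEFAULT_KEY = "<default>";

    record ClientConfig(String credentialsType) {}

    static ClientConfig configFor(Map<String, ClientConfig> configs, String clientName) {
        String key = (clientName == null || clientName.isEmpty()) ? DEFAULT_KEY : clientName;
        ClientConfig cfg = configs.get(key);
        if (cfg == null) {
            throw new IllegalStateException("No configuration found for client '" + key + "'");
        }
        return cfg;
    }
}
```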
This is a long journey, and we are not finished yet. We now have a collection of XXXClientBuilder beans ready to be injected into some producers. If you want to take a break here, you should be able to create a producer in a test project that requires a named instance of the client builder and creates a client from it. Something like this:
```java
@ApplicationScoped
public class SqsClientProducer {

    private final SqsClient syncClient;

    SqsClientProducer(@Named("test") Instance<SqsClientBuilder> syncClientBuilderInstance) {
        this.syncClient = syncClientBuilderInstance.isResolvable() ? syncClientBuilderInstance.get().build() : null;
    }

    @Named("test")
    @Produces
    @ApplicationScoped
    public SqsClient client() {
        if (syncClient == null) {
            throw new IllegalStateException("The SqsClient is required but has not been detected/configured.");
        }
        return syncClient;
    }

    @PreDestroy
    public void destroy() {
        if (syncClient != null) {
            syncClient.close();
        }
    }
}
```
and inject the client bean into your class:

```java
@Inject
@Named("test")
SqsClient sync;
```
The last part, if all goes well, is to produce this bean programmatically. This should be doable with a synthetic bean and a recorder injected with the builder instance, more or less like what is done for the S3Crt alternative (which does not require a producer in the runtime module of its extension).
And voilà, if you are not lost in the wild by now. What you can try first is to refactor the code without supporting multiple clients yet. Start by replacing the producer with a synthetic bean (the very last part). This is mandatory for everything else to work, so if you run into trouble here, nothing else will.
Then, try to update the createBuilder method so it takes both a RequireAmazonClientBuildItem and an AmazonClientSyncTransportBuildItem and produces synthetic beans from them. You should now have the client builder AND the client bean from the previous refactoring produced programmatically, without a producer.
The next step is to introduce the config map of clients and the named attribute. This is potentially the longest part to write, but it should be the easiest in terms of code complexity.
As a bonus, we will have to support Dev Services for named clients.
Let me know how things are going, I have limited time currently so things will go slowly from my side.
I opened a PR to keep track of the progress.
@jb-arpia This was more difficult than I expected. The good news is that I finally achieved a working PR. You can test it. I plan to merge it next week.
@jb-arpia Version 3.0.0.alpha1 is out. Could you have a look? Docs: https://docs.quarkiverse.io/quarkus-amazon-services/dev/common-features.html#_named_clients
Hi, opening a ticket as I didn't find any mention of multiple clients anywhere in the docs (maybe I'm blind).
I was using this extension to inject SqsClient in my Quarkus production app, which uses around 10 different SQS queues.
Recently, people from security came up with a new requirement where basically each queue will need its own dedicated access key and secret to be accessed, and this was not up for debate :)
As far as I am aware, this extension can only manage a single SqsClient. In Quarkus, it is common for extensions to allow you to configure a global client but also multiple other named clients. This can be observed in extensions such as OIDC, OIDC Client, Datasource, etc.
If this was in place, I'd easily be able to comply with this new requirement, but right now I'm having to completely move away from the extension and go back to managing the SDK/Clients myself.
Am I missing something, or is this the current scenario?