edeandrea opened 3 years ago
/cc @stuartwdouglas
Ideally the port configuration could come from the rest client configuration
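For context, the rest client's target URL (and therefore its port) is already configuration-driven in Quarkus, so Dev Services could in principle inject these values. A sketch with assumed config-key names (`hero-client` / `villain-client` are hypothetical):

```properties
# application.properties of the Fight service (illustrative names).
# The URL each rest client points at comes from config, so a Dev Service
# that started the dependent services could override these values.
quarkus.rest-client.hero-client.url=http://localhost:8081
quarkus.rest-client.villain-client.url=http://localhost:8082
```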
True, if that's how the developer writes the code :)
I have actually been talking to @cescoffier about this for a while, it's just that we have not had time to act on it
Running mvn quarkus:remote-dev in a project should automatically connect to a running application, e.g. if I was working on 'Hero' and then realized I needed to change 'Villain', I could run remote-dev in the villain project and it would auto-discover and connect. This idea intersects somewhat with https://github.com/quarkusio/quarkus/issues/17861. We really need to figure out a great developer experience for this super common scenario.
@edeandrea Your system looks quite similar to the one from the Quarkus workshop :-) I'm going to update it to Quarkus 2.2.x next week and use dev services.
@cescoffier It was taken from that as an example. Always easier to try to explain something when everyone has a visual :)
@stuartwdouglas for a full experience though I'm not sure we could assume all services are written using Quarkus?
Is there any progress on this? This story describes exactly what I'm looking for.
We have a project with about 25 (micro) services. Most services have downstream rest dependencies (like flight in the diagram) and during development we don't want to use real instances of those. Our current workaround is to use mock-server through QuarkusTestResourceLifecycleManager and mock the downstream rest services on a test-by-test basis.
The proposed solution would work for unit tests, jar-target integration tests, container-target integration tests and dev mode, while our current setup only works for unit tests and jar-target integration tests. It is already a PITA to get our current setup working for container-target integration tests because of Docker networking, and there is no way to use it during dev mode.
When this is available our developers can focus on the definition of the functional interface of the mock instead of the technical parts to provide the mocking.
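The workaround described above can be sketched roughly as follows. To keep the sketch dependency-free, the mock is the JDK's built-in `com.sun.net.httpserver.HttpServer` rather than mock-server, and the lifecycle interface is declared locally as a stand-in for `io.quarkus.test.common.QuarkusTestResourceLifecycleManager`; the `flight-client` config key is an assumed name.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Local stand-in for io.quarkus.test.common.QuarkusTestResourceLifecycleManager,
// declared here only so this sketch compiles without the Quarkus test jar.
interface LifecycleManager {
    Map<String, String> start(); // returned entries become config overrides
    void stop();
}

// Mocks a downstream REST dependency (e.g. the "flight" service) for tests.
class MockFlightService implements LifecycleManager {
    private HttpServer server;

    @Override
    public Map<String, String> start() {
        try {
            server = HttpServer.create(new InetSocketAddress(0), 0); // random free port
            server.createContext("/flights", exchange -> {
                byte[] body = "[{\"id\":1,\"destination\":\"AMS\"}]"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            // Point the rest client at the mock ("flight-client" is an assumed key).
            return Map.of("quarkus.rest-client.flight-client.url",
                    "http://localhost:" + server.getAddress().getPort());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void stop() {
        if (server != null) {
            server.stop(0);
        }
    }
}
```

The pain point in the thread is exactly that this pattern only runs inside the test lifecycle, not in dev mode.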
Thanks for the input @eijckron.
This is absolutely something we want to get right, since the scenario you outlined is indeed very common. #21196 also outlines an alternative idea to this.
@eijckron how does your team deal with keeping the "interface" exposed by the mock in sync with the actual implementation of the downstream services?
@geoand That is currently a manual process.
For services developed in the same area (15-25 developers) problems are rare because we have feature based development teams that change multiple services in a single user story. For dependencies between areas this causes issues now and then when epics contain user stories picked up by multiple teams in different areas.
We release all service updates in a 2-week cycle and have a last line of defence with project-wide automated acceptance tests that interact through our front-end applications with the services. We have been struggling to get these acceptance tests stable, though. Too many moving parts to make them run reliably.
We have looked at Pact in the past with the intention to move to a fully automated pipeline that can test and release individual services. We spent a couple of team sprints on proofs of concept but never got it to the required reliability to ditch the acceptance tests.
Thanks for the info @eijckron
Here's a good article stating why running dependent services locally IS NOT a good idea.
https://eng.lyft.com/scaling-productivity-on-microservices-at-lyft-part-1-a2f5d9a77813
I've run into this myself. Developing within staging environments is pretty important. Sometimes there are third-party integrations that can only point to one testbed IP address. Dev teams were notorious for not maintaining initialization scripts for clones. And there were often dev scenarios where you need to work with a lot of data that would not be possible to create within an environment clone (think a history of billing transactions). Then there's the obvious simple case that your laptop just cannot handle the load. I just feel that this idea of Dev Services booting all dependencies would quickly become unusable.
I think a better focus might be to implement, in a standard easy-to-use way, all the routing Lyft added. To be able to get to the point where you could have a development service running locally on a laptop and it could join a staging environment. Then you could log in to the UI of your app that is running in staging and all requests would run through their regular paths except the service you would be working on.
Much of this would be non-Quarkus, i.e. like writing or improving or configuring a proxy. The Quarkus parts would be making sure that context propagation happens, i.e. modifying client libraries so that they pick up and propagate any routing headers.
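The header-propagation part could be sketched with plain JDK types; everything here is an assumption (the header name, the ThreadLocal carrier), and a real implementation would hook into the rest client's filters and context propagation instead:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Optional;

// Carries the routing override for the current request. In a real service this
// would be populated by a server-side filter from the incoming request and
// propagated via context propagation rather than a bare ThreadLocal.
final class RoutingContext {
    static final String HEADER = "x-routing-override"; // assumed header name
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void set(String target) { CURRENT.set(target); }
    static Optional<String> get() { return Optional.ofNullable(CURRENT.get()); }
    static void clear() { CURRENT.remove(); }
}

// Builds an outgoing request, forwarding the routing header when present, so a
// request that entered via a developer's laptop keeps being routed back there.
final class RoutingAwareRequests {
    static HttpRequest build(String url) {
        HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(url));
        RoutingContext.get().ifPresent(t -> builder.header(RoutingContext.HEADER, t));
        return builder.build();
    }
}
```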
@patriot1burke We don't want to run all the dependent services locally, we want to run mocks for the direct dependencies locally.
The article is interesting though. I'm going to investigate how we can run the component under development locally using mvn quarkus:dev and link it to a more complete landscape running on a staging or test environment. Downstream components are relatively easy; upstream components are a bit more complicated, I think.
Hi everyone, I've been redirected to this issue from #21196. In my case, I started by mocking an external API for integration tests following: Using a Mock HTTP Server for tests.
I then wanted to use this same mocking code in dev mode. So my use case is a bit different as I'm not using containers. Here QuarkusTestResourceLifecycleManager is taking care of starting and stopping a MuServer for me.
Is that something that would be considered as well? That is, not only container-based services but also any class implementing the QuarkusTestResourceLifecycleManager interface.
Thanks in advance!
Not sure, but it's unlikely that QuarkusTestResourceLifecycleManager will be handled.
Just to throw my hat in the ring: the product I work on supports queries across a variety of databases. In tests, that's no problem because I have a QuarkusTestResourceLifecycleManager for each database type that spins up a Testcontainer. But if I want to hand-test the app and interact with it outside of tests, then I can't re-use those resources and have them spun up and started as part of quarkusDev (AFAIK).
It'd be really useful to have the ability to write your own Dev Service definitions.
So there are a couple of issues with QuarkusTestResourceLifecycleManager:
- Dev Services are started once at launch; these test resources are started and stopped after each run.
- This is because the classes need to be reloaded after each run, so the resource that runs each time is actually a different Class in a new ClassLoader.
- Even if we ignored this, the inject method is problematic and would cause problems for long-lived managers.

That said, we could still do something very similar:
- Create a QuarkusPersistentResourceLifecycleManager
- Give it the same start and stop methods, but no way to directly interact with other application classes.
- Load it in a special ClassLoader to ensure it is isolated
- Launch it at the start of dev mode/when continuous testing is started, but then have it persist for the life of the app. For simplicity we can say that hot reload is not allowed.
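A rough Java sketch of what the proposed contract could look like; the interface name comes from the proposal above, everything else (method shapes, the example implementation, the config key) is an assumption:

```java
import java.util.Map;

// Hypothetical contract sketched from the proposal above: the same start/stop
// shape as QuarkusTestResourceLifecycleManager, but no inject() and no access
// to application classes, so it can live in an isolated ClassLoader for the
// whole lifetime of dev mode / continuous testing.
interface QuarkusPersistentResourceLifecycleManager {
    /**
     * Started once when dev mode or continuous testing launches. Returned
     * entries become config overrides (e.g. URLs of mocked services).
     */
    Map<String, String> start();

    /** Stopped once, at application shutdown (no per-reload restarts). */
    void stop();
}

// Example implementation: pretends to start a mock downstream service.
class VillainMockManager implements QuarkusPersistentResourceLifecycleManager {
    @Override
    public Map<String, String> start() {
        // A real implementation would start a container or embedded server here.
        return Map.of("quarkus.rest-client.villain-client.url", "http://localhost:8082");
    }

    @Override
    public void stop() {
        // Tear down the container / server here.
    }
}
```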
Where would QuarkusPersistentResourceLifecycleManager be implemented so that it's not considered production code? Dev Services implementations live in deployment modules, but that will not be the case for user-defined code, so we need to figure out how to exclude such dev code. Especially for native compilation, the code should be omitted.
BTW, I had a look at your generic-devservices and it's a great idea. Would it be possible to mount a local path inside the container in dev mode and a classpath resource in test mode? The use case would be to start a WireMock Docker container with mappings config. In dev mode, we could still interact with the /__admin endpoint to further configure it if required.
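For reference, the kind of setup being described could look like the following; the mount path on the host and the host port are illustrative:

```shell
# Run WireMock with stub mappings mounted from the project (illustrative paths);
# the /__admin API then remains available for tweaking stubs in dev mode.
docker run --rm -p 8089:8080 \
  -v "$(pwd)/src/main/wiremock/mappings:/home/wiremock/mappings" \
  wiremock/wiremock:latest

# Add or adjust a stub at runtime through the admin endpoint:
curl -X POST http://localhost:8089/__admin/mappings \
  -d '{"request":{"method":"GET","urlPath":"/heroes"},"response":{"status":200}}'
```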
Note that technically even for dev mode in theory we could use a lifecycle manager in the test module, we have all the infrastructure to do it because of continuous testing.
It does feel a bit hacky, but it would solve the issue of not wanting this stuff in production code.
Description
Say I have a system that looks like this:
Maybe the Hero & Villain services have a source repo somewhere or a container image published somewhere and I now want to work on the Fight service. It would be really slick if somehow I could configure Quarkus Dev Services to run those dependent services as part of running my service.
Implementation ideas
Not really implementation ideas, but things to think about.
- Each of the dependent services would presumably want to run on its default port (8080), so they'd have to be mapped & configured somehow. Each service URL would have to have a matching property or something that the developer working on the Fight service would use for its URL (i.e. as described in the rest client guide).
- This does start to get into some really tall weeds though. Working on the Fight app, I'd have to know about all of its downstream dependencies, and all of their dependencies (recursively), and have to express them.
Maybe if the downstream dependencies contained a docker-compose or something to "isolate" all of their dependencies? Again, just spitballing/thinking out loud.
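To make the port-mapping concern concrete, a hypothetical sketch of what the developer-facing config might end up looking like; none of the dev-service keys below exist today, and the `${...}` placeholders are invented to illustrate the mapping problem, not real Quarkus properties:

```properties
# Hypothetical Dev Services config for dependent services (these keys do not
# exist; they only illustrate the mapping problem described above).
quarkus.devservices.dependent.hero.image=quay.io/example/hero-service:latest
quarkus.devservices.dependent.villain.image=quay.io/example/villain-service:latest

# Each dependent service listens on 8080 inside its container, so Dev Services
# would map them to random host ports and inject the resolved URLs into the
# (real) rest-client config keys:
quarkus.rest-client.hero-client.url=${hero.devservice.url}
quarkus.rest-client.villain-client.url=${villain.devservice.url}
```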