micronaut-projects / micronaut-grpc

Integration between Micronaut and GRPC
Apache License 2.0

Micrometer metrics #80

Closed racevedoo closed 2 years ago

racevedoo commented 4 years ago

Are there any built-in metrics? Shouldn't micronaut-grpc integrate with Micrometer?

graemerocher commented 4 years ago

There is no official integration https://github.com/micrometer-metrics/micrometer/issues/656

So we are waiting for that to be solved

mr-tim commented 4 years ago

Metrics for gRPC endpoints aside: is there a way to get the endpoints that expose metrics (e.g. MetricsEndpoint, PrometheusEndpoint) up and running (perhaps on another port) when using micronaut-grpc? Currently I get an HTTP/2 error back if I try to access the endpoints on the port configured for gRPC:

io.netty.handler.codec.http2.Http2Exception: Unexpected HTTP/1.x request: GET /prometheus 
    at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
    at io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.readClientPrefaceString(Http2ConnectionHandler.java:302)
    at io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:239)
    at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:498)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:437)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:834)
graemerocher commented 4 years ago

Maybe in Micronaut 2.0 M1 (just released) with HTTP/2 support enabled.

mr-tim commented 4 years ago

Thanks, will give that a try. I actually just managed to fix this by adding the appropriate io.micronaut:micronaut-http dependency; this results in gRPC running on 50051 alongside HTTP on 8080.

graemerocher commented 4 years ago

Nice

nichuHere commented 4 years ago

Thanks, will give that a try - I actually have just managed to fix this by adding in the appropriate io.micronaut:micronaut-http dependency - this results in grpc running on 50051 alongside http on 8080.

@mr-tim @graemerocher Can you explain this further? I also need to send gRPC metrics to Prometheus.

gaetancollaud commented 4 years ago

@mr-tim I have the same question as @nichuHere. How did you manage to do that?

I've tried this, but none of the ports are open :(

micronaut:
  server:
    port: 8088

endpoints:
  all:
    port: 8085
nichuHere commented 4 years ago

@gaetancollaud are you trying to get the gRPC server running on a specific port? In that case, you need:

grpc:
  server:
    port: 8088
    keep-alive-time: 3h
gaetancollaud commented 4 years ago

@nichuHere No, I'm trying to make the HTTP management port work. Since I use Kubernetes, I would like to have liveness and readiness probes.

I don't actually use the gRPC server. This app is just a client.
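For this kind of setup, a minimal sketch of the Kubernetes side might look as follows. It assumes micronaut-management is on the classpath (which exposes the /health endpoint) and that the management port is 8085, as in the earlier config; both are assumptions, not something confirmed in this thread:

```yaml
# Sketch: point Kubernetes probes at Micronaut's /health endpoint
# (provided by micronaut-management). Port 8085 is assumed from the
# endpoints.all.port config shown above.
livenessProbe:
  httpGet:
    path: /health
    port: 8085
readinessProbe:
  httpGet:
    path: /health
    port: 8085
```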

mr-tim commented 4 years ago

For me it was just a case of making sure that I had a dependency on the io.micronaut:micronaut-http jar in my application. Once I had that, Micronaut started up with the HTTP port open (you should be able to see this in the logs at startup).
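A sketch of what that looks like in a Gradle build, using the coordinates named in this thread (the runtime server implementation line is an assumption based on the later comments, which note that an HTTP/1.1-capable server is also needed):

```groovy
// Gradle sketch: add the Micronaut HTTP module alongside micronaut-grpc
// so the HTTP port (default 8080) is opened next to the gRPC port (50051).
dependencies {
    implementation("io.micronaut:micronaut-http")
    // Likely also required, per later comments in this thread:
    runtimeOnly("io.micronaut:micronaut-http-server-netty")
}
```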

HurmuzacheCiprian commented 3 years ago

There is no official integration micrometer-metrics/micrometer#656

So we are waiting for that to be solved

Looks like this is solved now

graemerocher commented 3 years ago

Once the new release is out we can include metrics

oehme commented 3 years ago

Micrometer 1.7.0 is out. Note that the client interceptor currently has no indication of which channel a metric came from. So if you use the same interface over multiple channels, you won't be able to distinguish which hosts are slow.

noam-alchemy commented 3 years ago

Hey, noticing that micrometer shipped this over a year ago. Any updates on this?

graemerocher commented 3 years ago

@burtbeckwith please take a look at this. Thanks

graemerocher commented 2 years ago

This was integrated with https://github.com/micronaut-projects/micronaut-micrometer/pull/321

pumano commented 2 years ago

I found out how to run micronaut-grpc with micronaut-management on different ports:

grpc:
  server:
    port: 8080

micronaut:
  server:
    port: 8333
  application:
    name: ms-example
  metrics:
    enabled: true
    export:
      prometheus:
        enabled: true
        step: PT1M
        descriptions: true

endpoints:
  prometheus:
    sensitive: false

Just add io.micronaut:micronaut-http-server-netty as a dependency and set micronaut.server.port as shown above. You need a web server to serve metrics, but micronaut-grpc-runtime uses a different web server (grpc-netty), which does not support HTTP/1.1 - and HTTP/1.1 is needed for scraping Prometheus metrics.

List of dependencies needed for Prometheus:

// monitoring
runtimeOnly("io.micronaut:micronaut-http-server-netty") // http server for metrics
implementation("io.micronaut:micronaut-management") // expose metrics via endpoint
implementation("io.micronaut.micrometer:micronaut-micrometer-core") // micrometer core
implementation("io.micronaut.micrometer:micronaut-micrometer-registry-prometheus") // prometheus registry

I can also confirm that micronaut-micrometer contains gRPC interceptors that intercept requests and provide metrics out of the box.

For example:

# HELP grpc_server_processing_duration_seconds The total time taken for the server to complete the call
# TYPE grpc_server_processing_duration_seconds summary
grpc_server_processing_duration_seconds_count{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="OK",} 0.0
grpc_server_processing_duration_seconds_sum{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="OK",} 0.0
grpc_server_processing_duration_seconds_count{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="INVALID_ARGUMENT",} 2.0
grpc_server_processing_duration_seconds_sum{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="INVALID_ARGUMENT",} 0.074629084
# HELP grpc_server_processing_duration_seconds_max The total time taken for the server to complete the call
# TYPE grpc_server_processing_duration_seconds_max gauge
grpc_server_processing_duration_seconds_max{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="OK",} 0.0
grpc_server_processing_duration_seconds_max{method="GetProjects",methodType="UNARY",service="com.projects.v7.ProjectsApi",statusCode="INVALID_ARGUMENT",} 0.067716709

These metrics are the same as those produced by grpc-spring-boot-starter.
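To make the metric shape above concrete, here is a small sketch of how a timer with that name and those tags behaves in plain Micrometer. This is an illustration of the Timer API only, not the interceptor's actual code; the metric name and tags mirror the Prometheus output above, and the recorded durations are made up:

```java
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.util.concurrent.TimeUnit;

public class GrpcMetricShape {
    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();

        // Same name/tag shape as grpc_server_processing_duration_seconds above.
        Timer timer = Timer.builder("grpc.server.processing.duration")
                .description("The total time taken for the server to complete the call")
                .tag("method", "GetProjects")
                .tag("methodType", "UNARY")
                .tag("service", "com.projects.v7.ProjectsApi")
                .tag("statusCode", "INVALID_ARGUMENT")
                .register(registry);

        // Two calls recorded -> _count becomes 2, _sum becomes their total.
        timer.record(37, TimeUnit.MILLISECONDS);
        timer.record(38, TimeUnit.MILLISECONDS);

        System.out.println(timer.count());                          // 2
        System.out.println(timer.totalTime(TimeUnit.MILLISECONDS)); // 75.0
    }
}
```

The `_count` and `_sum` series in the Prometheus output correspond directly to `timer.count()` and the accumulated total time of the timer.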

scprek commented 4 months ago

@graemerocher Is @pumano's approach still the way to do this? We already happen to host our built-in endpoints on a different port, so I have:

Micronaut 4.5.0
micronaut-grpc 4.5.0
endpoints:
  all:
    port: 8085
  prometheus:
    enabled: true # Default
    sensitive: false

and I did this in the Micronaut Gradle plugin instead:

micronaut {
    // This is a gRPC application, but we need to include the HTTP server to expose metrics
    runtime(MicronautRuntime.NETTY)
}

I then see these logs at startup:

{"@timestamp":"2024-07-12T11:24:18.298962-04:00","message":"GRPC started on port 50051","logger_name":"i.m.g.s.GrpcEmbeddedServerListener","thread_name":"main","level":"INFO"}
{"@timestamp":"2024-07-12T11:24:18.300249-04:00","message":"Startup completed in 814ms. Server Running: http://localhost:8080","logger_name":"io.micronaut.runtime.Micronaut","thread_name":"main","level":"INFO"}

But then I get the errors mentioned in https://github.com/micronaut-projects/micronaut-grpc/issues/408. Probably unrelated, but I didn't see them before adding the HTTP server; I'll follow up in another issue if relevant.

Jul 12, 2024 11:21:24 AM io.grpc.netty.NettyServerTransport notifyTerminated
INFO: Transport failed
io.netty.handler.codec.http2.Http2Exception: HTTP/2 client preface string missing or corrupt. Hex dump for received bytes: 1603010200010001fc0303ec2d17e9698e5687ed0749216d

Normally, if I hit the gRPC endpoint with a GET or similar, I get an error like this:

Jul 12, 2024 11:26:45 AM io.grpc.netty.NettyServerTransport notifyTerminated
INFO: Transport failed
io.netty.handler.codec.http2.Http2Exception: Unexpected HTTP/1.x request: GET /prometheus