liftbridge-io / go-liftbridge

Go client for Liftbridge. https://github.com/liftbridge-io/liftbridge
Apache License 2.0

Dart client? #36

Open winwisely99 opened 4 years ago

winwisely99 commented 4 years ago

Dart / Flutter supports gRPC, Protocol Buffers, and FlatBuffers.

Is there ongoing work anywhere on a Dart lib?

tylertreat commented 4 years ago

There is a lot of interest in a Dart client (see https://github.com/liftbridge-io/go-liftbridge/issues/32 and https://github.com/liftbridge-io/liftbridge/issues/34 for example). Unfortunately, I haven't had time to focus on that. I don't know if anyone is currently working on one.

winwisely99 commented 4 years ago

thanks @tylertreat

Well, we might just embed the Go client inside Flutter using gomobile. It's a common solution. Our issue on it: https://github.com/getcouragenow/embed/issues/4
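For reference, the gomobile approach typically looks something like the commands below. This is only a sketch: `./liftbridgebridge` is a hypothetical wrapper package, since `gomobile bind` only supports a restricted subset of Go types and a thin facade over go-liftbridge would usually be needed.

```sh
# Install and initialize the gomobile toolchain.
go install golang.org/x/mobile/cmd/gomobile@latest
gomobile init

# Bind a (hypothetical) wrapper package that exposes a simplified
# Liftbridge API using only gomobile-bindable types.
gomobile bind -target=android -o liftbridge.aar ./liftbridgebridge
gomobile bind -target=ios ./liftbridgebridge   # output format varies by gomobile version
```

The generated Android archive / iOS framework can then be consumed from Flutter via a platform channel.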

tylertreat commented 4 years ago

Cool, interested to hear about your experience using Liftbridge and the outcomes of your evaluation.

winwisely99 commented 4 years ago

Update: we got gRPC working with NATS, both for web (grpc-web) and native. We had to use Envoy in k8s. So I think it will be easy to make it work with Liftbridge, and we are keen to try.
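For anyone following along, the Envoy piece of this setup is a gRPC-Web translation layer: browsers speak grpc-web over HTTP/1.1 and Envoy proxies it to the native gRPC (HTTP/2) backend. A minimal sketch of the relevant HTTP filter chain (Envoy v3 API; the surrounding listener/cluster config is omitted) looks roughly like:

```yaml
# Fragment of an Envoy HttpConnectionManager config: the grpc_web filter
# translates grpc-web requests into native gRPC before routing.
http_filters:
  - name: envoy.filters.http.grpc_web
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
  - name: envoy.filters.http.cors
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```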

How's the FlatBuffers work going? Are you close to v1?

tylertreat commented 4 years ago

We will not be using FlatBuffers in 1.0 due to the lack of gRPC support across a wide number of languages (see discussion here). The 1.0 release is very close, however. Planning on the next week or so.

winwisely99 commented 4 years ago

@tylertreat thanks for the update. Glad to hear this decision was made, as it will make it possible to start using Liftbridge now.

We are getting 50K transactions per second on NATS Streaming with the PostgreSQL backing store. With the in-memory store we get 900K on the same hardware.

So we are keen to put Liftbridge into our architecture.

One thing I have not worked out is how NATS server(s) are configured for HA when used with Liftbridge. A mutation hits NATS and is then distributed to the Liftbridge nodes (normally 3, for HA reasons). But how is NATS itself set up to be HA?

I guess the answer is that the NATS server does NOT need durable storage, because the data gets distributed to 3 Liftbridge servers immediately. But there is only one NATS server, and so it's a single point of failure.

I may be calling things by the wrong names, but I hope you can understand what I am getting at.

tylertreat commented 4 years ago

> One thing I have not worked out is how NATS server(s) are configured for HA when used with Liftbridge. A mutation hits NATS and is then distributed to the Liftbridge nodes (normally 3, for HA reasons). But how is NATS itself set up to be HA?
>
> I guess the answer is that the NATS server does NOT need durable storage, because the data gets distributed to 3 Liftbridge servers immediately. But there is only one NATS server, and so it's a single point of failure.

Indeed, the NATS server is merely a transport. Acking is the mechanism that provides delivery guarantees in Liftbridge (and NATS Streaming). For HA, I would recommend running a NATS cluster rather than a single NATS server instance. The NATS cluster is configured independently of Liftbridge.
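As a sketch of what that independent NATS cluster config looks like (hostnames `nats-0/1/2` and ports are assumptions; this is one node's config file, and each node lists the others as routes):

```
# nats-0.conf — one node of a hypothetical three-node NATS cluster.
port: 4222             # client port; Liftbridge servers connect here
cluster {
  name: liftbridge-nats
  port: 6222           # route port for server-to-server traffic
  routes = [
    "nats://nats-1:6222"
    "nats://nats-2:6222"
  ]
}
```

Each Liftbridge server would then be configured with the URLs of all three NATS nodes (via its `nats.servers` setting) so it can fail over if one NATS node goes down.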

As an aside, there are plans to allow embedding a NATS instance within a Liftbridge server, but this is not yet implemented.

winwisely99 commented 4 years ago

@tylertreat thank you, you are awesome!!

We will look into how best to run a NATS cluster on k8s and bare metal. If you have any advice, feel free to offer it.

We are open, and our k8s setup is here: https://github.com/getcouragenow/network/tree/master/main/cloud/k8