Open cmingxu opened 5 months ago
Hello,
Sorry for the late response and thank you for your initiative!
- Yes, it would be great to filter events in the same way as in the API.
- Yes, the websocket service should be separate.
- Why do you want to use Redis? Firstly, Redis Pub/Sub provides only an at-most-once message delivery guarantee. Secondly, if you create one Pub/Sub channel for each websocket data filter, you will create a connection to Redis for each subscription; this may not be good for a large number of clients. I would use RabbitMQ or ZeroMQ.
- Yes, I would also filter all data in the indexer service to avoid sending excess messages to the websocket service. How do you want to pass filters from the websocket service to the indexer, and how do you want to save them? It would be good to be able to restart the indexer service without losing users' subscription requests.
- Or we can consider another implementation, which I think is much easier: store user subscription requests only in the websocket service, and have the indexer service send all data from each processed master block. As there are not many transactions in one master block right now, it should work pretty fast. A rough sketch of this approach follows below.
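For illustration only, a minimal sketch of that simpler approach in Go: subscriptions live only in the websocket service, and the indexer just pushes every transaction of each processed master block. All type and function names below are hypothetical, not anton's actual code.

```go
// Package wshub is a hypothetical in-memory hub for the websocket service:
// it keeps per-client filters locally, so the indexer never needs to persist
// subscription requests and can be restarted freely.
package wshub

import "sync"

// TxEvent is a simplified stand-in for what the indexer would push.
type TxEvent struct {
	Address string
	OpCode  uint32
}

// Filter mirrors the HTTP API query parameters in spirit.
type Filter struct {
	Address string  // empty means "any address"
	OpCode  *uint32 // nil means "any operation"
}

func (f Filter) Match(e TxEvent) bool {
	if f.Address != "" && f.Address != e.Address {
		return false
	}
	if f.OpCode != nil && *f.OpCode != e.OpCode {
		return false
	}
	return true
}

// Hub holds one channel and one filter per connected client.
type Hub struct {
	mu      sync.RWMutex
	clients map[chan TxEvent]Filter
}

func NewHub() *Hub { return &Hub{clients: make(map[chan TxEvent]Filter)} }

// Subscribe is called by the websocket handler when a client connects.
func (h *Hub) Subscribe(f Filter) chan TxEvent {
	ch := make(chan TxEvent, 64)
	h.mu.Lock()
	h.clients[ch] = f
	h.mu.Unlock()
	return ch
}

// Broadcast is called once per transaction received from the indexer.
func (h *Hub) Broadcast(e TxEvent) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for ch, f := range h.clients {
		if f.Match(e) {
			select {
			case ch <- e:
			default: // drop for slow clients; delivery stays best-effort
			}
		}
	}
}
```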
@iam047801 Thank you for your reply. Can you please be more specific on "in indexer service send all data from the processed master block", or give some mockup code? In my understanding, the masterchain only has transactions related to validator sync-up; all user transactions happen on the basechain.
ZeroMQ or RabbitMQ would be good choices.
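As one possible illustration, a bare-bones ZeroMQ PUB/SUB link between the indexer and the websocket service could look like the sketch below. It uses the pebbe/zmq4 binding purely as an example (it requires libzmq installed); the endpoint and payload format are assumptions, not a decision.

```go
// Minimal sketch: indexer publishes a master-block payload over a PUB socket,
// the websocket service receives it on a SUB socket. Both sides are shown in
// one process only to keep the example self-contained.
package main

import (
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

// publishMasterBlock would be called by the indexer for each processed block.
func publishMasterBlock(pub *zmq.Socket, payload []byte) error {
	_, err := pub.SendBytes(payload, 0)
	return err
}

func main() {
	// Indexer side: PUB socket.
	pub, err := zmq.NewSocket(zmq.PUB)
	if err != nil {
		log.Fatal(err)
	}
	defer pub.Close()
	if err := pub.Bind("tcp://*:5556"); err != nil {
		log.Fatal(err)
	}

	// Websocket-service side: SUB socket (normally a separate process).
	sub, err := zmq.NewSocket(zmq.SUB)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()
	if err := sub.Connect("tcp://localhost:5556"); err != nil {
		log.Fatal(err)
	}
	if err := sub.SetSubscribe(""); err != nil { // receive everything
		log.Fatal(err)
	}

	// Crude wait for the subscription to propagate (ZeroMQ slow joiner);
	// real code should handle this properly.
	time.Sleep(100 * time.Millisecond)

	if err := publishMasterBlock(pub, []byte(`{"seqno":123}`)); err != nil {
		log.Fatal(err)
	}
	msg, err := sub.RecvBytes(0)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("websocket service received: %s", msg)
}
```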
Can you please be more specific on "in indexer service send all data from the processed master block"?
I mean, a processed masterchain block contains shard blocks. Each block (both from masterchain and shardchain) contains transactions. Each transaction contains messages and account state.
So we can send all that info to the websocket service on each new master block. You can refer to saveBlocksLoop in internal/app/indexer/save.go.
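For reference, a rough sketch of what that could look like on the indexer side. The types and the Publisher interface below are placeholders, not anton's real structures; the actual loop is saveBlocksLoop in internal/app/indexer/save.go.

```go
// Hypothetical sketch: after the master block and its shard blocks are saved
// to the repositories, hand the whole thing to the websocket service in one
// message; the websocket service then applies per-client filters itself.
package indexer

type Message struct{ Body []byte }

type Transaction struct {
	Address      string
	Hash         []byte
	Messages     []Message
	AccountState []byte
}

type Block struct {
	Workchain    int32
	SeqNo        uint32
	Transactions []Transaction
}

// MasterBlockEvent bundles a master block and all shard blocks it references.
type MasterBlockEvent struct {
	Master Block
	Shards []Block
}

// Publisher hides the transport to the websocket service
// (in-process channel, ZeroMQ PUB socket, RabbitMQ exchange, ...).
type Publisher interface {
	Publish(ev MasterBlockEvent) error
}

// notifyWebsocket would be called from the save loop once persistence succeeds.
func notifyWebsocket(p Publisher, master Block, shards []Block) error {
	return p.Publish(MasterBlockEvent{Master: master, Shards: shards})
}
```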
Got it.
Hi @cmingxu, are you working on this? Let me know if I can help, as I am interested too.
Mamad.bayram2448@gmail.com
Hi team,
I want to add websocket endpoints to anton for realtime notifications. Here I briefly outline my thoughts and would like your feedback.
1. Add websocket API endpoints for blocks/txs/accounts/messages; the API parameters will follow the same conventions as those for HTTP (see the sketch at the end of this comment).
2. Add a standalone websocket service, just like web in docker-compose.
3. Add Redis as an in-memory queue to store client requests (websocket -> indexer) and as a result queue for each websocket request (indexer -> websocket).
4. Add filters in indexer/save.go, just before the data is persisted into the repositories.
Looking forward to your feedback.
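To make point 1 concrete, here is a rough sketch of a websocket endpoint whose filters mirror the HTTP query parameters. It uses gorilla/websocket only as an example; the route, parameter names, and the subscribe helper are my assumptions, not anton's actual API.

```go
// Hypothetical /v0/ws/transactions endpoint: filters come from query
// parameters (like the HTTP API), matching events are streamed as JSON.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	CheckOrigin: func(r *http.Request) bool { return true }, // tighten in production
}

// txEvent is a placeholder for the filtered transaction payload.
type txEvent struct {
	Address string `json:"address"`
	Hash    string `json:"hash"`
}

// subscribe would register the filter with the hub or message queue; here it
// just emits one demo event so the example is self-contained.
func subscribe(address string) <-chan txEvent {
	ch := make(chan txEvent, 1)
	ch <- txEvent{Address: address, Hash: "demo"}
	close(ch)
	return ch
}

func wsTransactions(w http.ResponseWriter, r *http.Request) {
	// Filters mirror the HTTP API conventions, e.g. ?address=...
	address := r.URL.Query().Get("address")

	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()

	for ev := range subscribe(address) {
		if err := conn.WriteJSON(ev); err != nil {
			return // client went away
		}
	}
}

func main() {
	http.HandleFunc("/v0/ws/transactions", wsTransactions)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```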