stingergraph / stinger

The STINGER in-memory graph store and dynamic graph analysis platform. Millions to billions of vertices and edges at thousands to millions of updates per second.
http://www.stingergraph.com

Dropped batches #249

Closed: abasak24 closed this issue 6 years ago

abasak24 commented 6 years ago

Hi, I am streaming edges into the STINGER server using the CSV stream parser. However, batch_server.cpp defines a STINGER_MAX_BATCHES of 100, beyond which further update batches are dropped, and I can see multiple batches being dropped when I run my program. I have three questions about this arrangement:

1) What is the reasoning behind the MAX value of 100?
2) What is the fate of the dropped batches? Will the client streamer try to enqueue a dropped batch again on the next attempt? Where is the code path for what happens to a dropped batch? If a dropped batch is not re-enqueued, does this mean some edges are entirely missing from the graph?
3) The STINGER struct keeps track of dropped batches via a uint64_t dropped_batches member. What purpose does it serve?

Thanks! Abanti

ehein6 commented 6 years ago

The batch dropping mechanism is designed to handle the case where data arrives faster than the stinger server can process it. Incoming batches are inserted into a queue, and the server is supposed to dequeue them and apply them to the graph. STINGER_MAX_BATCHES is the maximum queue size, beyond which we simply drop batches instead of trying to hold onto all of them. Basically, it answers "how far can we fall behind before we give up?"

Dropped batches go away; they are ignored. That's better than running out of memory or crashing. The dropped_batches counter tracks how many batches have been dropped, as a measure of how bad the problem is.
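To make the mechanism concrete, here is a minimal sketch of the drop-on-full behavior described above. The type and class names are hypothetical stand-ins, not the actual code in batch_server.cpp; only the constant name STINGER_MAX_BATCHES and the dropped_batches counter come from the source.

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>

// Hypothetical stand-in for a parsed batch of edge updates.
struct EdgeBatch { /* edge update records */ };

// Matches the constant defined in batch_server.cpp.
constexpr std::size_t STINGER_MAX_BATCHES = 100;

class BatchQueue {
  std::queue<EdgeBatch> queue_;
  std::mutex mutex_;
  std::atomic<uint64_t> dropped_batches_{0}; // mirrors the counter on the STINGER struct

public:
  // Called by the receive thread for every incoming batch.
  // Returns false when the batch is dropped because the queue is full.
  bool enqueue(EdgeBatch batch) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (queue_.size() >= STINGER_MAX_BATCHES) {
      ++dropped_batches_; // count it, then discard: no retry, no re-enqueue
      return false;
    }
    queue_.push(std::move(batch));
    return true;
  }

  // Called by the insert thread, which applies batches to the graph.
  bool dequeue(EdgeBatch & out) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (queue_.empty()) return false;
    out = std::move(queue_.front());
    queue_.pop();
    return true;
  }

  uint64_t dropped() const { return dropped_batches_.load(); }
};
```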

It's possible that you are overwhelming the server with many small batches instead of sending all the edges in one large batch. Try increasing the batch size in the CSV stream parser; the default of 1000 may be too small.
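As an illustration of why a larger batch size helps: for the same edge rate, fewer and bigger batches mean the server's queue fills more slowly. The sketch below uses hypothetical names (Edge, send_batch, stream_edges), not the actual CSV stream client, which ships protobuf StingerBatch messages.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical edge record.
struct Edge { std::string src, dst; };

// Stand-in for the real network send to the stinger server.
void send_batch(const std::vector<Edge> & batch) {
  std::printf("sending batch of %zu edges\n", batch.size());
}

// Accumulate edges into batches of batch_size before sending, so the
// server dequeues a few large batches instead of a flood of small ones.
void stream_edges(const std::vector<Edge> & edges, std::size_t batch_size) {
  std::vector<Edge> batch;
  batch.reserve(batch_size);
  for (const Edge & e : edges) {
    batch.push_back(e);
    if (batch.size() == batch_size) {
      send_batch(batch);
      batch.clear();
    }
  }
  if (!batch.empty()) send_batch(batch); // flush the remainder
}
```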

abasak24 commented 6 years ago

Thanks!