br1ghtyang / asterixdb

Automatically exported from code.google.com/p/asterixdb

Data (feed) ingestion halts as it hits in-memory component flush #569

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Hyracks Branch: master
Asterix Branch: raman/master_udf_feeds

Observation:
Running the feed ingestion test via the feed adaptor, once we hit AsterixDB's in-memory 
component size limit and a flush is triggered, the ingestion client cannot proceed, and the 
system does not flush either. The whole system ends up hanging.

How to Reproduce:

- create your asterix instance 
- use the attached .aql file to create the required datatypes and dataset (**make sure 
you replace the instance name and NC name there correctly - I've already left 
a comment for you there**)
- run the attached Feeder2.jar from your client machine to start ingestion. Note that 
your client should point at your NC (not the CC), as the NC is your ingestion 
node. Run Feeder2.jar as:

java -jar Feeder2.jar XXX 2909 TPS DUR

where XXX is the IP address of your NC (your ingestion node), 2909 is the port,
TPS is the ingestion rate (how many tweets per second to send), and
DUR is the duration (in seconds) you want your client to run.

as an example

java -jar Feeder2.jar 127.0.0.1 2909 100 500

runs against localhost for 500 seconds, sending 100 tweets/sec.

Original issue reported on code.google.com by pouria.p...@gmail.com on 18 Jul 2013 at 10:19

Attachments:

GoogleCodeExporter commented 8 years ago
Fixed in raman/master_udf_feeds.

Original comment by salsuba...@gmail.com on 18 Jul 2013 at 8:32

GoogleCodeExporter commented 8 years ago

Original comment by salsuba...@gmail.com on 23 Aug 2013 at 5:33