yuyantingzero opened this issue 9 years ago
@yuyantingzero I fixed this problem by changing the workload file: you can change the number of columns in each record, the size of each column, and the number of rows (the DB size).
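For reference, those knobs live in the YCSB workload properties file. A minimal sketch of an override file (the `fieldcount`/`fieldlength` values shown are the YCSB CoreWorkload defaults; the counts and proportions are illustrative, not this project's defaults):

```properties
# Illustrative YCSB workload override.
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldcount=10        # columns per record (default 10)
fieldlength=100      # bytes per column (default 100) -> ~1 KB/record
recordcount=1000000  # rows loaded into the DB, i.e. DB size
operationcount=1000000
readproportion=0.95
updateproportion=0.05
```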
The current configuration populates the database with ~1 GB of records. That is far too small to stress Cassandra; I usually increase it to at least 2x the total memory of the cluster (via `--ycsb_recordcount`).
I'm OK with increasing the default, but for in-memory cases (Aerospike when it's not configured to use disk, Redis) we need to keep the database size relatively small. How about increasing the default recordcount to 2x the total memory of the data-serving nodes for MongoDB, Aerospike (disk only), HBase, and Cassandra?
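The "2x total memory" sizing above can be sketched as a small calculation. This is a hedged illustration, not code from the project: the helper name is hypothetical, and it assumes the YCSB default of ~1 KB per record (10 fields x 100 bytes).

```python
def recordcount_for_memory(num_nodes, mem_per_node_gb,
                           record_bytes=1000, multiplier=2):
    """Return a YCSB recordcount whose total data size is `multiplier` x
    the aggregate memory of the data-serving nodes.

    Assumes ~1 KB/record (YCSB CoreWorkload default: 10 fields x 100 bytes).
    """
    total_mem_bytes = num_nodes * mem_per_node_gb * 1024 ** 3
    return int(multiplier * total_mem_bytes // record_bytes)

# Example: 3 data-serving nodes with 30 GB RAM each (e.g. n1-standard-8).
print(recordcount_for_memory(3, 30))  # -> 193273528 (~193M records, ~180 GB)
```

The resulting value would be passed via `--ycsb_recordcount`; for the in-memory stores mentioned above you would keep the multiplier well below 1 instead.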
The default YCSB config for workloadb (95% read, 5% update) on n1-standard-8 doesn't issue any disk reads with a newer Cassandra version (2.1.10); the entire DB fits in memory.