
make benchmarks of different sharding algos #21

Open fadyat opened 11 months ago

fadyat commented 11 months ago

Step 1: Defining Metrics and Benchmarking Scenarios

  1. Performance Evaluation Metrics (a collection sketch follows this list):

    • Number of cache misses/hits.
    • Node fill levels.
    • Key distribution across nodes.
  2. Benchmarking Scenarios:

    • Fixed number of keys and nodes.
    • Fixed number of keys and variable number of nodes.
    • Variable number of keys and fixed number of nodes.
    • Variable number of keys and nodes.
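
A minimal sketch of a per-run container for the metrics in item 1; the ShardMetrics name and its fields are assumptions for illustration, not types from the repo:

// ShardMetrics collects the measurements listed above for a
// single benchmark run.
type ShardMetrics struct {
    CacheHits   int
    CacheMisses int
    // KeysPerNode is the fill level of each node: how many
    // keys the algorithm assigned to it.
    KeysPerNode map[string]int
}

// HitRatio is the fraction of lookups served without a miss.
func (m ShardMetrics) HitRatio() float64 {
    total := m.CacheHits + m.CacheMisses
    if total == 0 {
        return 0
    }
    return float64(m.CacheHits) / float64(total)
}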

Step 2: Developing Test Data

  1. Creating Fixed Data:

    • Generating test datasets with a fixed number of keys and nodes for each benchmarking scenario.
  2. Creating Variable Data:

    • Generating test datasets with varying numbers of keys and nodes for each scenario (a generator sketch follows this list).
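
A sketch of the generators; the function names and the string-based key/node model are assumptions chosen for illustration:

import "fmt"

// generateKeys builds n synthetic keys. A fixed n covers the
// fixed-data case; sweeping n covers the variable case.
func generateKeys(n int) []string {
    keys := make([]string, n)
    for i := range keys {
        keys[i] = fmt.Sprintf("key-%d", i)
    }
    return keys
}

// generateNodes builds n node identifiers the same way.
func generateNodes(n int) []string {
    nodes := make([]string, n)
    for i := range nodes {
        nodes[i] = fmt.Sprintf("node-%d", i)
    }
    return nodes
}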

Step 3: Implementing Benchmarking

  1. Selection of Measurement Tools:

    • Utilizing specialized libraries or tools to measure cache misses/hits and node fill levels.
  2. Implementation of Sharding Algorithms:

    • Writing code for three different sharding algorithms.
  3. Metric Measurement:

    • Running benchmarks on test data for each scenario.
    • Measuring cache misses/hits, node fill levels, and key distribution (see the sketch below).
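
The distribution and fill-level measurements can be derived from per-node key counts; in this sketch the assign callback stands in for whichever sharding algorithm is under test:

import "math"

// distribution counts how many keys each node receives, giving
// both fill levels and the spread of keys across nodes.
func distribution(keys, nodes []string, assign func(string, []string) string) map[string]int {
    fill := make(map[string]int, len(nodes))
    for _, k := range keys {
        fill[assign(k, nodes)]++
    }
    return fill
}

// imbalance reports the max/min fill ratio as a quick uniformity
// check; a perfectly even distribution approaches 1.0.
func imbalance(fill map[string]int) float64 {
    lo, hi := math.MaxInt, 0
    for _, c := range fill {
        if c < lo {
            lo = c
        }
        if c > hi {
            hi = c
        }
    }
    if lo == 0 {
        return math.Inf(1)
    }
    return float64(hi) / float64(lo)
}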

Step 4: Analyzing Results

  1. Comparative Analysis:

    • Comparing results for each algorithm across different benchmarking scenarios.
    • Determining the best algorithm for each scenario.
  2. Assessing Impact of Changes:

    • Analyzing the impact of adding/removing nodes on key distribution and node fill levels (see the sketch below).
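
One concrete way to assess this is to count how many keys change owners when the node set changes; a sketch, again with a generic assign callback:

// movedKeys counts keys that map to a different node after the
// node set changes. Consistent hashing should move roughly
// len(keys)/len(newNodes) keys on a single-node change, while
// naive modulo sharding remaps almost everything.
func movedKeys(keys, oldNodes, newNodes []string, assign func(string, []string) string) int {
    moved := 0
    for _, k := range keys {
        if assign(k, oldNodes) != assign(k, newNodes) {
            moved++
        }
    }
    return moved
}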

Step 5: Reporting and Conclusions

  1. Preparing a Report:

    • Documenting benchmarking results for each algorithm and scenario.
    • Including an analysis of the impact of changes and recommendations for algorithm usage in different scenarios.
  2. Conclusions:

    • Drawing conclusions about the performance of each algorithm in various conditions.
    • Providing recommendations for choosing algorithms based on different usage scenarios.

This approach should make the benchmarking process more representative of real-world usage by covering varied conditions and membership changes within the system.

fadyat commented 10 months ago

// Benchmark sketch: the package name and the generateFixedKeys,
// generateFixedNodes, generateKeys, and ConsistentHashing helpers
// are placeholders assumed to be defined elsewhere in the package.
package sharding

import (
    "fmt"
    "testing"
)

func BenchmarkFixedKeysFixedNodes(b *testing.B) {
    // Setup with a fixed number of keys and nodes.
    keys := generateFixedKeys()
    nodes := generateFixedNodes()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        // Run the benchmark for each sharding algorithm.
        ConsistentHashing(keys, nodes)
        // Run other algorithms similarly.
    }
}

func BenchmarkVariableKeysFixedNodes(b *testing.B) {
    // Setup with a variable number of keys and fixed nodes:
    // each key-set size becomes its own sub-benchmark.
    nodes := generateFixedNodes()
    for _, n := range []int{1_000, 10_000, 100_000} {
        keys := generateKeys(n)
        b.Run(fmt.Sprintf("keys=%d", n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                ConsistentHashing(keys, nodes)
                // Run other sharding algorithms similarly.
            }
        })
    }
}

// Create similar benchmark functions for the other scenarios.
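
With the helpers filled in, the suite runs with the standard toolchain, e.g. go test -bench=. -benchmem; comparing the sub-benchmark output across algorithms (for example with benchstat) then feeds directly into the analysis in Step 4.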