microsoft / ALEX

A library for building an in-memory, Adaptive Learned indEX
MIT License

Cannot replicate experiments in Table 2 #11

Closed: kaiwang19 closed this issue 3 years ago

kaiwang19 commented 3 years ago

Dear Jialing,

I want to replicate the experiments of Table 2 in the paper. In Section 6.1.2, the paper states:

For a given dataset, we initialize an index with 100 million keys.

I need help with the following three problems:

For Q3, I used the benchmark to test this and got different statistics. I set both init_num_keys and total_num_keys to 100M so that the benchmark only bulk-loads 100M keys. For example:

```sh
./build/benchmark \
    --keys_file=[path to location of a given dataset] \
    --keys_file_type=binary \
    --init_num_keys=100000000 \
    --total_num_keys=100000000 \
    --batch_size=100000 \
    --insert_frac=0.5 \
    --lookup_distribution=zipf \
    --print_batch_stats
```

For longitudes, I bulk-loaded 100M keys. The results are as follows: [screenshot of benchmark output]

For longlat, I bulk-loaded 100M keys. The results are as follows: [screenshot of benchmark output]

For lognormal, I bulk-loaded 100M keys. The results are as follows: [screenshot of benchmark output]

For YCSB, I bulk-loaded 100M keys. The results are as follows: [screenshot of benchmark output]

Thank you so much for your time.

jialinding commented 3 years ago

It denotes the average depth over all keys (so equivalently, the average depth of all data nodes, weighted by how many keys fall in that data node).

Yes.
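
On the first point, here is a minimal sketch of that weighting, purely for illustration. The `DataNodeStats` struct and the traversal that would fill it are assumptions for the example; ALEX does not expose this as a single public call as far as I know.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-data-node summary, only here to illustrate the weighting;
// it is not a struct that ALEX itself provides.
struct DataNodeStats {
  int depth;         // depth of the data node below the root
  int64_t num_keys;  // number of keys currently stored in that node
};

// Average depth over all keys = sum(depth * num_keys) / sum(num_keys),
// i.e. the average data-node depth weighted by node occupancy.
double avg_key_depth(const std::vector<DataNodeStats>& nodes) {
  double weighted_depth = 0.0;
  int64_t total_keys = 0;
  for (const auto& n : nodes) {
    weighted_depth += static_cast<double>(n.depth) * n.num_keys;
    total_keys += n.num_keys;
  }
  return total_keys > 0 ? weighted_depth / total_keys : 0.0;
}
```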

There are three reasons you're seeing different numbers. First, the cost model weights we used in the paper (see the last paragraph of page 18 in our arxiv report) are actually slightly different from the default weights in this open-source implementation. Second, the expected_insert_frac was likely set to 0 to produce these numbers. Third, we've made quite a few changes since we initially submitted our paper, so the bulk loading behavior is different. It is probably not possible to exactly reproduce the paper's results by using this open-source implementation.
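
If it helps, this is roughly how the second point is exercised from user code. I believe the setter is `set_expected_insert_frac` (called before bulk loading, as the benchmark does), but treat the exact call and the tiny dataset below as assumptions and check alex.h.

```cpp
#include <utility>
#include <vector>

#include "alex.h"  // core header from this repository

int main() {
  // Keys must be sorted before bulk_load.
  std::vector<std::pair<double, double>> values = {
      {1.0, 10.0}, {2.0, 20.0}, {3.0, 30.0}};

  alex::Alex<double, double> index;
  // 0 tells the bulk-load cost model to optimize purely for lookups,
  // which is likely the setting used for the paper's Table 2 numbers.
  index.set_expected_insert_frac(0);
  index.bulk_load(values.data(), static_cast<int>(values.size()));
  return 0;
}
```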

kaiwang19 commented 3 years ago

Thank you so much for your detailed reply. I tried changing expected_insert_frac, and it helps when I change it from 1 to 0. As for the cost model weights, I changed them as follows:

```cpp
// Intra-node cost weights
double kExpSearchIterationsWeight = 10;  // 20 -> 10
double kShiftsWeight = 1;                // 0.5 -> 1

// TraverseToLeaf cost weights
double kNodeLookupsWeight = 10;    // 20 -> 10
double kModelSizeWeight = 1e-6;    // 5e-7 -> 1e-6
```
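
For context on what these weights multiply: my understanding of the paper's cost model is that the intra-node weights score the expected work inside a data node, while the TraverseToLeaf weights score the cost of reaching it. Below is a rough sketch of that weighted sum, as an illustration of my reading of the paper rather than code copied from ALEX.

```cpp
// Illustrative only; variable names and the exact formula reflect my reading
// of the paper's cost model, not the actual ALEX source.
double data_node_cost(double expected_exp_search_iters, double expected_shifts) {
  double kExpSearchIterationsWeight = 10;  // tuned value from above
  double kShiftsWeight = 1;
  return kExpSearchIterationsWeight * expected_exp_search_iters +
         kShiftsWeight * expected_shifts;
}

double traverse_to_leaf_cost(double depth, double model_size_bytes) {
  double kNodeLookupsWeight = 10;  // tuned value from above
  double kModelSizeWeight = 1e-6;
  return kNodeLookupsWeight * depth + kModelSizeWeight * model_size_bytes;
}
```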

The results are as follows: [screenshots of benchmark output for the four datasets]

The statistics for longitudes, longlat, and lognormal are quite close to the statistics in the paper. The performance on YCSB has the largest difference. So, as with the third reason you mentioned, it is probably not possible to exactly reproduce the paper's results using this open-source implementation.

Could I ask another small question about the retraining strategy? I found that there is a hyper-parameter (a threshold) in the resize method of data nodes. This threshold is set to 50 and triggers retraining of the model. Do I need to change this threshold for different datasets, or is this threshold the best choice based on your empirical experiments? Thank you so much for your patience. I am quite interested in ALEX~
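
To make the question concrete, here is a simplified sketch of the behavior I am asking about; the names, the exact condition, and the model math are my assumptions for illustration, not the actual data node code.

```cpp
#include <cstddef>

// Per-node linear model, as in a learned index: position ~= slope * key + intercept.
struct LinearModel {
  double slope = 0.0;
  double intercept = 0.0;
};

// Sketch of the retrain-vs-rescale decision when a data node is resized.
void on_resize(LinearModel& model, std::size_t num_keys,
               std::size_t old_capacity, std::size_t new_capacity,
               bool force_retrain) {
  constexpr std::size_t kRetrainThreshold = 50;  // the threshold in question
  if (num_keys < kRetrainThreshold || force_retrain) {
    // The node is small (or a retrain is forced), so rebuilding the model
    // from scratch over the current keys is cheap:
    // retrain_model(model, keys, num_keys);  // hypothetical helper
  } else {
    // Otherwise keep the trained model and only stretch it to the new
    // array size before re-placing the keys.
    double scale = static_cast<double>(new_capacity) / old_capacity;
    model.slope *= scale;
    model.intercept *= scale;
  }
}
```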

jialinding commented 3 years ago

Yes, that's a threshold value that we found works well on all datasets in our empirical experiments, and changing it shouldn't have a big impact on performance. But of course you're free to try modifying it if you want to improve performance a bit further on a particular dataset.

kaiwang19 commented 3 years ago

Thanks so much. Your reply helps a lot.