brandontrabucco / design-bench

Benchmarks for Model-Based Optimization
MIT License
80 stars 19 forks

bugs in TF10 dataset #10

Open tung-nd opened 1 year ago

tung-nd commented 1 year ago

Hi. Thank you for the great work and for publishing the benchmark. I am experimenting with the TF10 task and found that there are more than 4M data points in the public dataset and more than 8M in the hidden dataset. Do you know if this is a bug? These numbers are larger than the total number of possible configurations (4^10 ~ 1M).
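The counting argument behind the report can be checked directly: DNA uses a 4-letter alphabet, so there are only 4^10 distinct 10-mers, which is well below the observed dataset sizes. A quick sketch (the ~4M figure is taken from the comment above, not measured here):

```python
# A 10-mer over the 4-letter DNA alphabet has at most 4**10 distinct sequences.
num_possible = 4 ** 10
print(num_possible)  # 1048576

# The public TFBind10 split reportedly has over 4M rows, so by the
# pigeonhole principle the dataset must contain duplicate inputs.
assert num_possible < 4_000_000
```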

TsingQAQ commented 1 year ago

hi @tung-nd, do you have any clue now? It seems both TF8 and TF10 have duplicate inputs. For TF8 I can safely remove the duplicates since their outputs are exactly the same, but for TF10 the duplicated inputs have clearly different outputs.
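For the TF8 case described here, where duplicated inputs carry identical outputs, exact-duplicate rows can be dropped with a single `np.unique` call. A minimal sketch, assuming integer-encoded sequences in `x` and scores in `y` (these toy arrays are illustrative, not the design-bench data format):

```python
import numpy as np

# Toy stand-in for TFBind8: integer-encoded 8-mers (values 0-3) with scores.
x = np.array([[0, 1, 2, 3, 0, 1, 2, 3],
              [0, 1, 2, 3, 0, 1, 2, 3],   # exact duplicate input...
              [3, 2, 1, 0, 3, 2, 1, 0]])
y = np.array([[0.5], [0.5], [0.9]])       # ...with an identical score

# Drop rows whose (input, output) pair is an exact duplicate by taking
# unique rows of the concatenated (x, y) matrix.
xy = np.concatenate([x, y], axis=1)
_, keep = np.unique(xy, axis=0, return_index=True)
keep = np.sort(keep)  # preserve the original row order
x_dedup, y_dedup = x[keep], y[keep]
print(x_dedup.shape)  # (2, 8)
```

This only works when duplicated inputs agree on the output; the TF10 case, where they disagree, needs aggregation instead of removal.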

brandontrabucco commented 8 months ago

Hello tung-nd and TsingQAQ,

Thanks for bringing this to my attention! After inspecting the TFBind10 dataset, it appears that each 10-mer sequence is evaluated 4 times to compute the ddG score. In the current benchmark, each trial was stored as an additional datapoint.

Now that this repetition is known, the 4 trials for each sequence should be averaged and treated as a single datapoint so that there is no overlap between the training and testing datasets. I'm working on a patch for this in the form of a TFBind10-Exact-v1 task.
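The proposed fix can be sketched as a group-by-sequence average: collapse identical inputs to one row and replace their scores with the mean over trials. A minimal NumPy sketch, with illustrative array names rather than actual design-bench code:

```python
import numpy as np

# Toy TFBind10-like data: one 10-mer measured 4 times, another once.
x = np.array([[0, 1, 2, 3, 0, 1, 2, 3, 0, 1]] * 4 +
             [[3, 2, 1, 0, 3, 2, 1, 0, 3, 2]])
y = np.array([[1.0], [2.0], [3.0], [2.0], [0.5]])

# Group identical sequences, then average each group's scores.
x_unique, inverse = np.unique(x, axis=0, return_inverse=True)
inverse = inverse.reshape(-1)
counts = np.bincount(inverse).astype(float)
y_mean = np.zeros(x_unique.shape[0])
np.add.at(y_mean, inverse, y[:, 0])   # sum scores per unique sequence
y_mean = (y_mean / counts).reshape(-1, 1)
print(x_unique.shape, y_mean.ravel())  # (2, 10) [2.  0.5]
```

Averaging keeps every unique sequence exactly once, so a random train/test split of the collapsed arrays can no longer place two trials of the same 10-mer on opposite sides of the split.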

The original task with duplicate datapoints will continue to be served through TFBind10-Exact-v0, which is the current id for that task in design-bench.

I will add a similar patch for TFBind8.

-Brandon