moj-analytical-services / splink

Fast, accurate and scalable probabilistic data linkage with support for multiple SQL backends
https://moj-analytical-services.github.io/splink/
MIT License

[BUG] `seed` parameter not working in `linker.estimate_u_using_random_sampling()` #1743

Open · James-Osmond opened 12 months ago

James-Osmond commented 12 months ago

What happens?

$m$- and $u$-probabilities are not reproducible across different runs of Splink (using a DuckDBLinker), even when using the `seed` parameter. I checked the match weights chart and the numbers are slightly different each time I run.

To Reproduce

To test this bug I used this example code with Splink v3.9.9 - the only change I made to the example was adding `seed=42` to `linker.estimate_u_using_random_sampling()`, like so:

linker.estimate_u_using_random_sampling(max_pairs=5e6, seed=42)

Then I made a note of some of the values in the match weights chart the example code produces, and saw the values changed from run to run.

OS:

Windows 10

Splink version:

3.9.9

Have you tried this on the latest master branch?


RobinL commented 12 months ago

Thanks for the report

What do you get if you run this?

import pandas as pd

from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
from splink.duckdb.comparison_library import exact_match
from splink.duckdb.blocking_rule_library import block_on

pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k

# Simple settings dictionary - two exact-match comparisons are enough to test the seed
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        exact_match("first_name"),
        exact_match("surname"),

    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01
}
for i in range(10):
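    # If the seed is respected, this should print an identical u_probability every time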
    linker = DuckDBLinker(df, settings)

    linker.estimate_u_using_random_sampling(max_pairs=1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])

I get 0.011881562201718104 every time

James-Osmond commented 12 months ago

Thanks for such a quick response @RobinL! That's very interesting - you're right, I get that exact value every time. However, if I use a more complex comparisons list, I no longer get consistent values with the following:

import pandas as pd

from splink.datasets import splink_datasets
from splink.duckdb.linker import DuckDBLinker
from splink.duckdb.blocking_rule_library import block_on
import splink.duckdb.comparison_template_library as ctl
import splink.duckdb.comparison_library as cl

pd.options.display.max_rows = 1000
df = splink_datasets.historical_50k

# Settings dictionary with the more complex comparisons from the example
settings = {
    "link_type": "dedupe_only",
    "blocking_rules_to_generate_predictions": [
        block_on(["first_name", "surname"]),
        block_on(["surname", "dob"]),
        block_on(["first_name", "dob"]),
        block_on(["postcode_fake", "first_name"]),
    ],
    "comparisons": [
        ctl.name_comparison("first_name", term_frequency_adjustments=True),
        ctl.name_comparison("surname", term_frequency_adjustments=True),
        ctl.date_comparison("dob", cast_strings_to_date=True, invalid_dates_as_null=True),
        ctl.postcode_comparison("postcode_fake"),
        cl.exact_match("birth_place", term_frequency_adjustments=True),
        cl.exact_match("occupation",  term_frequency_adjustments=True),
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
    "max_iterations": 10,
    "em_convergence": 0.01
}

for i in range(10):
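    # Despite the fixed seed, the printed u_probability now varies from run to run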
    linker = DuckDBLinker(df, settings)

    linker.estimate_u_using_random_sampling(max_pairs=1e5, seed=2)
    print(linker.save_settings_to_json()["comparisons"][0]["comparison_levels"][1]["u_probability"])

ADBond commented 12 months ago

Haven't looked in great detail yet, but it looks like the table __splink__df_concat_with_tf is not consistent between iterations. When we Bernoulli sample from it, because the rows are in a different order, we get a different __splink__df_concat_with_tf_sample table, which ultimately means a different u-probability. This might be driven by the tf tables themselves, which are quite variable in order - I guess this then affects how the join ends up working
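
To make the mechanism concrete, here's a standalone sketch (not Splink code) that reproduces the effect in DuckDB directly - the toy table stands in for __splink__df_concat_with_tf, and the (bernoulli, seed) sampling syntax is plain DuckDB:

import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE t AS SELECT range AS unique_id FROM range(10000)")
# Same rows, different physical order
con.execute("CREATE TABLE t_shuffled AS SELECT * FROM t ORDER BY random()")

# Bernoulli sampling decides inclusion row-by-row in scan order, so the
# same seed applied to a differently-ordered table picks different rows
sample_sql = "SELECT unique_id FROM {} USING SAMPLE 5% (bernoulli, 2)"
ids = {r[0] for r in con.execute(sample_sql.format("t")).fetchall()}
ids_shuffled = {r[0] for r in con.execute(sample_sql.format("t_shuffled")).fetchall()}
print(ids == ids_shuffled)  # typically False: same seed, different sample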

RobinL commented 12 months ago

Right - nice spot - so it's a consequence of tables (results) in SQL being inherently unordered (unless an ORDER BY is specified) as opposed to anything to do with the random number generator itself. That would make sense.

Whilst this would theoretically be fixable by ensuring we put an unambiguous ORDER BY in all our results, it would add too much complexity to the codebase (and be a big piece of work), so we might just need to drop support for `seed` altogether in backends that produce inconsistent results. @RossKen what do you think?

ADBond commented 12 months ago

Perhaps we could ORDER BY the table we sample from within the method, if a seed is provided? Something roughly like this (in `estimate_u.py`):

if seed is not None:
    # Parenthesise the subquery so it is valid in the FROM clause below;
    # the deterministic ORDER BY makes the sample independent of row order
    table_to_sample_from = (
        "(SELECT * FROM __splink__df_concat_with_tf ORDER BY unique_id)"
    )
else:
    table_to_sample_from = "__splink__df_concat_with_tf"
...
sql = f"""
SELECT *
FROM {table_to_sample_from}
{sampling_sql_string}
"""

I realise there is a computational cost to this, but it's potentially not too bad, and as it is an (optional) part of training it might be a reasonable trade-off to offer users
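
For concreteness, a rough sketch of how those pieces might then compose for the DuckDB backend - seed, percent and sampling_sql_string below are made-up stand-ins for values Splink derives internally (e.g. from max_pairs), not its actual code:

# Hypothetical stand-ins for internally computed values
seed = 2
percent = 1.5
sampling_sql_string = f"USING SAMPLE {percent}% (bernoulli, {seed})"

# Deterministically ordered subquery, per the suggestion above
table_to_sample_from = "(SELECT * FROM __splink__df_concat_with_tf ORDER BY unique_id)"

sql = f"""
SELECT *
FROM {table_to_sample_from}
{sampling_sql_string}
"""
print(sql)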

RobinL commented 12 months ago

Yes - it hadn't occurred to me the solution might be that simple - but if it is, that sounds like a sensible solution to me

ADBond commented 12 months ago

I'm happy to give it a whirl and then revisit if it turns out not to be just a quick few lines