ooni / backend

Everything related to OONI backend infrastructure: ooni/api, ooni/pipeline, ooni/sysadmin, collector, bouncers and test-helpers

riseupvpn: relax scoring logic #745

Closed bassosimone closed 9 months ago

bassosimone commented 9 months ago

In https://github.com/ooni/probe-cli/pull/1363 et al. I am preparing riseupvpn to be included again. Because riseupvpn is a default-disabled experiment, it won't run on probes until check-in explicitly enables it.

When we enable riseupvpn, we will need to significantly relax the backend scoring logic. The probe currently does not score riseupvpn and never claims there was an anomaly.

I think the backend should do the same. Basically, we should collect and show riseupvpn measurements without interpreting them. This is part of what I discussed with @cyBerta in https://github.com/ooni/probe-cli/pull/1125#pullrequestreview-1526320800.

For reference, the current scoring logic is the following:

def score_riseupvpn(msm: dict) -> dict:
    """Calculate measurement scoring for RiseUp VPN
    Returns a scores dict
    """
    # https://github.com/ooni/backend/issues/541
    scores = init_scores()
    tk = g_or(msm, "test_keys", {})
    tstatus = tk.get("transport_status") or {}
    obfs4 = tstatus.get("obfs4")
    openvpn = tstatus.get("openvpn")
    anomaly = (
        tk.get("api_status") == "blocked"
        or tk.get("ca_cert_status") is False
        or obfs4 == "blocked"
        or openvpn == "blocked"
    )
    if anomaly:
        scores["blocking_general"] = 1.0

    scores["extra"] = dict(test_runtime=msm.get("test_runtime"))
    return scores

We should drop the part that computes whether there was an anomaly. If possible, though, we should not flag the measurement as failed, since that would hide it. The measurement should simply appear on Explorer without any backend-derived inference on whether there was blocking.
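
As a rough sketch, the relaxed function could keep only the score initialization and the test_runtime extra, and never set blocking_general. This assumes the same init_scores helper used in the current code; the issue reference in the comment is illustrative and the exact shape is up to whoever implements it:

def score_riseupvpn(msm: dict) -> dict:
    """Calculate measurement scoring for RiseUp VPN
    Returns a scores dict without computing any anomaly
    """
    # https://github.com/ooni/backend/issues/745
    # Collect and publish riseupvpn measurements without interpreting them:
    # blocking_general is never set, so the measurement shows up on Explorer
    # with no backend-derived verdict on whether there was blocking.
    scores = init_scores()
    scores["extra"] = dict(test_runtime=msm.get("test_runtime"))
    return scores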