mint-metrics / mojito-r-analytics

Reporting & analytics tools for the Mojito split testing framework
https://mojito.mx/docs/r-analytics-intro
BSD 3-Clause "New" or "Revised" License

Add Snowplow/BigQuery support & more flexible Snowplow Redshift table selection #9

Open kingo55 opened 3 years ago

kingo55 commented 3 years ago

I've got experiment reports working from our Snowplow/BigQuery GCP pipeline. Will run you through it tomorrow, but here's a summary:

Summary of report config changes

Source tables are defined in wave_params

This allows us to define the specific custom schemas, tables, views or subqueries each experiment requires.

wave_params <- list(
  wave_id="ex282",
  start_date="2020-08-13 10:36:00",
  stop_date="2020-09-10 09:13:00",
  time_grain="day",
  recipes=c("Control","Treatment"),
  tables=list(
    exposure="mojito_exposures",
    goal="mojito_conversions",
    segment=NULL,
    channel=NULL,
    failure="mojito_failures"
  )
)
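
Because these entries are plain strings interpolated into the report SQL, a view or subquery can be slotted in wherever a table name would go. A minimal sketch of the idea, assuming a sprintf-style template (the template and the wave_id column are hypothetical, not the package's actual query):

# Hypothetical interpolation of a tables entry into an exposures query
exposures_sql <- sprintf(
  "SELECT * FROM %s WHERE wave_id = '%s'",
  wave_params$tables$exposure,
  wave_params$wave_id
)

# A filtered subquery works in place of a plain table name
wave_params$tables$exposure <- "(SELECT * FROM mojito_exposures WHERE geo = 'AU')"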

This does away with the often confusing wave_params$client_id and wave_params$subject table-naming abomination we've used previously, and should make the reports easier for other users to adopt.

Change to the way goals are defined

BigQuery's regex functions are a bit awkward to use and don't fit into the format we're used to:

REGEXP_CONTAINS(goal, r'^transaction [0-9]+(something|anotherthing)')

This is in contrast to Redshift's regex match:

goal ~ '^transaction [0-9]+(something|anotherthing)'

Because of this, I think we can combine the "goal" and "operand" parameters we pass into reports into a single goal parameter:

  list(
    title="2 or more bookings",
    goal="goal like 'transaction %'",
    goal_count=2
  )
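
And since the goal is now a raw SQL predicate, regex goals can be written directly in whichever dialect the warehouse needs, rather than the report gluing an operand onto the column. A sketch reusing the two snippets above (the titles are illustrative):

# BigQuery: the regex goal expressed directly as a predicate
list(
  title="Regex transactions",
  goal="REGEXP_CONTAINS(goal, r'^transaction [0-9]+(something|anotherthing)')"
)

# Redshift: the same goal in its native regex syntax
list(
  title="Regex transactions",
  goal="goal ~ '^transaction [0-9]+(something|anotherthing)'"
)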

BigQuery time grains are less flexible

Whereas with Redshift we could specify time grains in plural or singular form, BigQuery only accepts the singular form, which we may need to handle, e.g. in the time to convert plots where we use the SQL time grain directly.
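
One way to handle this would be to normalise the grain before it reaches BigQuery SQL. A rough sketch, where normalise_time_grain is a hypothetical helper rather than anything in the package:

# Hypothetical helper: strip a trailing "s" so 'days' becomes 'day',
# since BigQuery's TIMESTAMP_TRUNC/DATE_TRUNC only accept singular date parts
normalise_time_grain <- function(time_grain) {
  sub("s$", "", tolower(time_grain))
}

normalise_time_grain("days")  # "day"
normalise_time_grain("day")   # "day"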

Other notable changes

It looks like GCP is reporting slightly more traffic/conversions, but the numbers are incredibly close considering we're comparing two different pipelines here.

TODO

Update Redshift reports to line up with BigQuery

As much as possible, I'd like to make it easy to migrate our Redshift knits over to BigQuery. This includes:

  1. Source table references
  2. Goal references
  3. Changing the revenue plots' source data format

Rework segments implementation

I think now might be a good time to rethink how we apply segments, especially considering how we might do it cost-effectively with BigQuery.