pymc-labs / pymc-marketing

Bayesian marketing toolbox in PyMC. Media Mix (MMM), customer lifetime value (CLV), buy-till-you-die (BTYD) models and more.
https://www.pymc-marketing.io/
Apache License 2.0

CLV API Standardization #527


ColtAllen commented 7 months ago

There are API inconsistencies in the CLV module. Standardization is a big task best broken down into multiple PRs:

PRs to be Completed In-Order

Current API

Beta-Geo/NBD Transactions Model

rfm_data = pd.DataFrame(
    {
        "customer_id": customer_id,
        "frequency": frequency,
        "recency": recency,
        "T": T,
    }
)

model = BetaGeoModel(data=rfm_data)
model.build_model()
model.fit()

model.expected_num_purchases(
    customer_id=rfm_data["customer_id"],
    t=10,
    frequency=rfm_data["frequency"],
    recency=rfm_data["recency"],
    T=rfm_data["T"],
)

Note how the fit data is provided as a dataframe, but the predictive methods require individual arrays. Specifying one array at a time was one of the most annoying aspects of using the legacy lifetimes library, and it sometimes even created indexing issues that caused the underlying scipy functions to crash.
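To make the indexing hazard concrete, here is a toy illustration (not the library API) of how passing columns one at a time lets a filtered column silently misalign with an unfiltered one:

```python
import pandas as pd

rfm = pd.DataFrame({"frequency": [2, 5], "recency": [10, 20], "T": [30, 40]})
active = rfm[rfm["frequency"] > 3]  # filtering keeps the original row labels

freq = active["frequency"]  # 1 row, index [1]
rec = rfm["recency"]        # 2 rows, index [0, 1]

# pandas aligns on index labels, silently producing NaN for the missing row;
# converting to numpy instead would surface a shape mismatch downstream.
misaligned = freq * rec
```

Passing a single dataframe sidesteps this class of bug entirely, since all columns share one index by construction.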

For ParetoNBDModel, I streamlined this nonsense with a dataframe argument, and made it optional when running predictions on the fit dataset:

Pareto/NBD Transactions Model

rfm_data = pd.DataFrame(
    {
        "customer_id": customer_id,
        "frequency": frequency,
        "recency": recency,
        "T": T,
    }
)

model = ParetoNBDModel(data=rfm_data)
model.build_model()
model.fit()

# Data param is optional and only required for out-of-sample data
model.expected_purchases(future_t=10)

model.expected_purchases(
    data=future_rfm_df,
    future_t=10,
)
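The optional-`data` convention can be sketched as follows; this is a hypothetical minimal pattern (class name and placeholder math are illustrative), not the real ParetoNBDModel implementation:

```python
import pandas as pd

class TransactionModelSketch:
    """Hypothetical sketch of the optional-data convention."""

    def __init__(self, data: pd.DataFrame):
        self.data = data  # keep the fit dataset for in-sample predictions

    def expected_purchases(self, data=None, *, future_t):
        df = self.data if data is None else data  # default to in-sample data
        # stand-in for the real predictive math: one value per customer row
        return pd.Series(float(future_t), index=df.index)

model = TransactionModelSketch(pd.DataFrame({"frequency": [1, 2, 3]}))
in_sample = model.expected_purchases(future_t=10)                       # fit data
out_of_sample = model.expected_purchases(
    pd.DataFrame({"frequency": [4]}), future_t=10                       # new data
)
```

The key design point is that the fit dataframe is retained on the model, so predictions align with it by default and out-of-sample data only needs to be supplied explicitly.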

(We will also need to resolve the naming inconsistencies between these models.)

I've been told passing in dataframes instead of arrays loses some xarray broadcasting functionality, which I'd be interested to hear more about. I'm not opposed to arrays being passed in provided it's optional for in-sample data.

The API discrepancies between these models necessitated a hotfix for the monetary value model, which follows the same conventions as BetaGeoModel:

Gamma/Gamma Monetary Value Model

monetary_data = pd.DataFrame(
    {
        "customer_id": customer_id,
        "mean_transaction_value": monetary_value,
        "frequency": frequency,
    }
)

model = GammaGammaModel(data=monetary_data)
model.build_model()
model.fit()

model.expected_customer_lifetime_value(
    transaction_model=transaction_model,
    customer_id=rfm_data["customer_id"],
    mean_transaction_value=rfm_data["monetary_value"],
    frequency=rfm_data["frequency"],
    recency=rfm_data["recency"],
    T=rfm_data["T"],
    time=12,
    discount_rate=0.01,
    freq="W",
)
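For context on the `time` and `discount_rate` arguments: lifetime value is the usual discounted sum of expected per-period value. A hedged sketch of that idea (the helper name and constant per-period value are illustrative, not the library's actual math):

```python
def discounted_clv_sketch(value_per_period, periods=12, discount_rate=0.01):
    """Sum expected per-period value, discounted back to today (illustrative)."""
    return sum(
        value_per_period / (1 + discount_rate) ** t
        for t in range(1, periods + 1)
    )

# constant expected value of 10 per period, over 12 periods at 1% per period
clv = discounted_clv_sketch(10.0)
```

Discounting is why the result is strictly less than the undiscounted total (120 here) whenever `discount_rate > 0`.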

Lastly, ShiftedBetaGeoModelIndividual is a whole different animal since it handles contractual transactions, but I think it'd be a good idea to add support for it to the customer_lifetime_value utility:

Shifted Beta-Geo Contractual Model

contract_data = pd.DataFrame(
    {
        "customer_id": customer_id,
        "t_churn": t_churn,
        "T": T,
    }
)

model = ShiftedBetaGeoModelIndividual(data=contract_data)
model.build_model()
model.fit()

model.distribution_customer_churn_time(customer_id=contract_data["customer_id"])
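In the shifted Beta-Geo story, each customer churns with some per-period probability, so an individual customer's churn time is shifted geometric. A toy sketch of that distribution (separate from the library's implementation, which places a Beta prior over the churn probability):

```python
def churn_time_pmf(theta, t):
    """P(churn at period t) for a customer with per-period churn prob theta."""
    return theta * (1.0 - theta) ** (t - 1)

# the pmf sums to 1 over the geometric support t = 1, 2, ...
total = sum(churn_time_pmf(0.3, t) for t in range(1, 200))
```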

ricardoV94 commented 7 months ago

I think the best approach would be to work with xarray Datasets. They have the organizational benefits of pandas with the broadcasting behavior of numpy. Internally, most predictive methods are already written with xarray code anyway. Users could pass pandas dataframes and we would convert them to xarray, but the default type, which needs no conversion, would be xarray.

Definitely agree that passing separate numpy arrays is too cumbersome.
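A small sketch of the Dataset approach described above, showing the broadcasting benefit over plain dataframes (the column names here are illustrative):

```python
import pandas as pd
import xarray as xr

rfm = pd.DataFrame(
    {"customer_id": [1, 2, 3], "frequency": [2, 5, 1]}
).set_index("customer_id")
ds = xr.Dataset.from_dataframe(rfm)  # columns become variables over "customer_id"

t = xr.DataArray([10, 20], dims="t")  # a second dimension of time horizons
# xarray broadcasts by dimension name: (customer_id,) x (t,) -> (customer_id, t)
grid = ds["frequency"] * t
```

With plain dataframes, producing this customer-by-horizon grid would require manual tiling or an explicit cross join; named-dimension broadcasting gives it for free.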

ColtAllen commented 4 months ago

Updated original comment with list of PRs to complete.

wd60622 commented 4 months ago

Is t / future_t meant to be vectorized in the API? I think the previous implementation accepted either a scalar or a vector the same size as the other inputs.

ColtAllen commented 4 months ago

Is t / future_t meant to be vectorized in the API? I think the previous implementation accepted either a scalar or a vector the same size as the other inputs.

Both forms of parametrization (vectorized or scalar) are supported:

# scalar parametrization (here predictions are run on the in-sample data)
model.expected_purchases(future_t=10)

# equivalent vectorized parametrization
data = data.assign(future_t=10)
model.expected_purchases(data)

Vectorization support was added to facilitate xarray inputs in the future.
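The equivalence of the two forms rests on pandas scalar broadcasting in `assign`; a quick illustration with made-up columns:

```python
import pandas as pd

data = pd.DataFrame({"customer_id": [1, 2, 3], "frequency": [2, 5, 1]})
vectorized = data.assign(future_t=10)  # the scalar broadcasts to every row
```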

ColtAllen commented 3 months ago

Steps 4-6 (along with adding CLV support for ShiftedBetaGeoModelIndividual) are out of scope here and will be given their own issues after https://github.com/pymc-labs/pymc-marketing/pull/758 is merged.