ml4ai / delphi

Framework for assembling causal probabilistic models from text and software.
http://ml4ai.github.io/delphi
Apache License 2.0

Update generated lambdas to allow for batch processing #214

Closed pauldhein closed 4 years ago

pauldhein commented 5 years ago

Background

Currently, a GrFN is specified by a set of lambdas plus a JSON file, both created via my GroundedFunctionNetwork class. This GrFN processes a single set of inputs at a time, and our system takes about one second to evaluate ~1000 samples through a GrFN.

Problem

Unfortunately, model analysis requires evaluating far more samples, and much faster, than the current one-sample-at-a-time lambdas allow.

Solution

Update the lambdas to allow for batched processing of samples through the generated GrFN.

Method

Original PETPT Lambdas

import math

def petpt__assign__td_1(tmax, tmin):
    return ((0.6*tmax)+(0.4*tmin))

def petpt__condition__IF_1_0(xhlai):
    return (xhlai<=0.0)

def petpt__assign__albedo_1(msalb):
    return msalb

def petpt__assign__albedo_2(msalb, xhlai):
    return (0.23-((0.23-msalb)*math.exp(-((0.75*xhlai)))))

def petpt__decision__albedo_3(IF_1_0, albedo_2, albedo_1):
    return albedo_1 if IF_1_0 else albedo_2

def petpt__assign__slang_1(srad):
    return (srad*23.923)

def petpt__assign__eeq_1(slang, albedo, td):
    return ((slang*(0.000204-(0.000183*albedo)))*(td+29.0))

def petpt__assign__eo_0(eeq):
    return (eeq*1.1)

def petpt__condition__IF_2_0(tmax):
    return (tmax>35.0)

def petpt__assign__eo_1(eeq, tmax):
    return (eeq*(((tmax-35.0)*0.05)+1.1))

def petpt__condition__IF_3_0(tmax):
    return (tmax<5.0)

def petpt__assign__eo_2(eeq, tmax):
    return ((eeq*0.01)*math.exp((0.18*(tmax+20.0))))

def petpt__decision__eo_3(IF_2_0, eo_0, eo_1):
    return eo_1 if IF_2_0 else eo_0

def petpt__decision__eo_4(IF_3_0, eo_2, eo_3):
    return eo_2 if IF_3_0 else eo_3

def petpt__assign__eo_5(eo):
    return max(eo, 0.0001)
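For context, evaluating one sample means calling these lambdas in dependency order. The actual wiring lives in the generated JSON, which isn't shown here, so the `petpt` driver below is a sketch reconstructed from the variable names, with the lambdas reproduced from the listing above:

```python
import math

# Generated lambdas, reproduced from the listing above
def petpt__assign__td_1(tmax, tmin): return (0.6*tmax)+(0.4*tmin)
def petpt__condition__IF_1_0(xhlai): return xhlai <= 0.0
def petpt__assign__albedo_1(msalb): return msalb
def petpt__assign__albedo_2(msalb, xhlai):
    return 0.23-((0.23-msalb)*math.exp(-(0.75*xhlai)))
def petpt__decision__albedo_3(IF_1_0, albedo_2, albedo_1):
    return albedo_1 if IF_1_0 else albedo_2
def petpt__assign__slang_1(srad): return srad*23.923
def petpt__assign__eeq_1(slang, albedo, td):
    return (slang*(0.000204-(0.000183*albedo)))*(td+29.0)
def petpt__assign__eo_0(eeq): return eeq*1.1
def petpt__condition__IF_2_0(tmax): return tmax > 35.0
def petpt__assign__eo_1(eeq, tmax): return eeq*(((tmax-35.0)*0.05)+1.1)
def petpt__condition__IF_3_0(tmax): return tmax < 5.0
def petpt__assign__eo_2(eeq, tmax):
    return (eeq*0.01)*math.exp(0.18*(tmax+20.0))
def petpt__decision__eo_3(IF_2_0, eo_0, eo_1): return eo_1 if IF_2_0 else eo_0
def petpt__decision__eo_4(IF_3_0, eo_2, eo_3): return eo_2 if IF_3_0 else eo_3
def petpt__assign__eo_5(eo): return max(eo, 0.0001)

def petpt(msalb, srad, tmax, tmin, xhlai):
    """Evaluate one sample by chaining the lambdas in dependency order."""
    td = petpt__assign__td_1(tmax, tmin)
    albedo = petpt__decision__albedo_3(
        petpt__condition__IF_1_0(xhlai),
        petpt__assign__albedo_2(msalb, xhlai),
        petpt__assign__albedo_1(msalb),
    )
    slang = petpt__assign__slang_1(srad)
    eeq = petpt__assign__eeq_1(slang, albedo, td)
    eo = petpt__decision__eo_3(
        petpt__condition__IF_2_0(tmax),
        petpt__assign__eo_0(eeq),
        petpt__assign__eo_1(eeq, tmax),
    )
    eo = petpt__decision__eo_4(
        petpt__condition__IF_3_0(tmax),
        petpt__assign__eo_2(eeq, tmax),
        eo,
    )
    return petpt__assign__eo_5(eo)
```

Batch processing with these lambdas means calling `petpt` once per sample in a Python loop, which is exactly the per-sample interpreter overhead the torch rewrite eliminates.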

Torch variant of PETPT Lambdas

import torch

def petpt__assign__td_1(tmax, tmin):
    return ((0.6*tmax)+(0.4*tmin))

def petpt__decision__albedo_1(xhlai, msalb):
    return torch.where(
        xhlai <= 0.0,
        0.23-((0.23-msalb)*torch.exp(-((0.75*xhlai)))),
        msalb
    )

def petpt__assign__slang_1(srad):
    return srad*23.923

def petpt__assign__eeq_1(slang, albedo, td):
    return (slang*(0.000204-(0.000183*albedo)))*(td+29.0)

def petpt__assign__eo_0(eeq):
    return eeq*1.1

def petpt__decision__eo_1(tmax, eeq, eo_0):
    return torch.where(
        tmax > 35.0,
        eeq*(((tmax-35.0)*0.05)+1.1),
        eo_0
    )

def petpt__decision__eo_2(tmax, eeq, eo_1):
    return torch.where(
        tmax < 5.0,
        (eeq*0.01)*torch.exp((0.18*(tmax+20.0))),
        eo_1
    )

def petpt__assign__eo_3(eo):
    eo = eo.reshape((len(eo), 1))
    # torch.max with a dim argument returns a (values, indices) namedtuple,
    # so take .values to get the elementwise maxima as a tensor.
    return torch.max(
        torch.cat((eo, torch.full_like(eo, 0.0001)), dim=1),
        dim=1
    ).values
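The payoff of this rewrite is that every lambda now accepts whole tensors of samples at once, with the conditionals folded into elementwise selections. The pattern can be sketched with NumPy standing in for torch (an assumption for illustration only; `np.where` mirrors `torch.where` semantics) on a batch of two samples:

```python
import numpy as np

def eo_decision(tmax, eeq, eo_0):
    # Both branches are computed elementwise over the whole batch, then
    # np.where selects per sample -- the same shape as the torch.where
    # variant of petpt__decision__eo_1 above.
    return np.where(tmax > 35.0, eeq*(((tmax-35.0)*0.05)+1.1), eo_0)

tmax = np.array([30.0, 40.0])   # one batch, two samples
eeq = np.array([3.0, 3.0])
eo_0 = eeq * 1.1                # eo_0 branch for every sample
out = eo_decision(tmax, eeq, eo_0)
# sample 0 keeps the eo_0 branch (3.3); sample 1 takes the tmax>35
# branch: 3.0 * ((5.0*0.05)+1.1) = 4.05
```

One `eo_decision` call replaces a Python loop over samples, so the batch size can grow without adding interpreter overhead.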

@pratikbhd, @skdebray, @cl4yton let's be sure to discuss this at the PA meeting today.

adarshp commented 5 years ago

I'd like to preserve the option to generate the lambdas as they are right now as well (perhaps with a torch=False kwarg to the GrFNGenerator class instantiation); the current version is extremely easy to convert to LaTeX via SymPy.
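For illustration, that conversion works because the generated lambdas are plain arithmetic, so calling one with SymPy symbols yields a symbolic expression directly (a sketch, not delphi's actual pipeline; lambdas that call math.exp would first need exp swapped for sympy.exp):

```python
import sympy

# Generated lambda, reproduced from the listing above
def petpt__assign__td_1(tmax, tmin):
    return ((0.6*tmax)+(0.4*tmin))

# Passing symbols instead of floats produces a SymPy expression,
# which sympy.latex renders as a LaTeX string.
tmax, tmin = sympy.symbols("tmax tmin")
expr = petpt__assign__td_1(tmax, tmin)
latex_str = sympy.latex(expr)
```

The torch variant breaks this trick, since `torch.where` cannot be applied to SymPy symbols, which is one reason to keep the plain-Python generator around.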

adarshp commented 4 years ago

@pauldhein I'm going to mark this as closed by #428 .