OutlierVentures / QTM-Interface

GNU General Public License v3.0
28 stars 18 forks source link

Post Processing Scaling #17

Closed BlockBoy32 closed 1 year ago

BlockBoy32 commented 1 year ago

Is there a reason that you did each agent's post-processing separately instead of looping through them all and then adding the results to the dictionary?

For example:

```python
team_tokens = agent_ds.map(lambda s: sum([agent['tokens']
                                          for agent
                                          in s.values() if agent['type'] == 'team']))
foundation_tokens = agent_ds.map(lambda s: sum([agent['tokens']
                                                for agent
                                                in s.values() if agent['type'] == 'foundation']))
```

Couldn't all of these per-agent-type metrics be computed with a single for loop iterating over all of the agents, with the results added to the dictionary afterwards?

For example I did this in the user adoption metrics here:

    # ORIGINAL CODE, need to add to the df manually too
    #product_users = user_adoption_ds.map(lambda s: s['product_users'])
    #token_holders = user_adoption_ds.map(lambda s: s['token_holders'])
    #product_revenue = user_adoption_ds.map(lambda s: s['product_revenue'])
    #token_buys = user_adoption_ds.map(lambda s: s['token_buys'])

    # New code, every metric calculation and addition to the dataframe done for you
    for key in user_adoption_ds[0].keys():
        key_values = user_adoption_ds.apply(lambda s: s.get(key))
        data[key] = key_values
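The same single-loop idea could be applied to the per-agent-type token sums above. A minimal sketch, assuming agent states are dicts shaped like `{'type': ..., 'tokens': ...}` as in the snippet at the top (the function name `tokens_by_agent_type` and the example state are hypothetical, not from the repo):

```python
from collections import defaultdict

def tokens_by_agent_type(state):
    """Sum token balances per agent type in one pass over the agents,
    instead of one separate .map call per type."""
    totals = defaultdict(float)
    for agent in state.values():
        totals[agent['type']] += agent['tokens']
    return dict(totals)

# Hypothetical state for a single timestep with three agents
state = {
    'a1': {'type': 'team', 'tokens': 100.0},
    'a2': {'type': 'team', 'tokens': 50.0},
    'a3': {'type': 'foundation', 'tokens': 25.0},
}
print(tokens_by_agent_type(state))  # → {'team': 150.0, 'foundation': 25.0}
```

Mapping this function over the dataset once would produce every type's total, so adding a new agent type needs no new post-processing code.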
achimstruve commented 1 year ago

Absolutely!

I just started these as an example.