Some initial testing on 50 seeds for 1k timesteps using an otherwise default configuration:
Altruism behavior matches exactly when the Altruist class methods and a Bentham agent with a selfishnessFactor of 0 are compared head-to-head.
Egoism behavior closely matches (but not exactly) when the Egoist class methods and a Bentham agent with a selfishnessFactor of 1 are compared head-to-head. The trends of the resulting graphs after data collection roughly match each other, but the Bentham agents generally do slightly better (the utility calculation differs slightly between the Bentham and Egoist classes).
A Bentham agent with a selfishnessFactor of 0.5 matches exactly with a Bentham agent with a selfishnessFactor of -1 (our prior version compatibility case).
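The exact 0.5 vs. -1 match is plausible if agents simply pick their highest-scoring option: a 50/50 blend is a positive rescaling of the unweighted sum, so the ranking (and therefore the choice) is unchanged. A quick sketch of that equivalence (the weighting formula here is an assumption about how selfishnessFactor is applied, not the repo's actual code):

```python
options = [(4.0, 1.0), (2.0, 5.0), (3.0, 3.0)]  # (selfUtility, othersUtility) per option

unweighted = [s + o for s, o in options]           # hypothetical prior (-1) behavior
blended = [0.5 * s + 0.5 * o for s, o in options]  # selfishnessFactor of 0.5

# 0.5 * (s + o) rescales s + o by a positive constant, so the best option is the same
assert unweighted.index(max(unweighted)) == blended.index(max(blended))
```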
I'd like to see what 50 seeds over 10k timesteps looks like between the Egoist class and a Bentham agent with a selfishnessFactor of 1 to get a clearer picture of how much deviation is occurring. If it's matching pretty tightly, we might be able to move to just using selfishnessFactor for the consequentialist ethical theories (particularly those inspired by Jeremy Bentham's approach).
Testing on 50 seeds for 10k timesteps:
The graphs look nearly the same as those from the summer's data collection. It looks like selfishnessFactor is still working and is the correct mechanism to use. I'll be pushing my changes to address this from the simulation end.
Next step: gather thoughts from @WillemHueffed and @mmilkowski36 on how to adjust the data collection (data/run.py) to this change.
There is a lot of repetitive code for the three flavors of decision model currently implemented (Bentham, Altruist, and Egoist). It would be preferable if there were only one flavor of decision model (Bentham) with some extra configuration options to make it behave in more altruistic or egoistic ways.
Currently, the selfishnessFactor attribute is intended to provide this functionality. It has been a while since that was tested, and with recent changes to how decision models are implemented, this is an ideal time to see whether that functionality still works as intended (i.e., that an agent with a selfishnessFactor of 0 behaves in a completely altruistic fashion, a factor of 1 behaves as a complete egoist, etc.). If so, the Egoist and Altruist decision models can go away in favor of using selfishnessFactor to provide the same behavior in fewer lines of code.
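As a rough sketch of what that consolidation could look like (the class name, method, and utility terms below are illustrative assumptions; only selfishnessFactor comes from the repo):

```python
class BenthamAgent:
    """One decision model spanning the altruist-egoist spectrum."""

    def __init__(self, selfishnessFactor):
        self.selfishnessFactor = selfishnessFactor

    def score(self, selfUtility, othersUtility):
        # A negative factor could preserve the prior unweighted behavior
        # (the backward-compatibility case tested above)
        if self.selfishnessFactor < 0:
            return selfUtility + othersUtility
        # 1 weighs only the agent's own utility (egoist),
        # 0 weighs only other agents' utility (altruist)
        return (self.selfishnessFactor * selfUtility
                + (1 - self.selfishnessFactor) * othersUtility)

# The separate Altruist and Egoist classes then reduce to configurations:
altruist = BenthamAgent(selfishnessFactor=0)
egoist = BenthamAgent(selfishnessFactor=1)
```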
This will eventually require modifying the data collection process (in the configuration and the run.py file in the data directory) since that pipeline currently operates over named decision models rather than different selfishnessFactor values.
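One possible shape for that change is a sweep over selfishnessFactor values in place of a list of named decision models (a sketch only; the configuration keys and values below are assumptions, not the actual run.py schema):

```python
import json

# Illustrative sweep for data/run.py: iterate over selfishnessFactor values
# instead of named decision models
SELFISHNESS_FACTORS = [0.0, 0.25, 0.5, 0.75, 1.0]
SEEDS = range(50)

jobs = [
    {
        "agentDecisionModel": "bentham",
        "agentSelfishnessFactor": factor,
        "seed": seed,
        "timesteps": 10000,
    }
    for factor in SELFISHNESS_FACTORS
    for seed in SEEDS
]

print(json.dumps(jobs[0], indent=2))  # preview one generated configuration
```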
@mmilkowski36 is currently wrestling with a similar problem with data collection for UBI, so there may be some lessons learned for this issue once she completes that work.