Closed colinhanrahan closed 6 months ago
Short answer: Everything is working as intended.
Long answer:
An altruist agent cares only about the consequences for everyone else. Altruist agents should completely disregard their own wellbeing in deference to their neighborhood's wellbeing. This matches a `selfishnessFactor` of 0.

A utilitarian (in the Bentham sense) agent places equal weight on the consequences for all agents. They consider their own consequences, but they don't value them any more than the consequences of another. This matches a `selfishnessFactor` of 0.5 (and also a `selfishnessFactor` of -1 as a backwards-compatibility value).

A purely egoist agent has a `selfishnessFactor` of 1 and concerns themselves only with their own consequences.

A `selfishnessFactor` between 0 and 0.5 gives a progressively less altruistic but still neighborhood-focused agent. A `selfishnessFactor` between 0.5 and 1 gives a progressively more egoistic but still neighborhood-considerate agent.
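To make the intended semantics concrete, the factor can be read as a convex blend of an agent's own score and its neighbors' scores. This is only a sketch (the function name and the mean aggregation are assumptions; the actual code in `ethics.py` may aggregate differently):

```python
def weighted_score(self_score, neighbor_scores, selfishness_factor):
    """Blend an agent's own score with its neighbors' mean score.

    Sketch of the intended selfishnessFactor semantics:
      0   -> pure altruist (own score ignored)
      0.5 -> Bentham utilitarian (equal weight on self and neighborhood)
      1   -> pure egoist (neighbor scores ignored)
    """
    neighbor_mean = sum(neighbor_scores) / len(neighbor_scores)
    return (selfishness_factor * self_score
            + (1 - selfishness_factor) * neighbor_mean)

# An altruist (factor 0) scores purely on the neighborhood mean:
print(weighted_score(10, [2, 4], 0.0))  # -> 3.0
# An egoist (factor 1) scores purely on itself:
print(weighted_score(10, [2, 4], 1.0))  # -> 10.0
```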
I'm not sure the naming and the ethical weighting systems match up. In `ethics.py`, `altruistic*` models multiply their own ethical weights by 0, placing no ethical value on themselves. This seems to line up more with "selflessness" than altruism, which should place equal importance on the wellbeing of every agent, including the current agent. Likewise, `standardbentham*` models multiply all agents' scores, including the current agent's, by a factor of 0.5. This places equal weight on the current agent and other agents, which doesn't seem correct for a "selfishness factor" of 0.5; that sounds like what `altruistic*` should be.

There are two ways to fix this:
1. Remove selflessness. Instead of multiplying `self` scores by `selfishnessFactor` and `neighbor` scores by `(1 - selfishnessFactor)`, only multiply `neighbor` scores by `(1 - selfishnessFactor)` and keep the `self` score the same. `altruistic*` models will be altruistic and `bentham*` models will be slightly selfish, which I think is the intended behavior.
2. Include selflessness (technically the behavior we have right now).
   - Keep the current weighting of `self` vs. `neighbor` scores in `ethics.py`.
   - Add a `selfless*` model with a `selfishnessFactor` of 0 that discards its own value in ethical calculations.
   - Change `altruistic*` models to have a `selfishnessFactor` of 0.5 so they place equal weight on all agents.
   - Change `bentham*` models to have a `selfishnessFactor` of 0.75 so they consider their own outcomes slightly more than other agents.

You could also vary between -1 and 1 for better readability, but we're already using -1 to indicate no selfishness factor.

@nkremerh please let me know what you think.
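For reference, option 1 (scale only the `neighbor` scores and leave the `self` score at full weight) could be sketched as below. The function name and summation are hypothetical; the real `ethics.py` may differ:

```python
def option1_score(self_score, neighbor_scores, selfishness_factor):
    """Option 1 sketch: keep the agent's own score at full weight and
    scale only the neighbor scores by (1 - selfishnessFactor).

    With selfishness_factor = 0, self and each neighbor get equal
    weight (altruism in the 'equal importance' sense); higher factors
    progressively discount the neighbors.
    """
    scaled_neighbors = [(1 - selfishness_factor) * s for s in neighbor_scores]
    return self_score + sum(scaled_neighbors)

# factor 0: self and neighbors weighted equally
print(option1_score(10, [2, 4], 0.0))  # -> 16
# factor 0.5: neighbors half-weighted, so the agent is slightly selfish
print(option1_score(10, [2, 4], 0.5))  # -> 13.0
```

Under this scheme an `altruistic*` model (factor 0) values everyone equally, and a `bentham*` model (factor 0.5) is slightly selfish, matching the intended behavior described in option 1.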