Although it might be a bit off topic, Turf looks elegant in terms of its class hierarchy. And your idea of Position seems to fit into their Point type.
As you mentioned type annotations, I am also thinking of using Cython to enforce static type checking for better error handling and performance. Although it might mean more work, I think the benefits are also very obvious.
How would a network be a subtype of _AbstractSpace? What would the Position be?
I am trying to implement a new class SquareGrid (partly) using the interface you proposed here. While it looks more elegant than the current one, I also encounter some nasty problems fitting MultiGrid and SingleGrid into one class. Of course an approach similar to the one used before could be used to integrate them into one class, but a more general idea behind it, like a multiplex network, could make later modifications a bit painful as well (if we ever want to implement a multiplex network).
I tried to google for definitions and applications of "MultiGrid", but all I found was an example model from the mesa project. Therefore, I feel a simpler solution would be enforcing a grid to be a SingleGrid, and in a network, each node can only represent a single agent. We can think of MultiGrid as multiple layers of SingleGrid, and a multiplex network would of course be multiple layers of networks. This would break the current visualization module, but this approach seems to be more scalable and general.
What do you think? I am hesitant to push my code for the moment because it seems that you are looking for a once-and-for-all solution; should we first determine a general hierarchy (which possibly influences other modules as well)?
P.S. I do not quite get the idea of separating Content and Agent. In which specific space type do you think it can be used?
I do not quite get the idea of separating Content and Agent. In which specific space type do you think it can be used?
It's about terminology and flexibility. There are some definitions that say "everything in an ABM is an agent", but there are different views as well. For example Netlogo has "patches" which are like agents, but are stationary and used to model "alive" background (e.g. growing grass). But one could use "content" to hold background information that is otherwise useful ("this is a street") or purely cosmetic ("this should be drawn in yellow").
I also encounter some nasty problems fitting MultiGrid and SingleGrid into one class. [...] Therefore, I feel a simpler solution would be enforcing a grid to be a SingleGrid, and in a network, each node can only represent a single agent. We can think of MultiGrid as multiple layers of SingleGrid.
Yes, I also came to the conclusion that Multi- and SingleGrid don't fit well into a single Grid class. However, my conclusion was that MultiGrid would be the general case and SingleGrid a special case, because the latter somewhat limits the generic case.
However, I really like the idea of several layers. This could allow us to have background layer(s), each agent class on its own layer (which could be useful for some applications), and also to mix different space types, primarily thinking of grids and networks.
How would a network be a subtype of _AbstractSpace? What would the Position be?
Position should just mean anything that identifies the position. For networks it could simply be the node_id.
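For illustration only (none of this is settled API), a minimal sketch of a network space where the Position is just the node id, assuming a networkx graph and hypothetical method names like agents_at and neighborhood_at:

# Hypothetical sketch: for a network, a Position can simply be a node id.
from typing import Any, Dict, Iterator, List

import networkx as nx

Position = Any  # for networks: a node id (int, str, ...)


class NetworkSpace:
    def __init__(self, graph: nx.Graph) -> None:
        self.graph = graph
        self._agents: Dict[Position, List[Any]] = {node: [] for node in graph.nodes}

    def place_agent(self, agent: Any, pos: Position) -> None:
        # pos is just a node id, no coordinate tuple needed
        self._agents[pos].append(agent)

    def agents_at(self, pos: Position) -> List[Any]:
        return list(self._agents[pos])

    def neighborhood_at(self, pos: Position) -> Iterator[Position]:
        # the neighborhood of a node id is simply its adjacent node ids
        return self.graph.neighbors(pos)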
I just recently began playing with Mesa and have been thinking about (1) performance and (2) organizing content for both ease and (potentially) performance. I like both the ideas of separating "content" from agents, and of layers.
Specifically, I was playing with changing Grid to use a Numpy ndarray with dtype='object' and then updating the neighbor(hood) methods accordingly (amongst other things, iterators become somewhat less natural). Basing Grid on a dict makes a lot of sense, too (and eliminates potentially awkward mappings from e.g. network positions to tuples/2D coordinates, though this is not an issue if Grids are treated as separate from Networks, merely sharing a Position as their index type). In any case, is performance a primary concern? Is there a desire/design principle in the Mesa project to limit dependencies on e.g. Numpy?
(And I realize "is performance a concern" is a loaded question, and I don't mean it to be aggressively so.)
I suspect this ties into the content question because, a la NetLogo patches, a grid of "grass", "sugar", etc., i.e. non-agent content, could have a different contract than step. E.g. "sugar" where for each tick every cell is incremented by (or reset to) 1, or a resource that can be mined by agents but is otherwise passive (e.g. "iron"). On a 1000x1000 grid it begins to make a big difference whether this is a million function calls or one call that updates/outputs a 1000x1000 grid (especially if that is done via e.g. Numpy).
I like layers for similar reasons, but I also feel like I will have to play with an implementation first. (E.g. it would be easy enough to say a coordinate may have at most 1 wolf and 1 sheep, but more of a pain to say "at most 2 animals." That is easy enough, but maybe there are other complicating scenarios.) The layers also open up the possibility of splitting implementations into "dense" and "sparse" versions, where the answer to "performance" might be different (dicts for sparse agents, but ndarrays for dense resources/patches).
I would add one more possible extension to mesa that makes performance a serious issue: simulating the model 10000+ times and collecting the distribution of some model/agent attributes. I have recently implemented a stochastic model that needs such analysis, and dropping mesa and using vectorization would speed up the program around 10000 times asymptotically.
This example is totally off topic... so I guess I'll just keep experimenting with my current ideas and see how it turns out.
Thanks for joining the discussion @dcunning11235
Specifically, I was playing with changing Grid to use a Numpy ndarray with dtype='object' and then updating the neighbor(hood) methods accordingly (amongst other things, iterators become somewhat less natural).
Be aware that, as far as I know, using numpy with dtype='object' is basically using a less efficient list. Just using numpy doesn't involve any magic; the real speed-ups come from using fixed dtypes. I don't want to discourage you, and I am not familiar enough with numpy to definitely say it's not worth the effort, but making efficient use of numpy is probably quite involved. At least for small grids my gut feeling tells me that the performance for simple things like getting agents at a specific position is probably slower with a numpy array. Not sure how it scales, though, and not sure about more complicated things (depends on built-in functions for numpy, I guess).
But as you said, for a background layer it would probably make a lot of sense. I like the idea of layers more and more, and for (numerical) background layers numpy would definitely be the best option (and btw we are not limited in terms of dependencies here - ContinuousSpace is already based on numpy, for performance reasons).
But I am not sure how the interface for layers would look. Without thinking too much about it, I would say it would be best to have a LayeredSpace "on top" that lets you define multiple subspaces and makes it somewhat convenient to access them simultaneously. But of course this shouldn't be forced, and simple models should work as usual without layering, using a single space. That way we could narrow my proposed AbstractSpace to only hold Agents (i.e. remove Content) and implement the Layer class separately. But this is just the first thing that came to my mind.
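Just to make that first thought a bit more concrete, a very rough sketch of such a wrapper (all names hypothetical, nothing here is settled):

# Hypothetical sketch of a LayeredSpace wrapping named subspaces.
from typing import Any, Dict


class LayeredSpace:
    def __init__(self, **layers: Any) -> None:
        # each layer is itself a space (Grid, Network, numpy-backed patch layer, ...)
        self.layers: Dict[str, Any] = dict(layers)

    def __getitem__(self, name: str) -> Any:
        # access a single layer by name, e.g. space["animals"]
        return self.layers[name]

    def agents_at(self, pos: Any) -> Dict[str, Any]:
        # convenience: query all agent-holding layers at once
        return {
            name: layer.agents_at(pos)
            for name, layer in self.layers.items()
            if hasattr(layer, "agents_at")
        }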
(And I realize "is performance a concern" is a loaded question, and I don't mean it to be aggressively so.)
(quoting this to also address @kumom's remark) I think performance is an obviously important topic for any ABM. But it is also important to remember that mesa is primarily a framework. It shouldn't produce any bottlenecks, but I would guess the performance/runtime of any model with at least some complexity is probably not limited by the mesa functionality. You would have to call a lot of grid functions and not do much else to see a limit here.
Be aware that, as far as I know, using numpy with dtype='object' is basically using a less efficient list.
I was not aware of this. To the extent I had thought about it, I assumed it would be no worse than accessing lists of lists. Using Numpy indexing for neighborhoods means that the position-offset ndarray (e.g. all offsets (-1,-1) to (1,1), minus the center, for a Moore radius) can be cached and then reused by broadcasting the addition of the center position to get the specific neighborhood around a point. I am implicitly assuming most (or all) neighborhood lookups are going to be the same (same radius, same radius type), and that lazy caching and reuse is appropriate (e.g. if there are two kinds of neighborhood lookups, they are not interleaved). This does, in the Boltzmann wealth case, speed things up: I see a top-level speed increase of ~4x.
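Stripped to its core, the caching idea looks roughly like this (a simplified sketch, not the actual fork code):

# Simplified sketch of offset caching plus broadcasting for Moore neighborhoods.
import numpy as np

_offset_cache = {}

def moore_neighborhood(pos, radius, width, height):
    if radius not in _offset_cache:
        # all offsets from (-radius, -radius) to (radius, radius), minus the center
        r = np.arange(-radius, radius + 1)
        offsets = np.stack(np.meshgrid(r, r), axis=-1).reshape(-1, 2)
        _offset_cache[radius] = offsets[~np.all(offsets == 0, axis=1)]
    # broadcasting adds the center position to every cached offset at once
    coords = _offset_cache[radius] + np.asarray(pos)
    # wrap around for a torus grid
    coords[:, 0] %= width
    coords[:, 1] %= height
    return coords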
That said...
but I would guess the performance/runtime of any model with at least some complexity is probably not limited by the mesa functionality ... You would have to call a lot of grid functions and not do much else to see a limit here.
You raise a good point. Using the Boltzmann wealth model was meant to be an easy way for me to test what total difference the changes I made were making. But a couple of integer ops is essentially the minimum an agent could do and still be doing anything. So the question is when those neighborhood operations "count" in more general or "real" models/agents... and I have no idea. I'm still at the level of playing with toy, "classic" examples.
Anyway, only semi-on-topic.
But I am not sure how the interface for layers would look...
I'm thinking about this, too, but will put this in a separate post.
This does, in the Boltzmann wealth case, speed things up: I see a top-level speed increase of ~4x.
You have my curiosity now, can you share your code? :)
I did some testing, and for a simple object access a list of lists is indeed the fastest solution. On my desktop at work it was about twice as fast, but on my personal laptop the speed is actually almost the same. I guess most of the speed difference you see is simply due to getting all the neighbors at once, so I guess you are slicing the numpy grid (e.g. grid[5:8, 10:12]), whereas the current implementation would access the grid 9 times. And that is something that cannot be done with list-of-lists (the example before would at best equal [row[10:12] for row in grid[5:8]]). The current implementation also does some out-of-bounds and other calculations; I am wondering if you also have these included in your version? And finally, the current implementation accesses the grid with Grid[x][y] instead of Grid._grid[x][y], which alone doubles the access time.
So there is quite a lot of room for premature optimization, but my personal bottom line is this: List-of-Lists are surprisingly effective, but numpy could still beat it. And my dictionary-based approach is somewhat elegant and memory friendly, but otherwise comparatively slow.
It is all on a fork, which you should be able to see here. What's there should run. (Note that in the same fork I started working on a spacialresources.py and new_space.py in the last couple of days, both of which almost certainly don't even parse without errors.)
The highlights:
In Grid, change grid to be an ndarray, make empties a dict, and add members for lazy caching:
self.grid = np.empty((self.width, self.height), dtype='object')
# Add all cells to the empties dict (keys used as a set).
self.empties = dict.fromkeys(itertools.product(*(range(self.width), range(self.height))))
self.last_radius = 0
self.last_moore = True
self.last_include_center = True
self.last_neighborhood_block = None
Moved all logic for getting a neighborhood into a new function:
def _get_neighborhood(self, pos, moore, include_center, radius):
    minx, maxx, miny, maxy, on_edge = self._get_bounds(pos, radius)
    # Reuse the cached offset block if the parameters match and we are not on
    # an edge (where clipping/wrapping would change the result).
    if not on_edge and self.last_neighborhood_block is not None and \
            self.last_moore == moore and self.last_radius == radius and \
            self.last_include_center == include_center:
        inds = self.last_neighborhood_block + pos
    else:
        x = np.repeat(np.arange(minx, maxx), maxy - miny)
        y = np.tile(np.arange(miny, maxy), maxx - minx)
        inds = np.stack((x.T, y.T), 1)
        if not moore:
            manhattans = np.sum(np.abs(inds - pos), axis=1)
            inds = inds[manhattans <= radius]
        if len(inds) > 0 and not include_center:
            inds = np.delete(inds, len(inds) // 2, axis=0)
        if self.torus:
            inds[:, 0] = np.remainder(inds[:, 0], self.width)
            inds[:, 1] = np.remainder(inds[:, 1], self.height)
        # Cache the block of offsets (relative to pos) for the next lookup
        # with the same parameters.
        if not on_edge and ((self.last_moore != moore or \
                             self.last_radius != radius or \
                             self.last_include_center != include_center) or \
                            self.last_neighborhood_block is None):
            self.last_radius = radius
            self.last_include_center = include_center
            self.last_moore = moore
            self.last_neighborhood_block = inds - pos
    if inds.size > 0:
        inds = np.unique(inds, axis=0)
    return inds
And then adjusted other functions: accessing the ndarray instead of nested lists, using _get_neighborhood everywhere, including making iterators iterate over its return (which is kludgy and ugly, but I wanted to keep the interface the same). I think SingleGrid escaped any changes, but I had to touch MultiGrid just around accessing the ndarray.
I made some notes when checking in: a 2.6x speed increase with just the grid (that is execution clock-time, but on a machine with an otherwise constant load), and then another 33% with caching, which is... 3.5x. But I was consistently getting execution times of ~2'40" with Mesa-as-is and ~40" with my mods, so a total of 4x. This is all based on some poking around I did spring/summer of last year but never followed up on (moved houses, had a baby, etc.) With the changes being discussed now a la _AbstractSpace, layers, storing in a dict, I'm happy to hit pause again and see where this would fit into a redesigned/expanded set of interfaces (and spend some time thinking/working on those instead).
Quite interesting! As you commented in your fork, the interaction with the current grid is quite quirky, and I agree that an additional "pure numpy" grid would probably be best. So this goes back to the actual discussion of a space API. But let me just note again that this idea somewhat came up in my mind and hasn't been discussed with any of the other mesa maintainers (I am notoriously missing the dev meetings). But it is not like this is definitely going to happen, and the interface isn't fixed whatsoever. And with your approach iterators don't really make sense (they would just exist for the sake of being iterators).
So a simplified API could be something like this (without contents_at, and agents_at now accepts a list of positions, since single positions can be queried directly as Space[pos]):
Content = Union[Agent, List[Agent]]

@abstractmethod
def agents_at(self, pos: Iterable[Position]) -> List[Agent]:
    """Return the agents at specific positions."""

@abstractmethod
def neighbors_of(self, agent: Agent) -> List[Agent]:
    """Return the neighbors of an Agent."""

@abstractmethod
def neighborhood_at(self, pos: Position, include_own: bool = True) -> List[Position]:
    """Return the neighborhood positions at a Position."""

@abstractmethod
def neighborhood_of(self, agent: Agent, include_own: bool = True) -> List[Tuple[Position, Content]]:
    """Return the neighborhood of an agent."""
I think lists are also more "beginner friendly" to handle than iterators. I would still appreciate iterators, but they could be included additionally. We could even encourage them by adding the methods to AbstractSpace, but without the @abstractmethod annotation (i.e. they are not required to be implemented for each space).
But we should probably already have an idea for the LayeredSpace to make sure both interact nicely together (I saw some notes regarding this in the fork of @dcunning11235)
RE: premature optimization
I just ran a profile run (%prun model.step() in ipython/jupyter) for the MoneyModel. A hundred steps took 68 seconds, of which 60 (!!) seconds were spent on removing the position from the empties list... iter_neighborhood took only 3.6 seconds. I guess that was the real game changer in your numpy implementation @dcunning11235 :-/
So just changing the empties from a list to a set brings down the total model run to 7 seconds.
Good news for mesa, but I guess bad news for further optimization. I will submit a PR for the change soonish.
A hundred steps took 68 seconds, of which 60 (!!) seconds were spent on removing the position from the empties list.
I ran %timeit with my numpy code using a self.empties of type set, list, and empty/keys-only dict as I had. I get fractions of a second difference.
I ran %prun and I do see the huge chunk of time going to list.remove. I'm not super familiar with the output format of prun (and I should get back to actual work), but I think that must be the total time for all list.removes (the empties list, the grid itself, any other lists running around). In any case, changing self.empties from a list to a set (or vice versa) makes little difference for me.
Here are my actual results (numpy-based, altering only the self.empties type):

dict: %timeit for i in range(100): model.step(): 26.4s +- 0.31s
list: %timeit for i in range(100): model.step(): 26.5s +- 0.40s
set: %timeit for i in range(100): model.step(): 26.9s +- 0.20s

Running the code as-is (e.g. just pip installing over my code, so not building from github):

as-is: %timeit for i in range(100): model.step(): 91s +- 0.10s
Digging down a bit:
%prun for the "as-is" version (truncated for space):
187759 function calls (186459 primitive calls) in 0.974 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
2500 0.498 0.000 0.851 0.000 space.py:430(_place_agent)
1968 0.353 0.000 0.353 0.000 {method 'remove' of 'list' objects}
22500 0.032 0.000 0.054 0.000 space.py:128(iter_neighborhood)
42500 0.013 0.000 0.013 0.000 space.py:256(out_of_bounds)
22500 0.008 0.000 0.015 0.000 space.py:246(torus_adj)
2500 0.007 0.000 0.939 0.000
%prun for the numpy with list-for-empties version (truncated for space):
161657 function calls (160415 primitive calls) in 0.364 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
2500 0.065 0.000 0.087 0.000 arraysetops.py:299(_unique1d)
2500 0.045 0.000 0.239 0.000 space.py:197(_get_neighborhood)
2500 0.041 0.000 0.280 0.000 space.py:231(get_neighborhood)
2500 0.024 0.000 0.187 0.000 arraysetops.py:151(unique)
5000 0.018 0.000 0.022 0.000 {method 'view' of 'numpy.ndarray' objects}
2500 0.011 0.000 0.330 0.000
So the list.remove operation on the empties isn't the source of the slow-down, at least as far as I understand.
Okay, now I really should get back to real work :)
I ran %timeit with my numpy code using a self.empties of type set, list, and empty/keys-only dict as I had. I get fractions of a second difference.
Since there really seems to be no difference at all, I guess you changed something at the wrong location or did not reload the modules in between? It makes a lot of sense that large lists are considerably slower. If you run pos in self.empties, Python has to actually compare every list entry to pos, whereas for a set (or dict keys) it only has to hash it and look once.
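A quick way to see the difference yourself (illustrative micro-benchmark, not from the actual model):

# Membership test in a list vs. a set of positions.
import timeit

positions = [(x, y) for x in range(100) for y in range(100)]
as_list = list(positions)
as_set = set(positions)
target = (99, 99)  # worst case for the list: the last element

print(timeit.timeit(lambda: target in as_list, number=1000))  # O(n) scan every time
print(timeit.timeit(lambda: target in as_set, number=1000))   # O(1) hash lookup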
ncalls tottime percall cumtime percall filename:lineno(function)
2500 0.498 0.000 0.851 0.000 space.py:430(_place_agent)
1968 0.353 0.000 0.353 0.000 {method 'remove' of 'list' objects}
The "tottime" column tells you how much time was spent on each function itself. The rightmost column shows the file and line number of the function definition. The "cumtime" tells you how much time was spent on the function and its function calls. In your case _place_agent
took 0.851 seconds, of which 0.498 seconds by itself. That leaves 0.353 seconds, which is exactly the time spent for list-removes. (But you are correct that the second line would also include other list-removes, but in the example they only happen at _place_agent
). So out of the total 0.974 seconds, 0.851 seconds were spent on _place_agent
. If your numpy run would also use a list, this function should take about the same time, because the function should still be called for every agent.
You can also see that get_neighborhood
took a cumulative time of 0.06 seconds, whereas in your numpy version it took 0.28 seconds.
Additional profiling tip: you can install line_profiler and then run something like
%load_ext line_profiler
%lprun -f model.grid._get_neighborhood model.step()
This will tell you how much time (also in %) was spent on each line of a function. Quite helpful to really see where your time is going.
This is a great discussion -- and @dcunning11235, that looks really promising!
A few scattered thoughts:
@Corvince: I like your proposed API, especially the _of / _at distinction. Returning lists feels more streamlined, but iterators are more Python[3]ic.
Content seems unnecessary as a core feature of the abstract class. However, I think there's a place for a PatchGrid class, which would auto-populate each cell with an agent from a provided class. That kind of grid would work best as one of several layered spaces (i.e. a [Multi]Grid to hold agents that move around, and a PatchGrid to hold the patches). That way, agents could access other neighboring agents but not the patch, as well as just the patch but not the other agents.
@kumom wouldn't having MultiGrid be layered SingleGrids mean that ultimately every agent would need its own grid? Or am I misunderstanding what you were suggesting?
Just for clarification: I really don't want to discourage you @dcunning11235 ! I am just seriously doubtful about the potential performance gains under the current framework. I looked a bit at your code with the line_profiler and while there are some possible further performance gains, the same is true for the current implementation.
So I would rather encourage you not to play Frankenstein with the current Grid, but rather to create a pure-numpy Grid and think about how a "fresh" approach could really outperform anything else, even if that would mean some stronger assumptions. One example would be to disallow torus wrapping and not perform any out-of-bounds checks (they can still be performed on the "simulation side").
Or just focus on your LayeredSpace spec :-)
Oh no worries @Corvince, I take all comments/critiques/etc. as being constructive (well, I try anyway; but I took yours to be.) And yes (somewhere up-thread I think I mentioned this, too) I don't want to spend a lot of effort re-working the old framework if there is already talk and/or work going on to replace it.
@dmasad So in your thinking, would Patch be a subclass of Agent (from which further subclasses, e.g. SugarPatch, would be derived)? And would a PatchGrid or PatchSpace then ensure that only Patches were added to it? Or would just being positioned in a PatchGrid make an agent a patch? Either way, this solves the issue of separating 'real' agents from resources, true.
Still, I like the idea of Content that isn't an agent. One reason is that we could store information that isn't obviously an Agent: Patches with e.g. a constant growth function (say +1 each turn), or even simpler (there is 1 iron in the grid here, forever, or until some agent comes and takes it, at which point there is 0 iron forever). So rather than calling step() a million times, once for each agent in that PatchSpace, we could just e.g. implement it as a Numpy ndarray and make one (or a couple of) calls and let np handle it.
Of course, now that I am writing this I'm thinking about it in a new way. Maybe what I'm really thinking about is not Content, but a PatchSpace that just allows coordinates to be queried for their value. Possibly the underlying implementation is a Patch (aka an Agent that has a value), or possibly it is an ndarray that is updated by a function each tick, or possibly it's just a random value drawn from some distribution.
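A tiny sketch of that last idea (purely hypothetical names), where the values live in an ndarray, are queried per coordinate, and are updated in one call per tick:

# Hypothetical PatchSpace backed by an ndarray.
import numpy as np


class PatchSpace:
    def __init__(self, width: int, height: int, growth: int = 1, cap: int = 10) -> None:
        self.values = np.zeros((width, height), dtype=np.int64)
        self.growth = growth
        self.cap = cap

    def __getitem__(self, pos):
        # e.g. space[3, 4] -> current value at that coordinate
        return self.values[pos]

    def step(self) -> None:
        # one vectorized call for the whole grid instead of one step() per cell
        np.minimum(self.values + self.growth, self.cap, out=self.values)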
I was imagining a patch / patch-grid would work something like:
class WolfSheepModel(Model):
    def __init__(self, *args, **kwargs):
        # Other stuff
        self.grass_grid = PatchGrid(self.width, self.height, patch=GrassPatch, fill_all=True)

    def step(self):
        # Other stuff
        self.grass_grid.step_patches()
Then PatchGrid would automatically create a GrassPatch object in each cell. Something like PatchGrid.step_patches() would allow us to give the patches their own step method without needing to add all the patches to the scheduler.
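A rough guess at what that could look like internally (hypothetical code, ignoring the usual unique_id/model wiring of agents):

# Sketch of possible PatchGrid internals: create one patch per cell, step them directly.
import itertools


class PatchGrid:
    def __init__(self, width, height, patch, fill_all=True):
        self.patches = {}
        if fill_all:
            for pos in itertools.product(range(width), range(height)):
                # constructor arguments simplified for this sketch
                self.patches[pos] = patch()

    def __getitem__(self, pos):
        return self.patches[pos]

    def step_patches(self):
        # step every patch directly, without registering them in the scheduler
        for patch_obj in self.patches.values():
            patch_obj.step()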
Then, e.g, the sheep agent, instead of needing to search the cell contents for a grass agent, could just do:
# in Sheep.step
# ....
patch = self.model.grass_patch[self.pos]
if patch.grass > 0:
    self.energy += 1
    patch.grass -= 1
A layered grid would just create multiple grids wrapped in a single object, and pass shared parameters to all of them, e.g.:
self.grid = LayerGrid(self.width, self.height, torus=True,
                      layers={"animals": MultiGrid,
                              "grass": (PatchGrid, dict(patch_class=GrassPatch, fill_all=True))
                              })
That's a really nice idea for an (agent-based) patch grid @dmasad . However, it should definitely also be possible to have not only Patch agents, but also simple patch values. That is, for simple things like grass, it could simply be a (numpy) array of integers, which can "grow" all at the same time (i.e. no need to iterate over the whole grid).
And while the LayerGrid looks nice and simple enough, it remains to be discussed which functions it implements and how they would work (regarding querying specific layers).
And I think we should think about a more general LayerSpace. I think a more interesting feature would be to couple different space types, primarily thinking of grids and networks here.
The current implementation and this proposal do something like this:
def move_agent(self, agent: Agent, pos: Position) -> None:
    """Move an agent from its current to a new position."""
    old_pos = getattr(agent, "pos")
    del self[old_pos, agent]
    self[pos] = agent
    setattr(agent, "pos", pos)
I now think, however, that we should avoid side effects, such as changing/relying on the "pos" attribute of agents. I think it would be better for the space to keep a dictionary with an agent -> position mapping. This would also be handy to quickly query which agents are on the grid (or on which one, for layered grids).
And a move could simply be a removing from grid and adding somewhere else:
def move_agent(self, agent: Agent, pos: Position) -> None:
    """Move an agent from its current to a new position."""
    del self[agent]  # notice the lack of a position here (known to the space)
    self[pos] = agent
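For illustration, the bookkeeping behind this could look roughly like the following (hypothetical class and attribute names):

# Sketch: the space keeps its own agent -> position mapping, so no "pos"
# attribute on the agent is needed.
class DictBackedSpace:
    def __init__(self) -> None:
        self._cells = {}          # Position -> set of Agents
        self._agent_to_pos = {}   # Agent -> Position

    def __setitem__(self, pos, agent) -> None:
        self._cells.setdefault(pos, set()).add(agent)
        self._agent_to_pos[agent] = pos

    def __delitem__(self, agent) -> None:
        pos = self._agent_to_pos.pop(agent)  # the position is known to the space
        self._cells[pos].discard(agent)

    def move_agent(self, agent, pos) -> None:
        del self[agent]
        self[pos] = agent

    def agents(self):
        # handy side benefit: quickly query which agents are on the space
        return self._agent_to_pos.keys()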
I think you're right, we should remove this. E.g. an agent will ask its Model for its neighbors, not its _AbstractSpace; the Model is then responsible for asking its _AbstractSpace where the agent is and what its neighbors are, and then returning that data to the agent. That way a model is the (logical) hub between agents and spaces (or, in my somewhat more complex rendering, a Space/Metric and Agents.)
I've gone off and running with this on a branch, LayeredSpace_and_Patches, on my fork. I'm playing with quite a few changes and probably won't post descriptive details and "finished" code for several days/a week (but the link is there, so you can always look at my incomplete, broken code and half-formed thoughts if you'd really like).
What an amazing discussion this is!
Almost half a decade later I stumbled on the same problem: #1889. It's also uncanny how similar our proposed solutions are (NumPy arrays for information/property layers). I didn't see this issue until now, but I'm going to carefully examine it and try to use it to port potential solutions to my approach, and find the best of both.
Feel free to join the discussion in #1889!
Based on the discussion in #625 and on the mailing list, I started to implement my own version of a revised mesa space module 🙈 . And of course I went down the same pitfalls as the current implementation and @kumom's ideas. Thus I took a step back and thought about which space features are really essential.
Based on my considerations, I want to propose the basics of a typed abstract class with an example implementation for a basic grid.
The proposal uses three custom types: Position (Any), Content (Any), and the Agent class. For a (multi-)grid the first two could resolve to GridCoordinates = Tuple[int, int] and CellContent = Set[Agent].
This would mean that for every space implementation we have a clear definition of what a position is and what the content of a position looks like.
The beginning of the AbstractSpace could look something like this:
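(The original snippet is not reproduced in this excerpt. Based on the description below, i.e. abstract item access plus concrete place_/remove_/move_agent helpers, its beginning might have looked roughly like this; details may differ from the actual proposal.)

# Reconstruction sketch only; not the author's original code.
from abc import ABC, abstractmethod
from typing import Any

from mesa import Agent

Position = Any
Content = Any


class _AbstractSpace(ABC):
    @abstractmethod
    def __getitem__(self, pos: Position) -> Content:
        """Return the content at a position."""

    @abstractmethod
    def __setitem__(self, pos: Position, agent: Agent) -> None:
        """Place an agent at a position."""

    @abstractmethod
    def __delitem__(self, item) -> None:
        """Remove an agent from a position."""

    def place_agent(self, agent: Agent, pos: Position) -> None:
        """Place an agent at a position."""
        self[pos] = agent
        setattr(agent, "pos", pos)

    def remove_agent(self, agent: Agent, pos: Position) -> None:
        """Remove an agent from a position."""
        del self[pos, agent]

    def move_agent(self, agent: Agent, pos: Position) -> None:
        """Move an agent from its current to a new position."""
        old_pos = getattr(agent, "pos")
        del self[old_pos, agent]
        self[pos] = agent
        setattr(agent, "pos", pos)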
The @abstractmethod decorator means that we can only instantiate a subclass of _AbstractSpace if all those methods are implemented. You might realise that the place_, remove_ and move_agent functions don't have this annotation. They rather rely on a correct implementation of __getitem__, __setitem__ and __delitem__.
Here is how a complementary Grid might look:
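(Again, the actual snippet is missing from this excerpt; a dictionary-backed Grid consistent with the notes that follow, building on the _AbstractSpace sketch above, might be sketched as:)

# Reconstruction sketch only; not the author's original code.
from typing import Dict, Set, Tuple

from mesa import Agent

GridCoordinates = Tuple[int, int]
CellContent = Set[Agent]


class Grid(_AbstractSpace):
    def __init__(self, width: int, height: int) -> None:
        super().__init__()  # provides place_/remove_/move_agent
        self.width = width
        self.height = height
        self._cells: Dict[GridCoordinates, CellContent] = {}

    def __getitem__(self, pos: GridCoordinates) -> CellContent:
        return self._cells.get(pos, set())

    def __setitem__(self, pos: GridCoordinates, agent: Agent) -> None:
        self._cells.setdefault(pos, set()).add(agent)

    def __delitem__(self, item: Tuple[GridCoordinates, Agent]) -> None:
        pos, agent = item
        self._cells[pos].discard(agent)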
Notice that this is a different implementation for the grid that relies on a dictionary to store the cell contents.
Also notice that with the call to super().__init__() we make the place_, remove_ and move_agent functions available, and we are ready to use them as we like. More importantly, these functions can have the exact same call signature for every space we derive from _AbstractSpace. The idea here is to really separate what we want to do (e.g. place an agent) from how we do it (__setitem__).
So far, so good?
The really tricky part is which further functions to include, how to name them, and what their call signatures are. As a discussion starter I came up with the following:
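(The actual snippet is not included in this excerpt. Going by the notes that follow, the discussion-starter methods on _AbstractSpace were presumably along these lines; treat the exact signatures as a guess.)

# Reconstruction sketch only, reusing Position, Content and Agent from above.
from abc import abstractmethod
from typing import Iterator, Tuple

@abstractmethod
def agents_at(self, pos: Position) -> Iterator[Agent]:
    """Yield the agents at a position."""

@abstractmethod
def contents_at(self, pos: Position) -> Iterator[Content]:
    """Yield the content at a position."""

@abstractmethod
def neighbors_of(self, agent: Agent) -> Iterator[Agent]:
    """Yield the neighbors of an agent."""

@abstractmethod
def neighborhood_at(self, pos: Position, include_own: bool = True) -> Iterator[Position]:
    """Yield the neighborhood positions around a position."""

@abstractmethod
def neighborhood_of(self, agent: Agent, include_own: bool = True) -> Iterator[Tuple[Position, Content]]:
    """Yield (position, content) pairs around an agent."""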
First of all, as a naming convention I use *_at for functions that have a position as their first argument and *_of for functions that have an agent as their first argument.
Then, all functions yield an iterator (i.e. most likely are generator functions). I think this is the best choice both for performance and flexibility. However, one could argue whether this is necessary for cases when the iterator would only contain a single element. Since this is quite often the case, the return types could also be something like Union[Agent, Iterator[Agent]], but of course this will be less precise.
It is also currently not clear why neighborhood_at returns only positions, while neighborhood_of returns a tuple of position and content (or possibly Iterator[Tuple[Position], Iterator[Content]] or even Iterator[Iterator[Tuple[Position, Content]]]). I have gone some rounds with what could be the best solution, but I am still not sure, hence this discussion.
And of course we can discuss whether these functions actually make sense for all types of Space classes, or which further functions should be required (size? empty positions? default values?).
One last note: in this proposal I made a conscious distinction between Content and Agent. But I don't think we have a class in mesa yet that supports anything other than agents as content. I think we should, though, and we could think about how that would look. For example, the content could always be a dictionary with at least an "agents" key which holds a Set of agents.