Closed Namek closed 9 years ago
You've just made plans for us! ;)
Something like implementing LibGDX's Poolable on events? What would be useful for you? It shouldn't be too hard to retool FastEventDispatcher for event pooling.
Event event = PooledEventDispatcher.prepare(MyEvent.class)
?
static public <T extends Event> T prepare(Class<T> type) {
    Pool<T> pool = Pools.get(type);
    T node = pool.obtain();
    node.setPool(pool);
    return node;
}
and then releasing after all listeners have been called in #dispatch()
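That obtain-then-free lifecycle can be sketched with stdlib types only. The class and method names below are stand-ins for illustration, not the contrib API, and the pool is just a deque instead of a LibGDX Pool:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in event type for illustration only.
class MyEvent {
    public int value;
}

// Illustrative dispatcher: obtain from a pool, free after all listeners ran.
class PooledEventDispatcher {
    private final ArrayDeque<MyEvent> pool = new ArrayDeque<>();
    private final List<Consumer<MyEvent>> listeners = new ArrayList<>();

    // Reuse a pooled instance when available, otherwise allocate.
    public MyEvent prepare() {
        MyEvent e = pool.poll();
        return e != null ? e : new MyEvent();
    }

    public void register(Consumer<MyEvent> listener) {
        listeners.add(listener);
    }

    // Deliver to every listener, then release the event back to the pool.
    public void dispatch(MyEvent event) {
        for (Consumer<MyEvent> l : listeners) {
            l.accept(event);
        }
        pool.offer(event);
    }
}
```

The key point is that the event is only returned to the pool at the end of dispatch, so every listener sees a valid instance.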
Yes, I meant pooling like in gdx.
Yes, I meant pooling like in gdx.
Currently I made something I call Signaling: events.signal(Signals.EXPLOSION), where the argument is an integer. Internally it just fills a SignalEvent with the given integer and fires it as a regular event. The implementation goes here:
public final class Signal implements Event {
    public int code;
}
public class SignalAndEventDispatcher implements EventDispatchStrategy {
    protected final FastEventDispatcher eventDispatcher = new FastEventDispatcher();

    // Note: identity comparison on boxed Integer keys only behaves like equals()
    // for values inside the Integer cache (typically -128..127); a plain HashMap
    // or an int-keyed map is safer here.
    final IdentityHashMap<Integer, Bag<EventListener>> signalListeners = new IdentityHashMap<Integer, Bag<EventListener>>();

    @Override
    public void register(EventListener listener) {
        if (listener instanceof SignalListener) {
            final SignalListener signalListener = (SignalListener) listener;
            Bag<EventListener> listeners = this.signalListeners.get(signalListener.signalCode);
            if (listeners == null) {
                listeners = new Bag<EventListener>();
                this.signalListeners.put(signalListener.signalCode, listeners);
            }
            listeners.add(signalListener);
        } else {
            eventDispatcher.register(listener);
        }
    }

    @Override
    public void dispatch(Event event) {
        if (event instanceof Signal) {
            final Signal signal = (Signal) event;
            Bag<EventListener> listeners = this.signalListeners.get(signal.code);
            if (listeners != null) { // was "signalListeners != null", but the map itself is never null
                Object[] data = listeners.getData();
                for (int i = 0, n = listeners.size(); i < n; ++i) {
                    final SignalListener listener = (SignalListener) data[i];
                    if (listener != null) {
                        listener.handle(signal);
                    }
                }
            }
        } else {
            eventDispatcher.dispatch(event);
        }
    }
}
public class SignalAndEventFinder implements ListenerFinderStrategy {
    private SubscribeAnnotationFinder eventListenersFinder;

    public SignalAndEventFinder() {
        eventListenersFinder = new SubscribeAnnotationFinder();
    }

    @Override
    public List<EventListener> resolve(Object o) {
        // Find event listeners
        final List<EventListener> listeners = eventListenersFinder.resolve(o);

        // Find signal listeners
        for (Method method : ClassReflection.getDeclaredMethods(o.getClass())) {
            if (method.isAnnotationPresent(SubscribeSignal.class)) {
                final Annotation declaredAnnotation = method.getDeclaredAnnotation(SubscribeSignal.class);
                if (declaredAnnotation != null) {
                    final SubscribeSignal signalCode = declaredAnnotation.getAnnotation(SubscribeSignal.class);
                    listeners.add(new SignalListener(o, method, signalCode.value()));
                }
            }
        }
        return listeners;
    }
}
public class SignalAndEventManager extends EventManager {
    private final Signal signal = new Signal();

    public SignalAndEventManager() {
        super(new SignalAndEventDispatcher(), new SignalAndEventFinder());
    }

    public void signal(int code) {
        signal.code = code;
        this.dispatch(signal);
    }
}
public class SignalListener extends EventListener {
    public final int signalCode;

    public SignalListener(Object object, Method method, int code) {
        super(object, method);
        signalCode = code;
    }
}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SubscribeSignal {
    int value();
}
The implementation above has the following drawback: the same SignalEvent object is used for every signal. What I was used to (with a custom AS3 engine at work) is using the event system pretty frequently. I mean, really frequently.
Example:
events.fire(new ExplosionEvent(point, targetType))
The difference between this and a Signal is that I can put custom parameters into it; my simple signals can't carry any.
Since I want to avoid the redundant work of garbage collection (after hearing some rumors about Android's Dalvik, and after experiencing it myself with Adobe AIR; that was ActionScript on Adobe's VM, not Java, but whatever), I can't create a new event object for every explosion. Instead, I'd like to do something like:
ExplosionEvent evt = events.getEvent().setup(point, targetType);
events.fire(evt);
which unfortunately is less convenient. It would be best to have some one-liner...
Well, it's not like I couldn't do it on my own, but it would be another library plus my custom wrapper on top of it. And I'm not sure about your plans for asynchronous dispatching of events (I'm not a fan of those; they often break a lot). While pooling for a synchronous dispatcher is easy to do just by extending the dispatcher, for an asynchronous one it is not (I think). Anyway, I'm just asking about your intentions. Any plans to put it into the repo, or do I have to do it on my own for now? I planned to just use the Pool class from gdx.
Yes plans, no timetable though. I'm happy to take PRs btw.
@junkdog and I have been bouncing some ideas about concurrency around, and the conclusion for now seems to be to leave it up to the user, outside the Artemis layer. (defer to system inner workings).
I'm all for implementing pooling and polling strategies for events. I think they fit Artemis/ECS perfectly and will cover most use cases within Artemis' abstraction.
For me, asynchronous events do not make much sense within the ECS layer itself. I prefer one point of entry and a predictable order of operations. https://github.com/junkdog/artemis-odb/issues/221 would be the first step towards polling.
N worlds, or dispatching to listeners outside Artemis, makes async more relevant. It's more of an edge case as far as I'm concerned; I'll probably wait for a PR for an async event handler.
To shorten things, you can always consider a static Pool field on each event class.. ;) That way you could do .dispatch(ExplosionEvent.of(x,y)). You could even consider tracking listeners on a static event-class field for extra performance, to skip the listener lookup cost.
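A minimal sketch of that static-pool-per-event-class idea, using only a stdlib deque as the pool. ExplosionEvent, of and free are illustrative names, not the contrib API; a real dispatcher would call free() after all listeners ran:

```java
import java.util.ArrayDeque;

// Hypothetical event class with its own static pool, enabling the one-liner
// dispatch(ExplosionEvent.of(x, y)).
class ExplosionEvent {
    private static final ArrayDeque<ExplosionEvent> POOL = new ArrayDeque<>();

    public float x, y;

    // Reuse a pooled instance when available, otherwise allocate a new one.
    public static ExplosionEvent of(float x, float y) {
        ExplosionEvent e = POOL.poll();
        if (e == null) {
            e = new ExplosionEvent();
        }
        e.x = x;
        e.y = y;
        return e;
    }

    // The dispatcher would call this once every listener has run.
    public void free() {
        POOL.offer(this);
    }
}
```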
Nb. Quick google tells me there are some ways to do async pools. The question is if it'll be better than taking the GC hit. Since your signals are basically just ints I imagine there might be very good solutions to make a multi threaded solution? Like some form of non blocking array based buffer?
Let me know if you want to PR a pooled event manager, or wait for me to whip something up. We'll probably want to make something to benchmark it as well just to be sure it performs better. ;)
I'm not sure what you mean by polling. Are you talking about collecting events and dispatching them when it's the EventSystem's turn in the World process queue? I mean, right now events are dispatched immediately. By polling, do you mean collecting them and dispatching all at once (once per World process)?
If yes, then why such an approach? Why is it better than dispatching events immediately? I'd like to avoid the situation where you push 10000 events at once and don't want to serve them all in one frame, because I don't see a reason to do such things; these aren't operating systems, they're just games :dart:
Nb. Quick google tells me there are some ways to do async pools. The question is if it'll be better than taking the GC hit. Since your signals are basically just ints I imagine there might be very good solutions to make a multi threaded solution? Like some form of non blocking array based buffer?
I'm not really interested in async right now. Sure, as you stated "dispatching to listeners outside Artemis makes async more relevant" but for now there's a small chance that I will need it, maybe in future. And those signal ints are not enough too often.
To shorten things, you can always consider a static Pool field on each event class.. ;) That way you could do .dispatch(ExplosionEvent.of(x,y)). You could even consider tracking listeners on a static event-class field for extra performance, to skip the listener lookup cost.
That's what I did in previous engine.
I'm happy to take PRs btw.
I'll sit and will decide what to do because I need it right now in project. Not sure about PR:s I'll take that into consideration. Stay tuned.
I'm not sure what you mean by polling.
Polling in the traditional sense, where the system polls for events at the start of each invocation. (Abstracted away for cleanliness!)
If yes, then why such an approach? Why is it better than dispatching events immediately? I'd like to avoid the situation where you push 10000 events at once and don't want to serve them all in one frame, because I don't see a reason to do such things; these aren't operating systems, they're just games :dart:
Whether it's better depends on the person using it, I'd say. I had the same reaction initially: why would you ever want to poll? It's something that comes about when trying to unite the normal system invocation flow with the flow created by events.
Take synchronous events without polling. On one hand you create highly specialized systems that each perform a discrete operation in a predictable order. On the other hand, events introduce additional points of entry and a totally different flow. It's a bit of a smell. For me it leads to systems that do more than one operation and are nothing more than glorified entity bags. No problem if you have the discipline to avoid that, obviously ;)
Plus I can no longer make assumptions about the state of my entities based on the system order. For example, my event-triggered operation could be dealing with entities that have moved but haven't been bounds-checked yet. The source system of the dispatch has not finished processing yet, so you might accidentally cause issues there too, depending on your setup.
I believe a system invocation should be predictable, respect order of operations, and have either a world tick or an event as a precondition for processing, but not both (except for events which are just there to change system state). Polling is a step towards that. Basically an EventEntitySystem.
A variant would be delayed invocation, where you'd queue events and fire the event handling systems immediately after the current system has stopped processing.
I'm not really interested in async right now.
Good ! That saves a lot of time. ;)
I'll sit and will decide what to do because I need it right now in project. Not sure about PR:s I'll take that into consideration. Stay tuned.
What kind of quantity of events are we talking about here?
My experience #1 with polling synchronous events is that you have to decide on it before you make a game, not right before you want to publish it, because changing it breaks many things (most of them hacks) :D
My second experience is the same as you noted - "discipline" (did I mention hacks already?)
Thanks for your input on that topic!
What kind of quantity of events are we talking about here?
If you're asking how often I would dispatch events, I'm not sure right now, but it's a shooter. I'd like to inform the whole World about every collision, explosion or whatever else happens often. For example, an explosion detected in BulletCollisionSystem should inform CameraSystem so the camera can shake a little. Of course it depends how far the explosion is from the main (player's) avatar on screen, SO I need the explosion position as an event parameter here.
No draw call events though, that limits the performance needed ;)
Interesting. I was thinking only about logical events. Well, it will probably trigger some animations pretty often. Not sure about "draw call events" (I know what draw calls on a graphics card are).
I just meant events to dispatch render requests. Just trying to come up with extremely high event quantities. Not sure that's even a thing. XD
Do you agree with such simple changes as:
1. Rename EventManager to EventSystem, extending VoidEntitySystem.
2.
    private final Bag<Event> eventQueue = new Bag<Event>();

    /**
     * Queue an event to dispatch synchronously.
     */
    public void dispatch(Event event) {
        eventQueue.add(event);
    }
3.
    @Override
    protected void processSystem() {
        Object[] eventsToDispatch = eventQueue.getData();
        for (int i = 0, s = eventQueue.size(); i < s; i++) {
            Event event = (Event) eventsToDispatch[i];
            dispatcherStrategy.dispatch(event);
        }
        eventQueue.clear();
    }
?
I can't see any benefit in making something like FastPollingEventDispatcher; I think it's just better to change #dispatch as shown above (yes, your earlier arguments convinced me).
/**
 * Obtain a pooled event and queue it to dispatch synchronously.
 */
public <T extends Event> T dispatch(Class<T> eventClass)
{
    T event = Pools.obtain(eventClass);
    eventQueue.add(event);
    return event;
}

@Override
protected void processSystem()
{
    Object[] eventsToDispatch = eventQueue.getData();
    for (int i = 0, s = eventQueue.size(); i < s; i++) {
        Event event = (Event) eventsToDispatch[i];
        dispatcherStrategy.dispatch(event);
        Pools.free(event);
    }
    eventQueue.clear();
}
I was thinking about having both dispatch(Class<T> eventClass) and dispatch(Event event), but then we'd have the problem of whether the event should be freed into the Pool or not.
Example usage:
public class ExplosionEvent implements Event {
    public Vector2 position = new Vector2();
    public boolean isWall;

    public ExplosionEvent setup(Vector2 position, boolean isWall) {
        this.position.set(position);
        this.isWall = isWall;
        return this;
    }
}

events.dispatch(ExplosionEvent.class).setup(explosionPosition, isWall);
Benchmark Mode Samples Score Score error Units
BaselineDispatcherBenchmark.eventWithFiftyListeners thrpt 20 148623404,733 2650565,988 ops/s
BaselineDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 149985759,617 2880158,762 ops/s
BaselineDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 94446085,778 1524018,403 ops/s
BaselineDispatcherBenchmark.eventWithMixedCalls thrpt 20 164059933,556 9600090,839 ops/s
BaselineDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 147218026,582 4342800,864 ops/s
BasicDispatcherBenchmark.eventWithFiftyListeners thrpt 20 44219,037 862,645 ops/s
BasicDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 51664,087 518,384 ops/s
BasicDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 51212,196 1119,446 ops/s
BasicDispatcherBenchmark.eventWithMixedCalls thrpt 20 37462,920 590,478 ops/s
BasicDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 37710,059 490,335 ops/s
FastDispatcherBenchmark.eventWithFiftyListeners thrpt 20 430959,889 11986,898 ops/s
FastDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 9334572,865 70996,607 ops/s
FastDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 2186740,448 16909,990 ops/s
FastDispatcherBenchmark.eventWithMixedCalls thrpt 20 8692988,711 60877,009 ops/s
FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 8971982,372 355224,230 ops/s
PollingPooledDispatcherBenchmark.eventWithFiftyListeners thrpt 20 401927,959 8890,918 ops/s
PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 3703210,051 91723,174 ops/s
PollingPooledDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 1472077,252 47631,523 ops/s
PollingPooledDispatcherBenchmark.eventWithMixedCalls thrpt 20 3571915,114 60330,061 ops/s
PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 3574238,791 48665,465 ops/s
Don't put too much trust in the benchmarks btw. ;)
Used a dispatch batch size of 1000 for PollingPooledDispatcher.
You can run the benchmark by packaging the main project and running artemis-odb-contrib\contrib-benchmark\target>java -jar microbenchmarks.jar
@Namek On the plus side, look at this giant block of text and see how nice it is on the GC.
contrib-benchmark\target>java -jar microbenchmarks.jar -prof GC
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners thrpt 20 417484,056 18128,436 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners:@gc.count.profiled thrpt 20 1462,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners:@gc.count.total thrpt 20 1839,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners:@gc.time.profiled thrpt 20 41200,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners:@gc.time.total thrpt 20 51900,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 9184701,626 104752,378 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.count.profiled thrpt 20 1055,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.count.total thrpt 20 1313,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.time.profiled thrpt 20 24600,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.time.total thrpt 20 31300,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 2066867,368 27334,843 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners:@gc.count.profiled thrpt 20 1504,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners:@gc.count.total thrpt 20 1882,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners:@gc.time.profiled thrpt 20 34800,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners:@gc.time.total thrpt 20 44200,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls thrpt 20 8057089,684 227782,677 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls:@gc.count.profiled thrpt 20 929,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls:@gc.count.total thrpt 20 1164,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls:@gc.time.profiled thrpt 20 23500,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls:@gc.time.total thrpt 20 29800,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 9177697,157 244408,249 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.count.profiled thrpt 20 1058,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.count.total thrpt 20 1319,000 0,000 counts
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.time.profiled thrpt 20 25900,000 0,000 ms
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.time.total thrpt 20 33000,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners thrpt 20 407677,705 13557,084 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners:@gc.count.profiled thrpt 20 1413,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners:@gc.count.total thrpt 20 1762,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners:@gc.time.profiled thrpt 20 43900,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners:@gc.time.total thrpt 20 55100,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 20 3688583,139 61769,511 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.count.profiled thrpt 20 283,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.count.total thrpt 20 355,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.time.profiled thrpt 20 7900,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler:@gc.time.total thrpt 20 10600,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners thrpt 20 1454385,224 41703,789 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners:@gc.count.profiled thrpt 20 1007,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners:@gc.count.total thrpt 20 1263,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners:@gc.time.profiled thrpt 20 27300,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners:@gc.time.total thrpt 20 34500,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls thrpt 20 3471208,205 37089,793 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls:@gc.count.profiled thrpt 20 267,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls:@gc.count.total thrpt 20 332,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls:@gc.time.profiled thrpt 20 7500,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls:@gc.time.total thrpt 20 10200,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 20 3630505,383 20544,365 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.count.profiled thrpt 20 278,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.count.total thrpt 20 347,000 0,000 counts
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.time.profiled thrpt 20 7600,000 0,000 ms
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler:@gc.time.total thrpt 20 10300,000 0,000 ms
I assume the main cost is primarily the IdentityHashMap lookups for the pools?
Here's the performance when caching the last pool, which cheats these specific benchmarks a bit of course, but it doubles throughput in some cases. (Ran with a really short benchmark; got sick of waiting ;)
Nb. eventWithMixedCalls alternates events, so it invalidates the cache.
public class PoolsCollection {
    // Fields implied by the snippet but not shown in the original:
    // a type-to-pool map plus a one-entry cache of the last lookup.
    private final IdentityHashMap<Class<?>, ObjectPool<?>> pools = new IdentityHashMap<Class<?>, ObjectPool<?>>();
    private Class<?> lastType;
    private ObjectPool<?> lastPool;

    @SuppressWarnings("unchecked")
    public <T> ObjectPool<T> getPool(Class<T> type) {
        if (type == lastType) {
            return (ObjectPool<T>) lastPool;
        }
        ObjectPool<?> pool = pools.get(type);
        if (pool == null) {
            pool = new ReflectionPool<T>(type);
            pools.put(type, pool);
        }
        lastType = type;
        lastPool = pool;
        return (ObjectPool<T>) pool;
    }
}
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners thrpt 5 434852,006 43370,070 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 5 8060799,778 1850835,668 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners thrpt 5 1976979,137 721793,505 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls thrpt 5 7846762,806 618288,499 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 5 9114365,725 1043321,556 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners thrpt 5 417207,046 60385,813 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 5 6885580,563 757038,407 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners thrpt 5 1785353,773 199085,573 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls thrpt 5 3401552,584 436272,671 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 5 6635957,308 1044915,807 ops/s
For giggles, replaced IdentityHashMap with LibGDX ObjectMap in the pool lookup and fast dispatcher.
Benchmark Mode Samples Score Score error Units
n.m.a.e.d.FastDispatcherBenchmark.eventWithFiftyListeners thrpt 5 443618,075 49793,128 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 5 16458088,523 2540365,472 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithManySubclassListeners thrpt 5 2342797,674 351073,224 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithMixedCalls thrpt 5 14106843,931 78707,088 ops/s
n.m.a.e.d.FastDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 5 16913645,889 1900837,790 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithFiftyListeners thrpt 5 413061,664 3343,491 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithHierarchyAndOneHandler thrpt 5 8482162,936 44997,959 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithManySubclassListeners thrpt 5 1911805,290 37479,507 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithMixedCalls thrpt 5 7566808,709 970916,233 ops/s
n.m.a.e.d.PollingPooledDispatcherBenchmark.eventWithNoHierarchyAndOneHandler thrpt 5 8361146,479 313470,322 ops/s
Yeah, I was afraid about IdentityHashMap performance. I've seen it somewhere in artemis-odb, that's why I chose it. That cheat with lastType doesn't sound good. Any alternatives without using LibGDX (well, I use it anyway, but it seems artemis-odb-contrib doesn't)?
The lastType is just a hack for demonstration purposes.
The core and event modules are LibGDX-agnostic. -Jam uses LibGDX though. If only LibGDX would package its utilities separately.
Have you seen ObjectMap code? It's voodoo inside.
Yeah, I've been looking at ObjectMap a few times... can't we just take it and copy it into contrib?
Ummm. It looks like my implementation sucks in performance terms anyway :D Actually, I didn't want to improve overall performance but rather defend myself from ugly garbage collection.
Poked @junkdog about it, I assume if it outperforms he'll be very eager to implement a similar solution for Artemis-odb as well ;)
Hmmm, there's one more thing. In your benchmark, process() is triggered after each dispatch. That clears the event queue, which internally is an Arrays.fill(data, 0, size, null). The whole point of polling is to batch all those dispatches, do them all at once, and then clear the queue once. Do you think that changes anything in an important way?
I thought I added loops for each benchmark?
Pushed missing commits, sorry.
I only looked at this commit: https://github.com/DaanVanYperen/artemis-odb-contrib/commit/ae1afdf8065eddb0b122812a17539f50019520c7
Ah that's the unit test. Benchmarks are here: https://github.com/DaanVanYperen/artemis-odb-contrib/blob/develop/contrib-benchmark/src/main/java/net/mostlyoriginal/api/event/dispatcher/ClassBasedDispatcherBenchmark.java
OK, thanks. Any more requests about it? I won't be shooting thousands of events, so I'm satisfied anyway. IdentityHashMap could be replaced, though.
Ummm. It looks like my implementation sucks in performance terms anyway :D Actually, I didn't want to improve overall performance but rather defend myself from ugly garbage collection.
I don't think it's bad at all, especially since it does both queueing and pooling. Plus the benchmarks focus on a very small part of the process; the limit with events is always going to be the code in the listeners anyway.
OK, thanks. Any more requests about it? I won't be shooting thousands of events, so I'm satisfied anyway. IdentityHashMap could be replaced, though.
I think it's a great addition, thanks for putting the time in! I'll harass @junkdog for a JunkDogObjectMap
The thing still lacking here is a PollingEventDispatcher extends FastEventDispatcher - a version without pooling at all.
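A rough stdlib-only sketch of what such a non-pooling polling dispatcher could look like. The names are hypothetical; the real class would extend FastEventDispatcher and reuse its listener tables, while this sketch only shows the queue-then-poll behavior:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: queue events as-is (no pooling) and deliver them in order only
// when poll() runs, then clear the queue once per tick.
class PollingEventDispatcher<E> {
    private final List<E> queue = new ArrayList<>();
    private final List<Consumer<E>> listeners = new ArrayList<>();

    public void register(Consumer<E> listener) {
        listeners.add(listener);
    }

    // Defer delivery instead of dispatching immediately.
    public void dispatch(E event) {
        queue.add(event);
    }

    // Deliver every queued event to every listener, then clear the queue.
    public void poll() {
        for (int i = 0; i < queue.size(); i++) {
            E event = queue.get(i);
            for (Consumer<E> l : listeners) {
                l.accept(event);
            }
        }
        queue.clear();
    }
}
```

Since events are caller-allocated and never freed into a pool, this variant sidesteps the "who frees the event" question raised earlier, at the cost of per-event garbage.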
Hey, @DaanVanYperen. Do you have any plans or solutions for object pooling in EventManager?