Still seems to be happening, reported by @Rulasmur just now from the Alpha 5 multiplayer test event
Still happening - seen during Alpha 5.5 testing, and reported separately by @qwc on the same server earlier (noted in #2590)
12:25:11.176 [main] ERROR o.t.p.typeHandling.Serializer - No type handler for type class org.terasology.assets.ResourceUrn used by class org.terasology.eventualSkills.events.SkillTrainedOwnerEvent::skillTrained
12:27:21.128 [main] ERROR o.t.l.debug.ChunkEventErrorLogger - Multiple loads of chunk (-1, 0, -7)
12:27:23.958 [main] ERROR o.t.e.event.internal.EventSystemImpl - Failed to invoke event
java.lang.NullPointerException: null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:212)
Update: While the "multiple loads of chunk x" error still occurs (and may have actually gotten worse, since it now also triggers heavily when replicating #3006), the null issue in #2590 is resolved!
We could still improve the logging around multiple loads, I expect, so this issue remains valid, just without the crash we just resolved. For instance, couldn't we track the reason a chunk is requested for loading? It would be swell to see something like "Chunk x dupe loading: requester was player y entering range at z".
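For what it's worth, here's a minimal Java sketch of that idea, assuming a hypothetical ChunkLoadTracker that the relevance/load path would call into. None of these class or method names exist in the engine; they only illustrate tagging each request with its reason, and ChunkPos stands in for the engine's chunk position vector:

```java
// Hypothetical sketch: tag every chunk load request with the reason it was made,
// so a duplicate request can be reported with both requesters instead of just the position.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class ChunkLoadTracker {

    private static final Logger logger = LoggerFactory.getLogger(ChunkLoadTracker.class);

    /** Immutable chunk position, standing in for the engine's vector type. */
    public record ChunkPos(int x, int y, int z) { }

    /** Why a chunk was requested, e.g. requester "player Rulasmur", cause "entering relevance range at (12, 0, -87)". */
    public record LoadReason(String requester, String cause) { }

    private final Map<ChunkPos, LoadReason> pendingLoads = new ConcurrentHashMap<>();

    /**
     * Records a load request. If the chunk was already requested, logs both the original
     * and the duplicate requester - exactly the information the current log line is missing.
     */
    public void requestLoad(ChunkPos pos, LoadReason reason) {
        LoadReason previous = pendingLoads.putIfAbsent(pos, reason);
        if (previous != null) {
            logger.warn("Chunk {} dupe loading: first requester was {} ({}), duplicate from {} ({})",
                    pos, previous.requester(), previous.cause(), reason.requester(), reason.cause());
        }
    }

    /** Clears the bookkeeping once the chunk has actually loaded (or the request was cancelled). */
    public void completeLoad(ChunkPos pos) {
        pendingLoads.remove(pos);
    }
}
```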
One way to replicate this now, with some intense log spam, is to wander a ways off from spawn (maybe only on a headless server?), then suicide and respawn. Possibly the exact range matters: I suspect at least once I hit this when the chunks loaded around my death location still reached all the way back to spawn.
So while #3792 was only a possible fix for this issue, the GitHub auto-close feature just saw "fix for" ;-)
During testing of that PR I did still provoke "Multiple loads of chunk (0, 0, 0)", but it didn't result in a crash, and I haven't noticed this outright crash for some time. Going to leave this closed and 🤞 that it is gone.
During a multiplayer test event we had a few crashes resulting in an error like http://pastebin.com/xeNM3uCn (snippet inlined below) or https://gist.github.com/rzats/b16a7fd41c0b37033d5de5079500e8c7, in which seemingly duplicate chunk loading precedes an error where a chunk comparison ends up with a null chunk (a cancelled dupe?).
That's just an unproven theory based on the lines of code involved, but we probably need to figure out why we're getting into loading multiple chunks in the first place. The logging doesn't yield a lot of intel to help figure that out; maybe we can improve it to get better ideas in the future?
The spot in ChunkEventErrorLogger has access to the EntityRef worldEntity, so maybe we could log some more and try to detect what player or other source caused the chunk load request. Or, better yet, defeat the multiple loads without risking crashes elsewhere? I'm really unsure on this one, but it did happen multiple times, and I've seen the error get logged infrequently over the past few weeks, likely even longer. Chunk loading / sorting for prioritization could use some work, based on some vague old memories.
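On the detection side, here's a hedged sketch of what richer duplicate-load logging could look like in a ChunkEventErrorLogger-style class. The source parameter is an assumed extra hook for whoever triggered the load (the real class only tracks chunk positions), and ChunkPos again stands in for the engine's vector type:

```java
// Sketch of richer duplicate-load detection: same "Multiple loads of chunk" check as today,
// but also naming the source (player, relevance region, respawn, ...) that triggered the load.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class VerboseChunkEventErrorLogger {

    private static final Logger logger = LoggerFactory.getLogger(VerboseChunkEventErrorLogger.class);

    /** Simple stand-in for the engine's chunk position vector. */
    public record ChunkPos(int x, int y, int z) { }

    private final Set<ChunkPos> loadedChunks = ConcurrentHashMap.newKeySet();

    /**
     * Call on every chunk-loaded event. Logs the familiar error when a chunk is loaded twice,
     * plus whatever description of the requester the caller can provide.
     */
    public void onChunkLoaded(ChunkPos pos, String source) {
        if (!loadedChunks.add(pos)) {
            logger.error("Multiple loads of chunk {} - duplicate triggered by {}", pos, source);
        }
    }

    /** Call when a chunk is unloaded so a later, legitimate reload isn't flagged. */
    public void onChunkUnloaded(ChunkPos pos) {
        loadedChunks.remove(pos);
    }
}
```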
Possibly related: #2385