until now, any thread crash would show as a generic crash, since we weren't able to get the trace from the crashed thread directly. This uses some dirty tricks to export a partially serialized stack trace to the main thread, where it can be written into a crashdump.
This enables us to see proper crash information for async tasks in the crash archive (finally!!!), and also to capture RakLib errors properly.
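the gist of the trick, as a minimal sketch using the classic pthreads API (names like $crashInfo are invented for illustration; the real implementation is more involved and exports more structured data): objects can't cross the thread boundary, so the trace has to be flattened into plain strings before the main thread can read it.

```php
<?php

class ExampleThread extends \Thread{

	/** @var string|null trace exported as a plain string, readable by the main thread */
	public $crashInfo = null;

	public function run(){
		try{
			$this->doWork();
		}catch(\Throwable $e){
			//objects in a trace can't cross thread boundaries, so the
			//trace must be flattened into serializable primitives first
			$this->crashInfo = $e->getMessage() . "\n" . $e->getTraceAsString();
		}
	}

	private function doWork() : void{
		throw new \RuntimeException("simulated worker thread crash");
	}
}

$thread = new ExampleThread();
$thread->start();
$thread->join();

if($thread->crashInfo !== null){
	//main thread: this is where the exported trace would go into the crashdump
	echo "worker crashed:\n" . $thread->crashInfo . "\n";
}
```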
the motivation for this is described in #5917
a new version of DevTools will be required: with this change, the current version will cause the server to abort during startup due to duplicated plugin loading.
the only use for this class is to facilitate loading arbitrary plugins at runtime, and it's not complete even for that purpose.
Since nothing but PM uses pocketmine/classloader anyway, it doesn't make sense to keep it outside the core. As with LogPthreads, it just adds more maintenance work.
this was previously part of the abandoned package pocketmine/spl. It had to be separated out in the PM3 days because RakLib depended on it.
As of RakLib 0.13, RakLib is no longer dependent on, or even aware of, pthreads, so it doesn't depend on any thread-related packages anymore.
With this in mind, it's also possible to absorb pocketmine/snooze and pocketmine/classloader back into the core.
due to the large performance cost of instantiating AsyncTasks, it's usually not worth bothering with async compression except for very large packets.
While this overhead can be significantly reduced by using specialized threads, such improvements are still in the early stages of testing, and for now we still have this cost to deal with.
Since async compression is always used prior to player spawn, this change may slightly improve the performance of the pre-join stage of the game.
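the decision boils down to a size check; a minimal sketch, assuming a made-up threshold value and helper name (PM's real threshold and plumbing differ):

```php
<?php

//illustrative only - this constant and helper are invented for the sketch
const ASYNC_COMPRESSION_THRESHOLD = 10_000; //bytes

function shouldCompressAsync(string $buffer) : bool{
	//an AsyncTask only pays off when the buffer is large enough that
	//compressing it costs more than instantiating the task itself
	return strlen($buffer) >= ASYNC_COMPRESSION_THRESHOLD;
}

$packet = str_repeat("a", 512);
if(shouldCompressAsync($packet)){
	//large buffer: hand it off to a worker thread
}else{
	//small buffer: compress inline on the main thread
	$compressed = zlib_encode($packet, ZLIB_ENCODING_RAW, 7);
}
```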
there's a bunch of places we can't reach with this right now:
- particles
- sounds
- tile NBT
- entity metadata
- crafting data cache
- chunk encoding
- world block update encoding
this is a work in progress, but ultimately we want to get rid of these singletons entirely.
this has been a bug ever since Snooze was first introduced. The load statistic, like timings, did not account for time spent processing notifications between ticks. The problem is that this is often where a significant amount of the load actually comes from, because Snooze is most often activated by incoming packets.
This change fixes the problem by including the time spent processing notifications since the previous tick in the current tick's usage metric.
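in other words (with invented names, assuming the standard 50ms tick interval):

```php
<?php

const TICK_INTERVAL_NS = 50_000_000; //20 TPS = 50ms per tick

function tickUsage(int $tickProcessingNs, int $notificationProcessingNs) : float{
	//before the fix, usage was effectively $tickProcessingNs / TICK_INTERVAL_NS,
	//ignoring any work done handling Snooze notifications between ticks
	return ($tickProcessingNs + $notificationProcessingNs) / TICK_INTERVAL_NS;
}

//a tick that took 20ms, preceded by 25ms of packet-driven notification
//processing, is really 90% load - the old metric would have reported 40%
echo (tickUsage(20_000_000, 25_000_000) * 100) . "%\n"; //90%
```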